Visualization and Interactions
This theme gathers the research group's activities around human-computer interaction in immersive environments and the visualisation of complex 3D models. The main goal of our visualisation studies is to design and implement new methods for computing realistic and/or intelligible images from numerical representations of 3D models. Our work on interaction in virtual environments builds on this expertise in visualisation and 3D modelling. Based on the work of these themes, we developed a 3D modelling platform within an immersive environment. This platform, named VRModelis, implements all the innovative interaction methods we developed, all of which are designed to provide the level of precision and comfort required for modelling tasks.
- 2 Professors: Dominique Bechmann and Jean-Michel Dischler
- 3 Assistant professors: Antonio Capobianco, Caroline Essert, Jérôme Grosjean
- 1 Research engineer: Olivier Génevaux
- 1 Associate researcher: Murielle TORREGROSSA (MC IUT RS, -12/2012)
- 7 Post-doctoral researchers: Vincent BAUDET (CNRS, 11/2006-10/2007), Arnaud FABRE (ATER, 12/2006-08/2007), Guillaume GILET (ANR ATROCO, 10/2009-08/2010), Amel GUETAT (ATER, 09/2007-08/2009), Jérôme SIMONIN (ANR DNA, 07/2009-06/2010), Ludovic STERNBERGER (ATER, 12/2006-08/2007), Manuel VEIT (ATER, 10/2010-08/2011)
- 2 PhD candidates: Alexandre ANCEL (Ministry of Research grant, 10/2008-), Jonathan WONNER (Normalien, 10/2010-)
- 4 Former PhD candidates: Lucas AMMANN (Ministry of Research grant, 10/2006-09/2010), Guillaume GILET (IRCAD Contract, 01/2006-09/2009), Stéphane MARCHESIN (Ministry of Research grant, 10/2003-07/2007), Manuel VEIT (Ministry of Research grant, 10/2007-09/2010)
In visualisation, we are mainly interested in the direct volume rendering of 3D scalar data obtained from tomography, to facilitate the exploitation, exploration and comprehension of this kind of data (one intended application is the scientific visualisation of medical data obtained by MRI, for example).
We aim to increase rendering interactivity by exploiting the features of modern graphics processing units (clustered or not). Besides data obtained by tomography, we also consider volumetric textures, generally synthesized procedurally, as well as other kinds of data such as heightfield objects.
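The core operation behind the direct volume rendering mentioned above is the accumulation of colour and opacity along each viewing ray through the scalar field. The following is a minimal, illustrative sketch of front-to-back emission-absorption compositing, assuming scalar samples already resampled along one ray and a deliberately simple transfer function (both function names and the colour ramp are our own placeholders, not the group's actual implementation):

```python
import numpy as np

def transfer_function(s):
    """Illustrative transfer function: map a scalar sample in [0, 1]
    to an RGB colour and an opacity (alpha). Placeholder mapping."""
    colour = np.array([s, 0.2, 1.0 - s])   # simple blue-to-red ramp
    alpha = s ** 2                          # denser material is more opaque
    return colour, alpha

def composite_ray(samples):
    """Front-to-back alpha compositing of scalar samples along one ray,
    the basic operation of direct volume rendering."""
    accum_colour = np.zeros(3)
    accum_alpha = 0.0
    for s in samples:
        colour, alpha = transfer_function(s)
        # Each sample contributes in proportion to the remaining transparency.
        accum_colour += (1.0 - accum_alpha) * alpha * colour
        accum_alpha += (1.0 - accum_alpha) * alpha
        if accum_alpha > 0.99:  # early ray termination, a standard optimisation
            break
    return accum_colour, accum_alpha
```

On a GPU this loop runs in parallel for every pixel, which is why the rendering interactivity discussed above depends so directly on graphics hardware.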
Scientific applications (essentially scientific visualisation) were initiated by J.-M. Dischler in the INRIA project-team CALVI, which he joined at its creation in 2003 and left in 2007 to broaden the visualisation applications to the medical field; CALVI essentially concerned fluid and plasma data. Scientific visualisation activities took shape around the MASSIM ANR project that J.-M. Dischler was leading. This new theme also established a collaboration with the ICPS group of the LSIIT (co-supervision of 2 theses, one still in progress), and opened a new contract with the CEA (French Atomic Energy Commission, DAM / Bruyères-le-Châtel). Many contributions have been proposed to increase the visibility of internal structures within 3D data and the visual quality of direct volume rendering 4-MDM07, 4-EMDM08, 4-HZLD08, 2-GAMD10 and 2-MDM10, and to parallelize the computations 4-MMD06, 4-MMD08. In connection with modelling and acquisition, tools for designing procedural volumetric textures have been proposed 2-GD09, 2-GD10. New approaches for rendering transparent media 4-GLD06 and heightfields 4-AGD10 have also been explored. Activities in direct volume rendering are currently pursued in the thesis of Alexandre Ancel and concern the efficient handling of advanced lighting methods 4-ADM10.
Relying on the analysis and evaluation of theoretical models of 3D interaction, we carried out an experimental analysis based on the notion of degrees of freedom (DOF) 4-VCB09. To that purpose, we developed a new metric for measuring the degree of integration of the various DOF involved in a task. We have shown that integrating the manipulation of the DOF into a single command is efficient when the required level of precision is low; however, our results show that it is better to separate the control of the manipulation dimensions for tasks requiring a higher level of precision 4-VCB11. This work was performed during the thesis of M. Veit, supervised by D. Bechmann and A. Capobianco, and resulted in a major publication at the main conference of this domain, IEEE VR 2011 2-VCB11.
Based on these results, we proposed several new interaction techniques that provide increased precision for modelling tasks in an immersive environment, including DIOD (Dynamic Decomposition and Integration of Degrees of freedom) 4-VCB10 and CROS (Cursor On Surface) 8-Vei10.
We also carried out several studies on haptic feedback. The first concerned a haptic 3D menu dedicated to applications using this type of force-feedback peripheral. The two main goals were to let the user control the application with the same peripheral, without going back and forth to the mouse, and to take advantage of the force feedback to assist the gesture while controlling the application through 3D controls 4-CE10, 4-EC10, 4-EC09.
The second dealt with using haptic feedback for purposes that differ from its usual use: the vast majority of applications use haptic feedback for realistic simulation or gesture assistance. We wished to take advantage of force feedback to materialize properties in 3D space, either because they are not easy to display on a screen, or because we want to reduce visual overload, as too much information is often displayed on the screen simultaneously. We are therefore currently studying how to render numerical values using force feedback, by considering various kinds of possible haptic responses (vibrations, viscosity, elastic or magnetic feedback, etc.) and studying their perception thresholds.
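The idea of rendering a numerical value through force feedback can be sketched as a mapping from the value's range onto a perceivable haptic intensity range. The snippet below is a minimal illustration of this principle for a vibration response; the function name and all threshold values are hypothetical placeholders, not measurements from the group's perception studies:

```python
def value_to_vibration(value, v_min, v_max,
                       amp_threshold=0.1, amp_max=1.0):
    """Map a scalar value onto a vibration amplitude for haptic display.

    Illustrative sketch: the value is normalised to [0, 1], then rescaled
    into [amp_threshold, amp_max] so that even the smallest value stays
    above the user's perception threshold. The default threshold and
    maximum amplitude here are hypothetical, not experimental results.
    """
    t = (value - v_min) / (v_max - v_min)
    t = min(max(t, 0.0), 1.0)  # clamp out-of-range values
    return amp_threshold + t * (amp_max - amp_threshold)
```

In practice the perception-threshold studies mentioned above would replace the placeholder `amp_threshold` with a value measured per haptic response type (vibration, viscosity, elastic or magnetic feedback).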
During the ANR DNA project, we developed an immersive platform for modelling and editing terrains. Our terrain model is a large-scale voxel partition of space, which makes it possible to represent complex 3D structures such as overhangs, arches or caves. The immersive editing of such a 3D model raises new interaction and visualisation problems. Existing selection and manipulation methods do not allow selecting an arbitrary 3D position, whether represented in the environment or not, visible or occluded, near or far. We designed and developed a hybrid 3D selection method combined with semi-automatic viewpoint control, which also supports navigation. This method is organized around a 3D pointer, analogous to a mouse pointer, for immersive environments. This pointer, named the 3D arrow and presented as a poster at IEEE VR 2011 4-GSGM11, has been evaluated and validated by users in a task consisting in following 3D curves similar to the trajectories encountered during editing.
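The advantage of a voxel partition over a heightfield is that a vertical column of space may contain several disjoint filled intervals, which is exactly what overhangs, arches and caves require. The sketch below illustrates this distinction with a deliberately naive sparse voxel set; the class and its methods are our own illustration, not the platform's actual data structure:

```python
class VoxelTerrain:
    """Minimal sparse voxel terrain: a set of filled (x, y, z) cells.

    Unlike a heightfield, which stores a single height per (x, y) column,
    a voxel partition lets one column hold several disjoint filled
    intervals, so overhangs, arches and caves are representable.
    (Illustrative sketch only.)
    """
    def __init__(self):
        self.filled = set()

    def fill(self, x, y, z):
        self.filled.add((x, y, z))

    def carve(self, x, y, z):
        """Editing operation: remove matter, e.g. to dig a cave."""
        self.filled.discard((x, y, z))

    def column(self, x, y):
        """Sorted z coordinates filled in one vertical column."""
        return sorted(z for (cx, cy, z) in self.filled if (cx, cy) == (x, y))

# Build a simple overhang: a pillar with a ledge jutting out at the top.
terrain = VoxelTerrain()
for z in range(4):
    terrain.fill(0, 0, z)   # vertical pillar at column (0, 0)
terrain.fill(1, 0, 3)       # ledge: a filled cell with empty space below it
```

The ledge cell at `(1, 0, 3)` has no matter beneath it, a configuration that a single-height heightfield cannot encode.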
New approaches in visualisation and interaction
In the domain of visualisation, we would like to improve the interactivity and quality of rendering. Data obtained by tomography, as well as artificial data, require more and more memory, and data is also increasingly multimodal. Beyond the requirements for efficiently handling such large memory footprints, displaying information clearly remains a problem. We would like to propose visualisation methods that display these data by adapting classical direct volume rendering methods. We also focus on improving the quality of lighting computations. To validate lighting parameters, we plan to use psycho-visual studies with end users to identify the best configurations (diffuse or point lighting, position, intensity, shading, etc.). This goal follows the same pattern as our 3D interaction work, by centering validation on users.
Concerning 3D interaction, we would like to use our models of DoF integration to analyse interaction strategies when using tactile interfaces for 3D interaction, considering that the manipulation space is then limited to 2 DoFs. Our objective is to propose techniques that allow the manipulation of all the DoFs of the task in the most intuitive and efficient way. Indeed, some of the techniques we proposed (CROS, for example) would be well suited to this type of device. We have already performed an initial study demonstrating the relevance of this approach, and we would like to investigate this topic further in order to provide scientific evidence to the VR and interaction community.
In the framework of haptic peripherals, we plan to pursue our efforts on 3D haptic menus and on the materialization of properties. On the first topic, we wish to enhance our menu and compare it with classical 3D menus, such as pie menus. On the second topic, we are going to experiment with our approaches to the materialization of properties in an application for automatic surgical planning (see Specifications, constraints and proofs), in order to represent the space of possible solutions by means other than visual ones, as much other information is already displayed in the 3D view (numerous anatomical structures, possibly shown with different degrees of transparency and superimposed).