Geometric Modeling, Simulation and Interaction — Assessment 2016-2021
Objectives and challenges
The work of the "Geometric Modeling, Simulation and Interaction" theme tackles two major issues. The first concerns the development of tools for the geometric and topological modelling and analysis of 3D objects that are both generic and robust enough to be adapted to concrete cases. The second relates to human-machine interaction in a virtual environment, taking into account both human perception of this environment and human-to-machine communication.
- Three full professors: Dominique Bechmann, David Cazier and Franck Hétroy-Wheeler
- One senior researcher: Birgitta Dresp-Langley (2019-2020)
- Gutenberg Chair 2019, Daniel Oberfeld-Twistel, Associate Professor at Johannes Gutenberg-Universität Mainz (Institute of Psychology)
- Three associate professors: Antonio Capobianco, Jérôme Grosjean and Pierre Kraemer
- Three research engineers: Thierry Blandet, Sylvain Thery and Joris Ravaglia (since 10/2020)
- One engineer from the GEOSIRIS company, Lionel Untereiner, since 01/09/2018 (research engineer associated with the team since 08/06/2020)
- Postdoctoral researchers: Sabah Boustila (from 15/01/2020 to 31/12/2020), Flavien Lecuyer (teaching associate IUT Haguenau 2020-2021), Joris Ravaglia (Unistra Idex 2018 - Attractivity programme from 10/2019 to 09/2020)
- PhD students: Paul Viville (Unistra doctoral contract from 10/2019 to 09/2022), Quentin Wendling (Unistra doctoral contract from 10/2019 to 09/2022), Julien Casarin (CIFRE grant with Gfi-Labs from 2016 to 2019; defended on 30 September 2019), Alexandre Hurstel (contract through the 3D-Surg project from 2015 to 2019; defended on 30 September 2019), Sabah Boustila (contract through the CIMBEES project from 2012 to 2015; defended on 25 May 2016), Mélinda Boukhana (CIFRE grant with Arvalis from 03/2018 to 12/2021; defended on 15 December 2021), Katia Mirande (contract through the ROMI project from 11/2018 to 04/2022).
Hexahedral mesh generation
The construction of a volumetric mesh for a given geometric domain is a complex problem that has been studied for many years, and the generation of purely hexahedral meshes for domains of arbitrary shape is still open. We have developed a processing chain for the generation of hexahedral meshes for domains whose shape can be represented by their skeleton. By exploiting this representation, the generated mesh is well aligned with the geometry of the domain and its connectivity is as regular as possible. The main difficulty lies in handling the connectivity of the mesh at skeleton intersections, which can have any number of incident branches. We have proposed a new solution based on the construction of connection surfaces that encode the connectivity of the final volume mesh around each vertex of the skeleton. Each vertex is processed independently by choosing an appropriate method according to its local configuration. Since these methods are mutually compatible, no global constraint system needs to be solved. The resulting processing chain handles complex shapes, even in the presence of cycles, and many of its steps can process cells in parallel. The quality of the meshes obtained is at least comparable to the state of the art, and often better: the algorithm is robust to the presence of cycles and generates more symmetric and regular results in many cases. [7-VKB20] [4-VKB21]
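The per-vertex dispatch described above can be sketched as follows. This is an illustrative outline only, not the published algorithm: the skeleton is a graph, each vertex is classified by its number of incident branches, and a local construction method is chosen independently for each vertex (the configuration names are invented for illustration).

```python
# Illustrative sketch (not the published algorithm): classify skeleton
# vertices by their number of incident branches and dispatch each one,
# independently, to a local connection-surface construction method.
from collections import defaultdict

def build_skeleton(edges):
    """Adjacency sets of an undirected skeleton graph."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def classify_vertex(degree):
    # Hypothetical local configurations; the real method chooses among
    # several mutually compatible connection-surface constructions.
    if degree == 1:
        return "extremity"      # end of a branch
    if degree == 2:
        return "joint"          # regular point along a branch
    return "intersection"       # any number of incident branches

def process_skeleton(edges):
    adj = build_skeleton(edges)
    # Each vertex is handled independently: no global constraint system.
    return {v: classify_vertex(len(nbrs)) for v, nbrs in adj.items()}

# A skeleton with a cycle (0-1-2-0) and two branches hanging off it.
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (2, 4)]
print(process_skeleton(edges))
```

Because each classification depends only on the local neighbourhood, the loop over vertices is trivially parallelizable, which mirrors the report's remark that many steps of the chain can process cells in parallel.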
Analysis of plant 3D point clouds
In fundamental as well as applied biology, the accurate measurement of geometric features of plants and trees (height, insertion angles of branches, lengths between nodes, areas of leaves, etc.) is necessary to understand the interaction of a plant with its environment (phenotyping) or the mechanisms underlying its growth, as well as to validate structure-function models (FSPM). In recent years, the development of passive (photogrammetry) or active (laser scanners) acquisition systems has made it possible to limit manual measurements, which are costly and require cutting and therefore destroying the plants. These systems generate a virtual model of the object under study in the form of a three-dimensional point cloud. Such a point cloud must then undergo several geometric processing steps: denoising, decomposition into botanical organs (stem or trunk, branches, petioles, leaves), reconstruction of a surface model for each organ, temporal tracking, etc.
Our contribution to such a processing chain is threefold. First, we propose a method for the robust segmentation of a plant point cloud into botanical organs, based on both a spectral analysis of the point cloud and the incorporation of botanical knowledge (PhD thesis of Katia Mirande, 2018-2022, [7-MHG20]). The spectral analysis makes it possible to locate the boundaries between organs very precisely, in particular between the petiole and the blade of a leaf, while the use of prior knowledge about the structure of the plant in a probabilistic algorithm globally improves the robustness of the segmentation. Our second contribution concerns the reconstruction of the surface of a leaf blade, with the aim of precisely estimating its area (PhD thesis of Mélinda Boukhana, 2018-2021, [7-BRHL20]). We conducted a comparative study of seven classical surface reconstruction methods, according to nine criteria defined for the occasion and using an original synthetic model of a 3D point cloud of a leaf blade. We show that none of these methods robustly estimates the area of a leaf blade, and we propose a more efficient alternative approach based on a Bézier surface. Finally, we propose a multi-scale algorithm for the point-to-point temporal tracking of 3D point clouds of growing plants (Haolin Pan's Master thesis, [4-PHCC21]). This algorithm is based on the computation of a topological skeleton of the plant. This work was carried out in collaboration with plant specialists, respectively C. Godin (Inria Lyon), B. de Solan (Arvalis) and D. Colliaux (Sony Computer Science Laboratory).
Perception of spatial arrangement in virtual environments
Virtual reality has shown many advantages in several areas, in particular the architectural field. However, previous work has shown that distances are poorly perceived in both indoor and outdoor virtual spaces. Several visual cues, such as surface colors, can affect and improve our perception of distances. The Gutenberg project led by Daniel Oberfeld-Twistel aims to examine how the acoustic properties of interior spaces influence the perception of size and spaciousness. Previous research has shown that acoustic properties have an effect on the perceived spatial arrangement of an interior space and that, to some extent, the dimensions of a room can be estimated by listening to a sound source inside it. However, our understanding of the interaction between spatial layout and room acoustic properties remains limited. The objective of the project is to identify the principles underlying the perception of interior spaces from auditory or audiovisual information and thus answer several questions. Can humans estimate the height, width, depth, volume and shape of a space when only auditory information is available? Is it possible to make a small room seem more spacious by manipulating its acoustic characteristics?
In the experiments, we present virtual rooms that are empty, windowless and contain a door to provide a familiar visual cue. The shape of the rooms is rectangular. We vary the room size (width, depth, and height). Each size is presented in two acoustic configurations: either with a high amount of reverberation (low sound absorption of the room surfaces) or with a low amount of reverberation (high sound absorption of the room surfaces). The participants view the rooms on a virtual reality headset and listen to sound sources inside the space (male and female talkers, musical instruments). The interactive acoustic virtual-reality simulations are created with state-of-the-art room-acoustical simulations (https://www.akustik.rwth-aachen.de/go/id/dwoc/lidx/1/file/183613) and presented via real-time binaural rendering (http://virtualacoustics.org/) with head-tracking, providing a physically plausible spatial sound field. The task is to estimate the perceived width, depth, height, and volume of the rooms in units of the metric system (meters or centimeters).
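The link the experiments manipulate, between room dimensions, surface absorption and reverberation, can be illustrated with Sabine's classical formula RT60 = 0.161 · V / A, where V is the room volume in m³ and A the total absorption area in m² (each surface weighted by its absorption coefficient). This formula is standard room acoustics, not taken from the report, and the room size and coefficients below are illustrative only.

```python
# Sabine's formula relates reverberation time to room volume and
# surface absorption: RT60 = 0.161 * V / A. Illustrative values only;
# the project's simulations use far more detailed acoustic models.

def sabine_rt60(width, depth, height, alpha):
    """RT60 (seconds) for a box-shaped room with uniform absorption."""
    volume = width * depth * height                       # m^3
    surface = 2 * (width * depth + width * height + depth * height)
    absorption = surface * alpha                          # m^2 (sabins)
    return 0.161 * volume / absorption

room = (4.0, 5.0, 2.5)  # width, depth, height in metres
print(sabine_rt60(*room, alpha=0.05))  # low absorption -> long reverberation
print(sabine_rt60(*room, alpha=0.50))  # high absorption -> short reverberation
```

The two configurations mirror the experimental conditions: lowering the absorption coefficient of the surfaces lengthens the reverberation, which is the cue the size-judgment results in the next paragraph turn out to depend on.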
As part of Sabah Boustila's postdoc, a first experiment examined the auditory perception of room dimensions. In the visual simulations, the rooms were completely dark so that the room surfaces could not be seen. Thus, the participants needed to estimate the room size by listening to the sound sources inside the spaces. The results showed that the size judgments were largely driven by the reverberation time: interior spaces with a higher reverberation time were estimated to be larger than spaces with a lower reverberation time. However, the participants were not able to accurately perceive the variations in width, depth, and height. These results are compatible with the few previous studies that investigated the auditory perception of interior spaces, but also significantly extend previous work. In the following experiments, we will present the rooms both auditorily and visually, to investigate whether the auditory characteristics of a room influence its perceived size even when full visual information is available.
Intuitive and contactless interaction
During image-guided surgery, the consultation of medical images, in 2D or 3D, plays a crucial control role for the surgeon but induces a loss of sterility due to contact with the keyboard and the mouse. To avoid this problem, in the framework of the BPI 3D-Surg project, we have proposed a conceptual approach [2-HB19] to design an intuitive and efficient gestural vocabulary, well adapted to the specificities of the surgical context, allowing the use of contactless technologies. The key idea is to observe the gestures that users instinctively produce when faced with the task of producing a given interaction. In doing so, each subject provided us with their own gestural vocabulary through an elicitation approach (Wizard of Oz). The gestures collected were then analyzed to identify those most frequently used across subjects and to deduce a general yet intuitive gestural vocabulary, specific to the application tested, that best satisfies the contextual, conceptual and technical criteria we had previously identified. The second step of our approach consists of having this generic vocabulary evaluated by the target users (surgeons), using the experts' feedback to refine it in order to optimize ease of use, memorization, consistency, performance and postural comfort [8-Hurs19].
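The analysis step of the elicitation can be sketched as a simple tally: for each interaction task, the gesture proposed most often across subjects is kept for the generic vocabulary. The task and gesture names below are invented for illustration and do not come from the study.

```python
# Hypothetical sketch of the elicitation analysis: tally the gestures
# proposed by subjects (Wizard-of-Oz) per task and keep the most
# frequent one. Task and gesture names are invented for illustration.
from collections import Counter

elicited = {
    "rotate_3d_view": ["grab_twist", "grab_twist", "circle_finger",
                       "grab_twist", "two_hand_turn"],
    "zoom_image":     ["pinch", "pinch", "push_pull", "pinch", "pinch"],
    "next_slice":     ["swipe", "point_right", "swipe", "swipe", "swipe"],
}

def generic_vocabulary(proposals):
    vocab = {}
    for task, gestures in proposals.items():
        gesture, count = Counter(gestures).most_common(1)[0]
        vocab[task] = (gesture, count / len(gestures))  # gesture, agreement
    return vocab

for task, (gesture, share) in generic_vocabulary(elicited).items():
    print(f"{task}: {gesture} (proposed by {share:.0%} of subjects)")
```

In practice the selection also weighs the contextual, conceptual and technical criteria mentioned above, so a frequent but fatiguing gesture could still be rejected at the expert-evaluation step.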
Collaborative working in mixed reality
Recent advances in virtual reality technology have given rise to a diverse set of devices that allow people to visualize virtual environments and interact by manipulating 3D content. Our objective is to enable the creation of virtual experiences accessible from any device, in order to support collaborative work involving actors with different professions and roles, and therefore different needs in terms of visualization and, above all, interaction with 3D content. This is why a single device is not sufficient for collaborative work. For this purpose, we have designed the UMI3D protocol [4-CLTB19], an interface between contents and devices that allows any device to interact with remote 3D content hosted on a server. The UMI3D exchange protocol is mainly based on the establishment of interaction contracts between the 3D content and the devices. The server communicates with each device through a bidirectional web connection established between the 3D content and the UMI3D browser dedicated to that device. The server then describes to the browser a set of interaction contracts that define the interactions possible at that moment for the user. These contracts are defined by transmitting, in a standardized JSON format, a combination of objects that we call interaction bricks. The notion of interaction brick is based on our study of degrees of freedom published at IEEE VR'2011 [4-VCB11]. These bricks constitute a reduced number of classes describing the possible interactions in the virtual environment. When the browser receives an interaction contract, it performs a so-called projection operation to generate or update the device-specific user interface, thus making it possible for the user to fulfill the contract.
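The contract-and-projection mechanism can be sketched as follows. The field names and brick types below are invented for illustration and do NOT reproduce the actual UMI3D schema; the sketch only shows the general idea of a server describing, in JSON, a contract made of interaction bricks that a browser then projects onto its device-specific interface.

```python
# Illustrative sketch only: the field and brick names are invented and
# do NOT reproduce the actual UMI3D schema. A server-side contract is
# serialized to JSON, sent to a browser, and "projected" onto the UI
# primitives available on that device.
import json

contract = {
    "contractId": "c-42",
    "target": "anatomy_model",        # 3D object hosted on the server
    "bricks": [
        {"type": "manipulation", "degreesOfFreedom": ["tx", "ty", "rz"]},
        {"type": "event", "name": "select"},
        {"type": "parameter", "name": "opacity",
         "range": [0.0, 1.0], "value": 1.0},
    ],
}

message = json.dumps(contract)        # sent over the web connection

def project(contract):
    """Browser-side 'projection': map each brick to a UI element."""
    ui = {"manipulation": "3D handle or mouse drag",
          "event": "button or gesture",
          "parameter": "slider"}
    return [(b["type"], ui[b["type"]]) for b in contract["bricks"]]

print(project(json.loads(message)))
```

The point of the design is that the same contract reaches every device unchanged: a desktop browser may project the manipulation brick onto a mouse drag while a VR headset projects it onto a 3D handle, without the server knowing which device is connected.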
In the framework of Julien Casarin's CIFRE thesis [8-Casa19], we have also implemented the UMI3D model with the Unity3D game engine [4-CPB18 (honorable mention at CSCW2018), 6-CGPS18] and the result is a Unity toolkit that can be used to design collaborative interactions.