Team IGG : Computer Graphics and Geometry

Appearance and Movement Assessment 2016-2021

Revision as of 13:49, 7 April 2022 by Allegre

Objectives / Challenges

Our first challenge is to develop tools to automatically produce textures "by example", i.e. from texture samples extracted from images, and to exploit them in a real-time editing and rendering context. For this purpose, we have to devise novel image analysis tools, procedural models, and specific filtering techniques. The second challenge deals with the development of methods for the analysis of static and moving shapes, with the goal of performing regression based on semantic parameters, animation compression, shape reconstruction from sketches, or the adaptation of radiotherapy protocols.

Participants

  • 1 CNRS researcher: Hyewon Seo (–March 2019)
  • 2 full professors: Jean-Michel Dischler, Franck Hétroy-Wheeler
  • 4 associate professors: Rémi Allègre, Arash Habibi, Basile Sauvage, Jonathan Sarton
  • 1 associate assistant professor: Frédéric Cordier, UHA (–Jan. 2020)
  • 2 research engineers: Sylvain Thery, Frédéric Larue (–Oct. 2020) then Joris Ravaglia (Oct. 2020–)
  • Postdoctoral students: Xavier Chermain (Dec. 2019–)
  • PhD students: Geoffrey Guingo (Oct. 2015–Dec. 2019), Pascal Guehl (Oct. 2016–), Cédric Bobenrieth (Oct. 2016–Dec. 2019), Nicolas Lutz (Oct. 2017–), Florian Allender (Oct. 2018–), Charline Grenier (Oct. 2020–), Guillaume Baldi (Oct. 2020–)

Results

Textures, rendering and visualization

In the texture field, our goal was to enable the visualisation of 3D models with their appearance in arbitrary lighting environments, which requires more sophisticated analysis tools to extract intrinsic material properties and classify them. Beyond improving appearance quality, the analysis of the input data should allow us to produce more controllable texture content (such as textures whose spatial variations depend on the position in the texture and on the surface of a 3D model). Analysis based on the acquisition of 3D models has been abandoned in favour of texture image analysis, owing to the emergence of texture and material banks accessible online, which offer a great variety of content. The focus has shifted to on-the-fly and procedural texture and material synthesis, with specific attention to perception [2-VSKL17], spatial variations [2-GSDC17]-[4-GSDC17] (G. Guingo's PhD, started on 1/10/2015) [2-LSD21] (N. Lutz's PhD, started on 1/10/2018), control [2-GADB20]-[4-GADB20] (P. Guehl's PhD, started on 1/10/2016), and dynamic effects [2-GLSL20] (G. Guingo's PhD). The emergence of deep learning has also led us to consider this approach to improve analysis and synthesis methods, with applications notably in histopathological imaging (PhD of F. Allender, started on 1/10/2018 [7-AAWD20], and PhD of G. Baldi, started on 1/10/2020). To sustain our work, make it reproducible, and contribute to open science, we have developed ASTex, an open-source texture analysis and synthesis library, and made a database of procedural structures available to the community [2-GADB20]-[4-GADB20].

[2-GSDC17]-[4-GSDC17] Texture deformation provides variety as long as the small-scale appearance is preserved. Rotation and stretching are applied using the usual one-layer model (top) and our two-layer model: our model allows structured textures to be deformed while preserving small-scale details (noise patterns).
[2-GADB20]-[4-GADB20] Semi-procedural texture synthesis. Top: binary structure maps generated using our procedural model; bottom: result of interpolation between two structures and result of a texture synthesis guided by this structure.
[2-GLSL20] Left: a curved textured cylinder with snake scales, deformed according to the tensor calculated by our method. The intensity of the deformation is shown below (blue: no deformation; green: stretching; red: compression). Right: texture visualised in parametric space. Top: with our method, the scales are deformed according to their stiffness. Bottom: no control is applied, giving a less realistic result.

Our skills in procedural texture synthesis and texture filtering for rendering [4-LSLD19,2-LSD21] (PhD of N. Lutz) opened the opportunity to collaborate with the Computer Graphics team at the Karlsruher Institut für Technologie, and thus to extend our field of expertise to photo-realistic rendering of materials: we hired X. Chermain as a postdoctoral fellow (2019, project awarded by Idex Unistra 2019) and Ch. Grenier as a PhD student (2020, ANR PRCI ReProcTex project). Initial work on a physics-based, procedural BRDF for real-time glint rendering has been published [2-CSDD20]-[4-CSDD20, 4-CLSD21].

[2-CSDD20]-[4-CSDD20] A 2CV car is rendered with our BRDF using coloured glitters to simulate the paintwork of the chassis.

Moreover, the arrival of J. Sarton on 1/09/2019 allowed us to relaunch our activities in the field of visualization, and more specifically high-performance interactive visualization of volume data [7-Sart19].

Statistical analysis of deformations of human clothing in motion

Thanks to modern technologies, especially multi-camera systems, it is possible to represent a moving person virtually as a sequence of meshes without temporal coherence. The use of a statistical model then allows the geometry and the movement of the human body to be recovered, even if the person being studied is dressed in loose clothing. We were interested in the analysis of the movement of these clothes and proposed a two-component model: one component reduces the dimension of the observed information space, and the other regresses from any semantic parameter to the parameterization space of the clothes. This work was presented at the highly selective ECCV conference in 2018 [4-YFHW-18].
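The two components of such a model can be illustrated with a minimal NumPy sketch: a PCA step that reduces the dimension of observed cloth deformations, followed by a linear regression from semantic parameters to the PCA coefficients. This is a hypothetical toy illustration of the general idea, not the published method; all names and the synthetic data are ours.

```python
import numpy as np

def fit_cloth_model(deformations, semantic_params, n_components):
    """Two-component sketch: (1) PCA to reduce the observed deformation
    space, (2) linear regression from semantic parameters to PCA coefficients."""
    mean = deformations.mean(axis=0)
    X = deformations - mean
    # PCA via SVD: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components]                     # (k, d) basis vectors
    coeffs = X @ basis.T                          # (n, k) low-dimensional codes
    # Regression: semantic parameters (plus bias) -> PCA coefficients
    P = np.hstack([semantic_params, np.ones((len(semantic_params), 1))])
    W, *_ = np.linalg.lstsq(P, coeffs, rcond=None)
    return mean, basis, W

def predict_deformation(params, mean, basis, W):
    """Map semantic parameters to a full-dimensional deformation."""
    p = np.append(params, 1.0)                    # append bias term
    return mean + (p @ W) @ basis

# Toy data: 50 frames of a 300-dim cloth deformation driven by 2 parameters
rng = np.random.default_rng(0)
params = rng.uniform(-1, 1, (50, 2))
true_basis = rng.normal(size=(2, 300))
defs = params @ true_basis + 0.01 * rng.normal(size=(50, 300))

mean, basis, W = fit_cloth_model(defs, params, n_components=2)
recon = predict_deformation(params[0], mean, basis, W)
```

At prediction time, only the semantic parameters are needed: the regression produces PCA coefficients, which the basis expands back into a full deformation.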

3D mesh animation, compression

After having worked on fundamental aspects of shape analysis during the ANR SHARED project (2010-2015), we have investigated an extension of those works towards the compression of deforming meshes, in collaboration with Guoliang Luo, our former PhD student and now an associate professor at East China Jiaotong University, and other international colleagues (China and USA) [4-LDJZ19]. Our strategy is to exploit the spatio-temporal coherency within a mesh segment to obtain a compact representation of an animated mesh. Given a 3D mesh on which a motion-driven spatio-temporal segmentation has been computed, we perform a PCA-based compression on each spatio-temporal segment. Since our algorithm is designed to exploit both temporal and spatial redundancies by adaptively determining segmentation boundaries, it shows significantly better performance than other comparable compression methods, as expected.

[4-LDJZ19]: Reconstruction errors of compression using our method, ‘Adapted Soft’, ‘Adapted PCA’, ‘Original Soft’ [Karni and Gotsman 2004] and ‘Original Simple’ [Sattler et al. 2005]. The colorbar indicates reconstruction errors from low (blue) to high (red).
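The PCA compression applied to each spatio-temporal segment can be sketched as follows. This is a minimal NumPy illustration of the generic idea (store a mean, a few basis vectors, and per-frame coefficients instead of the raw vertex trajectories); function names and the toy data are ours, not from the paper.

```python
import numpy as np

def compress_segment(traj, n_basis):
    """Compress one spatio-temporal segment.
    traj is (frames, 3*verts): each row stacks the vertex positions of a frame.
    Keep only n_basis principal components of the centered trajectories."""
    mean = traj.mean(axis=0)
    U, S, Vt = np.linalg.svd(traj - mean, full_matrices=False)
    basis = Vt[:n_basis]                  # (n_basis, 3*verts)
    coeffs = (traj - mean) @ basis.T      # (frames, n_basis)
    return mean, basis, coeffs            # stored instead of traj

def decompress_segment(mean, basis, coeffs):
    """Reconstruct the per-frame vertex positions of the segment."""
    return mean + coeffs @ basis

# Toy segment: 40 frames of 20 vertices whose motion lies in a rank-2 space
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 40)[:, None]
traj = np.sin(2 * np.pi * t) * rng.normal(size=(1, 60)) \
     + t * rng.normal(size=(1, 60))

mean, basis, coeffs = compress_segment(traj, n_basis=2)
err = np.abs(decompress_segment(mean, basis, coeffs) - traj) .max()
```

The compression ratio is driven by `n_basis`: a segment with coherent motion needs very few components, which is exactly the redundancy the motion-driven segmentation is designed to expose.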

Validation of computational features

A related work is the validation of computational mesh saliency using an eye-tracker, the result of a collaboration with Guillaume Lavoué (LIRIS), Frédéric Cordier (LMIA), and Chaker Larabi (XLIM) [2-LCSL18]. To validate our dynamic feature extraction work, we set out to build ground truth by analyzing human visual saliency with an eye-tracker. 2D eye fixations (i.e., where humans gaze on the screen) have been mapped onto the 3D shapes; these have served as ground-truth fixations that we used to evaluate the performance of four representative state-of-the-art mesh saliency estimation models. We have shown that, even combined with a center-bias model, the performance of 3D saliency algorithms remains poor at predicting human fixations.

[2-LCSL18]: The human saliency from eye-tracker experiments is mapped onto the 3D mesh (2nd color map from the left) and compared with state-of-the-art saliency estimator models. The Pearson correlation (ρ) between saliency maps from humans and algorithms is visualized as color maps.
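The comparison metric itself is simple: a Pearson correlation between two per-vertex saliency maps, one from human fixations and one from an algorithm. A minimal sketch (the helper and the synthetic maps are hypothetical, for illustration only):

```python
import numpy as np

def pearson_rho(human, algo):
    """Pearson correlation between two per-vertex saliency maps."""
    h = human - human.mean()
    a = algo - algo.mean()
    return float((h @ a) / np.sqrt((h @ h) * (a @ a)))

# Toy maps over 1000 vertices: one predictor partially matches the human
# fixation density, the other is unrelated noise.
rng = np.random.default_rng(2)
human = rng.random(1000)
good = 0.8 * human + 0.2 * rng.random(1000)   # correlated predictor
bad = rng.random(1000)                        # unrelated predictor
```

A higher ρ against the ground-truth map indicates a better saliency estimator; in the study, even the best algorithms scored poorly by this measure.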

Reconstruction of flowers from sketches

Together with Cédric Bobenrieth (PhD student, defended in December 2019) and other co-supervisors, we have focused on the sketch-based reconstruction of floral objects [2-BSCH18], one of the most popular subjects in artistic drawing. Although one can make strong structural assumptions about floral objects, they exhibit a large variety of shapes and geometric complexity, making the problem challenging. Unlike existing flower modelers, which require users to provide step-by-step sketch input from several views, we developed a new modeler in which the user can rapidly create quality geometric models of flowers by drawing a single-view sketch from an arbitrary viewpoint. To our knowledge, we are the first to introduce such a system. More recently, we have developed a more generic sketch-based modeler allowing the user to generate 3D shapes of all sorts of objects from sketches without any structural assumption, at the cost of manual, yet minimal, intervention [2-BCHS20].

[2-BSCH18]: Results obtained with our modeler.

Volume changes and deformations during breast radiotherapy

In the framework of the M2 internship of Yvan Pin, radiation oncologist, we have developed software and a measurement protocol for the computer-assisted detection of morphological changes of breasts during post-operative irradiation therapy [4-PSDN18]. The specific goal of this study is to propose a new, personalisable irradiation protocol based on computer-assisted detection of breast volume and shape changes throughout the irradiation. Since the proposed protocol was authorized by the ethics committee of Centre Paul Strauss in 2017, we have been collecting surface datasets of approximately 55 subjects undergoing post-operative irradiation therapy, who have either completed their irradiation sessions or are still in treatment. With a view to continuing the study in close collaboration with radiotherapists at Centre Paul Strauss, Hyewon Seo moved to the hospital site (Clovis Vincent) in 2017; this became the starting point for a new research team oriented towards data and models in medical science, launched in 2021.