Team IGG : Computer Graphics and Geometry

Appearance and Movement


Review of 2011 to mid-2016 and outlook for the Appearance and Movement theme

Context

The creation of 3D virtual worlds finds applications in various fields, both technical (3D simulation, virtual prototyping, 3D catalogs and archiving, etc.) and artistic (e.g. creation of 3D content for entertainment or educational purposes). The target size and level of detail of virtual worlds have increased tremendously over the past few years, making it possible to produce highly realistic synthetic images. Enhanced visual realism is obtained by defining detailed 3D models (shape and movement) in conjunction with sophisticated appearances. Advances in technologies for shape and movement acquisition (3D scanners) have led to considerable improvements in both the quality and the quantity of produced 3D models. However, major challenges remain in this field, related to the huge size of the models and to the transfer of acquired movements from one model to another.

The appearance of 3D models is commonly defined by a set of "textures": 2D images mapped onto the surfaces of 3D models, much like wallpaper. Textures enrich 3D surfaces with small-scale details: wood grain, bricks on a façade, or grass blades. With the increasing resolution of display devices, creating very high-definition textures, i.e. with resolutions beyond hundreds of megapixels, has become a necessity. However, creating textures of this size with standard per-pixel editing tools is a tedious task for computer graphics artists, which increases production costs. The use of acquisition techniques is also limited because the conditions required to acquire real materials are very restrictive: controlled lighting, object accessibility, etc.

Objectives / Challenges

Our first challenge is to devise novel methods for the analysis and segmentation of 3D acquisition data, including 3D scans and motion capture data. Our goal is to offer tools that simplify the creation of 3D models, possibly with movement, as well as the construction of statistical atlases. Our second challenge is to develop methods to produce textures "by example", which means that the input data are texture samples extracted from images. We especially want to represent the materials of real objects with textures generated from a set of photographs taken with very few constraints. Our goal is to map these textures onto the surfaces of arbitrary 3D models and under arbitrary lighting conditions for realistic image synthesis applications.

Permanent participants

  • One Professor: Jean-Michel Dischler
  • One Research scientist ("Chargée de recherche"): Hyewon Seo
  • Five Associate Professors ("Maîtres de Conférences"): Rémi Allègre, Karim Chibout, Frédéric Cordier, Arash Habibi, Basile Sauvage
  • Three Research engineers: Frédéric Larue (2011-), Olivier Génevaux (2011-2015), Sylvain Thery (2015-)
  • Five Doctoral candidates (PhD students): Geoffrey Guingo (fixed-term contract starting Oct. 2015, ERC grant of Marie-Paule Cani and Allegorithmic contract), Guoliang Luo (Project SHARED contract from Oct. 2011 to Dec. 2014; PhD thesis defended on Nov. 4th, 2014), Vasyl Mykhalchuk (Project SHARED contract from Nov. 2011 to Apr. 2015; PhD thesis defended on April 9th, 2015), Alexandre Ribard (UNISTRA doctoral contract starting Nov. 2015), Kenneth Vanhoey (UNISTRA doctoral contract from Oct. 2010 to Sept. 2013; PhD thesis defended on Feb. 18th, 2014).

Results

Digitization platform

We develop the ExRealis digitization platform. ExRealis offers a range of devices and software tools covering the whole digitization pipeline, from data acquisition to the creation of textured 3D models, and also provides tools for texture reconstruction and visualization that account for complex lighting environments and material characteristics. ExRealis further includes a motion capture system for the animation of avatars in virtual reality or biomechanical applications. The platform covers our needs in terms of acquisition and data processing and makes it possible to capitalize on developments resulting from our scientific production. Frédéric Larue presented a tutorial on appearance acquisition, including a presentation of the ExRealis platform, at the AC3D days of GDR ISIS in May 2015.

Shape analysis, registration, and segmentation of movement data

With recent advances in imaging technologies, capturing the shape and motion of people or organs has become increasingly accessible, either with optical motion capture systems or with medical imaging scanners. Going a step beyond existing methods that rely on static shape information for shape analysis [2-MCS13], we have developed novel shape analysis methods that exploit motion information [2-SKCC13]. Our techniques are the first to formulate the problems of segmentation [4-LSC14, 4-LLS15], feature extraction [4-MSC14, 2-MSC15], similarity computation [4-LCS14, 8-Luog14, 2-LCS16], and correspondence computation [8-Mykh15] on time-varying surface data.

Appearance reconstruction

2D parametric color functions are widely used in image-based rendering and image relighting. They make it possible to express the color of a point depending on a continuous directional parameter: the viewing or the incident light direction. Producing such functions from acquired data (photographs) is promising but difficult. Acquiring data in a dense and uniform fashion is not always possible, while a simpler acquisition process results in sparse, scattered and noisy data. We have developed a method for reconstructing radiance functions for digitized objects from photographs [2-VSGL13]. This method makes it possible to visualize digitized objects in their original lighting environments. We have also developed a method for simplifying meshes with attached radiance functions [2-VSKL15].
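As an illustration of the principle (not our actual reconstruction pipeline), a directional color function can be approximated from scattered, noisy photographic samples by a linear least-squares fit over a low-order directional basis. The basis choice, sample counts, and function names below are illustrative assumptions:

```python
import numpy as np

def fit_radiance_function(directions, colors):
    """Fit a low-order directional color function c(d) by linear least
    squares, where d = (x, y, z) is a unit viewing (or light) direction.

    directions : (n, 3) array of unit directions from the photographs
    colors     : (n, 3) array of RGB samples at those directions
    Returns a (9, 3) coefficient matrix, one column per color channel.
    """
    x, y, z = directions.T
    # Quadratic directional basis (9 terms, akin to order-2 spherical
    # harmonics); tolerant of sparse, non-uniform sampling.
    basis = np.stack([np.ones_like(x), x, y, z,
                      x * y, x * z, y * z,
                      x * x - y * y, 3 * z * z - 1], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, colors, rcond=None)
    return coeffs

def eval_radiance_function(coeffs, d):
    """Evaluate the fitted color function for a unit direction d."""
    x, y, z = d
    basis = np.array([1.0, x, y, z, x * y, x * z, y * z,
                      x * x - y * y, 3 * z * z - 1])
    return basis @ coeffs
```

Because the fit is linear, it remains well-posed even when the directional samples are scattered, as long as they are not degenerate (e.g. all coplanar).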

Texture modeling and synthesis

Textures are crucial to enhance the visual realism of virtual worlds. In order to alleviate the burden of work for artists who have to design gigantic virtual worlds, we have developed methods to automatically generate high-resolution textures from a single input exemplar, with control over pattern preservation as well as over content randomness and variety in the resulting texture [2-GDG12, 2-GDG12a, 2-VSLD13, 2-GSVD14]. Two of these papers [2-VSLD13, 2-GSVD14] were published in ACM Transactions on Graphics (IF: 3.725), the most prestigious journal in computer graphics. Thanks to these publications, Jean-Michel Dischler was invited to the Dagstuhl Seminar on Real-World Visual Computing (Leibniz-Zentrum für Informatik) in Oct. 2013.
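The by-example principle can be sketched in a few lines: the simplest non-parametric scheme grows a large output by copying randomly chosen patches of the exemplar. This toy function is illustrative only, not our published algorithms, which additionally handle overlap blending, seams, pattern preservation, and randomness control:

```python
import numpy as np

def synthesize_by_tiling(exemplar, out_h, out_w, patch=16, seed=0):
    """Grow an (out_h, out_w) texture from a small exemplar by tiling
    randomly chosen square patches of the exemplar (toy by-example
    synthesis; no seam handling)."""
    rng = np.random.default_rng(seed)
    h, w = exemplar.shape[:2]
    out = np.zeros((out_h, out_w) + exemplar.shape[2:], dtype=exemplar.dtype)
    for i in range(0, out_h, patch):
        for j in range(0, out_w, patch):
            # Pick a random source patch inside the exemplar.
            y = rng.integers(0, h - patch + 1)
            x = rng.integers(0, w - patch + 1)
            # Clip the patch at the output borders.
            ph = min(patch, out_h - i)
            pw = min(patch, out_w - j)
            out[i:i + ph, j:j + pw] = exemplar[y:y + ph, x:x + pw]
    return out
```

Such naive tiling produces visible block seams; the interest of the published methods lies precisely in removing those seams while keeping the exemplar's patterns and adding variety.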

Perspectives

Digitization platform

We want to continue our efforts to develop the ExRealis digitization platform, as hardware and software support for our activities, and also to offer it as a service.

Ground truth and prediction models for the feature extraction

Building ground truth for spatio-temporal features and correspondences on deforming meshes remains an open challenge. We are currently building ground truth on static meshes based on human vision (in collaboration with Xlim in Limoges and LIRIS in Lyon), using an eye tracker. Once this ground truth is built, we will be able to aim at higher goals, such as validating computational approaches for saliency extraction, predicting eye movements, and investigating efficient methods for building ground truth for spatio-temporal features.

Atlas construction and prediction models for the movement data

We are concerned with the acquisition, analysis, representation, and statistical modeling of 4-dimensional human data. In particular, we will concentrate on developing a statistical model (or atlas) based on compact representations of highly redundant 4D data. We will also develop prediction methods for movement and shape change based on this atlas.

Appearance processing and texture synthesis

Our project is to bridge the gap between our activities in the field of appearance reconstruction and our activities in the field of texture synthesis. Our goal is to take digitized objects as examples and automatically map similar appearances onto other 3D models. We have also started to work on dynamic textures for animated objects, in collaboration with Marie-Paule Cani (team INRIA IMAGINE). Multiple technological barriers must be overcome to reach these goals.

Regarding the appearance of digitized objects, we want to go beyond visualizing digitized objects in their original lighting environments. The next step is to enable visualization in arbitrary lighting environments. This requires estimating the acquisition environment as well as classifying the intrinsic properties of the materials (diffuse and specular colors, roughness, ambient occlusion, etc.). Materials have to be represented by multiple texture layers compatible with commercially available rendering engines. These problems are being addressed by Alexandre Ribard in his PhD thesis, started on Nov. 1st, 2015, advised by Jean-Michel Dischler and co-supervised by Rémi Allègre.
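To illustrate why such intrinsic layers enable arbitrary relighting, a generic Blinn-Phong evaluation (assumed here purely for illustration; it is not the estimation method developed in the thesis) recombines per-texel diffuse and specular layers under any new light:

```python
import numpy as np

def shade(normal, view, light, kd, ks, roughness, light_color):
    """Re-evaluate one texel under a new light, given intrinsic layers:
    diffuse color kd, specular color ks, and a roughness that controls
    the Blinn-Phong exponent. All direction vectors point away from the
    surface; relighting = re-running this per texel and per light."""
    n = np.asarray(normal) / np.linalg.norm(normal)
    v = np.asarray(view) / np.linalg.norm(view)
    l = np.asarray(light) / np.linalg.norm(light)
    h = l + v
    h /= np.linalg.norm(h)                      # half vector
    # Map roughness in (0, 1] to a Blinn-Phong exponent (a common heuristic).
    shininess = max(1.0, 2.0 / max(roughness ** 2, 1e-4) - 2.0)
    diff = max(float(np.dot(n, l)), 0.0)
    spec = max(float(np.dot(n, h)), 0.0) ** shininess
    return np.asarray(light_color) * (np.asarray(kd) * diff + np.asarray(ks) * spec)
```

Once the acquisition environment has been estimated and factored out, storing kd, ks, and roughness as texture layers is exactly what makes the digitized object portable to commercial rendering engines.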

Regarding the representation of appearance, we also want to develop methods to synthesize the appearance of materials from exemplars or following a procedural approach. We are able to synthesize color textures, but the joint synthesis of the other properties contributing to appearance (e.g. specular color) raises new challenges. In addition, our synthesis methods currently produce the same appearance over the whole texture. We want to generate spatially varying appearances, depending on the location in the texture and on the surface of a 3D model.

Regarding texture synthesis, we plan to introduce temporal variations: the generation and control of dynamic textures on deformable, animated surfaces. Animation in 3D virtual scenes can be carried out through the motion and deformation of geometric objects or through time-varying textures. Most previous work on texture synthesis addressed only one of these two aspects. For some natural phenomena, however, both the appearance (encoded in the texture) and the geometry should be dynamic: lava flows, ocean waves, and avalanches belong to this category. These problems are being addressed by Geoffrey Guingo in his PhD thesis, started on Oct. 1st, 2015, co-advised by Jean-Michel Dischler and Marie-Paule Cani, and co-supervised by Basile Sauvage.

Texture analysis and synthesis platform

We started to develop a new collaborative platform dedicated to the analysis and synthesis of textures, built upon the ITK library (Insight Segmentation and Registration Toolkit). This platform will help us capitalize on our developments. We also intend to make the code accompanying our publications publicly available through this platform.