

Appearance and Movement

Report covering 2011 to mid-2016 and outlook for the Appearance and Movement theme

Context

The creation of 3D virtual worlds finds applications in various fields, both technical (3D simulation, virtual prototyping, 3D catalogs and archiving, etc.) and artistic (e.g. creation of 3D content for entertainment or educational purposes). The target size and level of detail of virtual worlds have increased tremendously over the past few years, making it possible to produce highly realistic synthetic images. Enhanced visual realism is obtained by defining detailed 3D models, including shape and movement, in conjunction with sophisticated appearances. Advances in technologies for shape and movement acquisition based on 3D scanners have led to considerable improvements in both the quality and the quantity of produced 3D models. However, major challenges remain in this field, related to the huge size of the models and to the transfer of acquired movements from one model to another.

The appearance of 3D models is commonly defined by a set of "textures", i.e. 2D images mapped onto the surfaces of 3D models much like wallpaper. Textures enrich 3D surfaces with small-scale details: wood grain, bricks on a façade, or grass blades. With the increasing resolution of display devices, the creation of very high definition textures, i.e. with resolutions beyond hundreds of megapixels, has become a necessity. However, creating textures of this size with standard per-pixel editing tools is a tedious task for computer graphics artists, which increases production costs. The use of acquisition techniques is also limited, because the conditions required to acquire real materials are very restrictive: controlled lighting, object accessibility, etc.
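
To make the notion of texture mapping concrete, the following minimal sketch (ours, independent of any tool cited in this report) shows how a color is looked up in a 2D texture at a given UV coordinate using bilinear filtering; the texture array and the UV parametrization are assumed inputs.

    import numpy as np

    def sample_texture(texture, u, v):
        """Bilinearly sample an H x W x 3 texture at UV coordinates in [0, 1]."""
        h, w = texture.shape[:2]
        # Map UV coordinates to continuous pixel coordinates.
        x = u * (w - 1)
        y = v * (h - 1)
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        # Blend the four neighbouring texels.
        top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
        bottom = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
        return (1 - fy) * top + fy * bottom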

Objectives / Challenges

Our first challenge is to devise novel methods for the analysis and segmentation of 3D acquisition data, including 3D scans and motion capture data. Our goal is to offer tools that simplify the creation of 3D models, possibly with movement, as well as the construction of statistical atlases. Our second challenge is to develop methods that produce textures "by example", meaning that the input data are texture samples extracted from images. To reach this goal, image analysis tools are needed to classify and extract textures at different scales. We also want to represent the materials of real objects with textures generated from a set of photographs taken under very few constraints. Our goal is to map these textures onto the surfaces of arbitrary 3D models and to render them under arbitrary lighting conditions for realistic image synthesis applications.

Permanent participants

  • One Professor: Jean-Michel Dischler
  • One Research scientist ("Chargée de recherche"): Hyewon Seo
  • Five Associate Professors ("Maîtres de Conférences"): Rémi Allègre, Karim Chibout, Frédéric Cordier, Arash Habibi, Basile Sauvage
  • Three Research engineers: Frédéric Larue (2011-), Olivier Génevaux (2011-2015), Sylvain Thery (2015-)
  • Five Doctoral candidates (PhD students): Geoffrey Guingo (fixed-term contract starting Oct. 2015, ERC of Marie-Paule Cani and Allegorithmic contract), Guoliang Luo (Project SHARED contract from Oct. 2011 to Dec. 2014; PhD thesis defended on Nov. 4th, 2014), Vasyl Mykhalchuk (Project SHARED contract from Nov. 2011 to Apr. 2015; PhD thesis defended on April 9th, 2015), Alexandre Ribard (UNISTRA doctoral contract starting Nov. 2015), Kenneth Vanhoey (UNISTRA doctoral contract from Oct. 2010 to Sept. 2013; PhD thesis defended on Feb. 18th, 2014).

Results

Digitization platform

We develop the ExRealis digitization platform. ExRealis offers a range of devices and software tools covering the whole digitization pipeline, from data acquisition to the creation of textured 3D models, including tools for texture reconstruction and for visualization that accounts for complex lighting environments and material characteristics. Our software tools are based on out-of-core data structures and mechanisms that allow the management of very large data sets: millions of sample points and hundreds of photographs for a single object. The platform covers our needs in terms of acquisition and data processing, and also makes it possible to capitalize on the developments achieved through our scientific production. ExRealis also includes a motion capture system for the animation of avatars in virtual reality or biomechanical applications. Frédéric Larue presented a tutorial on appearance acquisition, including a presentation of the ExRealis platform, at the AC3D days of GDR ISIS in May 2015.
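
As an illustration of the out-of-core principle mentioned above, and not of the actual ExRealis code, the sketch below streams a point cloud from disk in fixed-size chunks so that only a bounded amount of memory is resident at any time; the file layout (raw float32 xyz triples) is an assumption made for the example.

    import numpy as np

    def stream_point_chunks(path, chunk_points=1_000_000):
        """Yield successive chunks of an xyz point cloud stored as raw float32 triples."""
        floats_per_chunk = chunk_points * 3
        with open(path, "rb") as f:
            while True:
                data = np.fromfile(f, dtype=np.float32, count=floats_per_chunk)
                if data.size == 0:
                    break
                yield data.reshape(-1, 3)

    def bounding_box(path):
        """Compute the bounding box of a huge point cloud without loading it entirely."""
        lo = np.full(3, np.inf)
        hi = np.full(3, -np.inf)
        for chunk in stream_point_chunks(path):
            lo = np.minimum(lo, chunk.min(axis=0))
            hi = np.maximum(hi, chunk.max(axis=0))
        return lo, hi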

Shape analysis, registration, and segmentation of movement data

Recent advances in imaging technologies have made it increasingly easy to capture the shape and motion of people or organs, either with optical motion capture systems or with medical imaging scanners. Going beyond existing methods that rely on static shape information for shape analysis [2-MCS13], we have developed novel shape analysis methods that exploit motion information [2-SKCC13]. Our techniques are the first to formulate the problems of segmentation [4-LSC14, 4-LLS15], feature extraction [4-MSC14, 2-MSC15], similarity computation [4-LCS14, 8-Luog14, 2-LCS16], and correspondence computation [8-Mykh15] on time-varying surface data.
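
As a simple illustration of how motion information can drive segmentation of time-varying surface data (this is not the algorithm of the cited papers), the sketch below derives a per-vertex motion descriptor from vertex trajectories and groups vertices with a one-dimensional k-means; the input array layout is an assumption.

    import numpy as np

    def motion_descriptor(vertex_positions):
        """Per-vertex path length over a sequence.
        vertex_positions: array of shape (num_frames, num_vertices, 3)."""
        steps = np.diff(vertex_positions, axis=0)          # frame-to-frame displacement
        return np.linalg.norm(steps, axis=2).sum(axis=0)   # total displacement per vertex

    def segment_by_motion(vertex_positions, num_labels=2, iters=20):
        """Group vertices into motion-based regions with a tiny 1-D k-means."""
        d = motion_descriptor(vertex_positions)
        centers = np.linspace(d.min(), d.max(), num_labels)
        for _ in range(iters):
            labels = np.argmin(np.abs(d[:, None] - centers[None, :]), axis=1)
            for k in range(num_labels):
                if np.any(labels == k):
                    centers[k] = d[labels == k].mean()
        return labels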

Appearance reconstruction

2D parametric color functions are widely used in image-based rendering and image relighting. They express the color of a point as a function of a continuous directional parameter: the viewing or incident light direction. Producing such functions from acquired data (photographs) is promising but difficult: acquiring data in a dense and uniform fashion is not always possible, while a simpler acquisition process results in sparse, scattered, and noisy data. We have developed a method for reconstructing radiance functions for digitized objects from photographs [2-VSGL13]. This method makes it possible to visualize digitized objects in their original lighting environments. We have also developed a method for simplifying meshes with attached radiance functions [2-VSKL15]. Our method makes it possible to produce very compact models of digitized objects, with both geometry and appearance.
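
To illustrate the idea of fitting a directional color function to scattered, noisy samples (a deliberately simplified stand-in for the reconstruction method of [2-VSGL13]), the sketch below fits a low-order polynomial basis over viewing directions per color channel with linear least squares; the basis choice and data layout are assumptions.

    import numpy as np

    def direction_basis(dirs):
        """Low-order polynomial basis evaluated at unit view directions (N x 3)."""
        x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
        return np.stack([np.ones_like(x), x, y, z, x * y, y * z, x * z,
                         x * x, y * y, z * z], axis=1)

    def fit_color_function(dirs, colors):
        """Least-squares fit of a per-channel directional color function.
        dirs: N x 3 unit directions, colors: N x 3 RGB samples (scattered, noisy)."""
        basis = direction_basis(dirs)
        coeffs, *_ = np.linalg.lstsq(basis, colors, rcond=None)
        return coeffs  # 10 x 3 coefficient matrix

    def evaluate_color(coeffs, dirs):
        """Evaluate the fitted function for new viewing directions."""
        return direction_basis(np.atleast_2d(dirs)) @ coeffs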

Texture modeling and synthesis

Textures are crucial to enhance the visual realism of virtual worlds. To alleviate the workload of artists who have to design gigantic virtual worlds, we have developed methods that automatically generate high-resolution textures from a single input exemplar, with control over pattern preservation as well as over content randomness and variety in the resulting texture [2-GDG12, 2-GDG12a, 2-VSLD13, 2-GSVD14]. Two of these papers [2-VSLD13, 2-GSVD14] were published in ACM Transactions on Graphics (IF: 3.725), the leading journal in computer graphics. Thanks to these publications, Jean-Michel Dischler was invited to the Dagstuhl Seminar on Real-World Visual Computing (Leibniz-Zentrum für Informatik) in Oct. 2013. Through a collaboration started in Jan. 2015 with Yale University's Computer Graphics Group (Yitzchak Lockerman, PhD candidate, and Holly Rushmeier, Professor), a multi-scale tool for classifying and extracting textures in images has been developed. This collaboration has led to an article [2-LSAD16] to be published in July 2016 in the proceedings of SIGGRAPH 2016, the most renowned conference in the computer graphics field; these proceedings are published as an issue of ACM Transactions on Graphics.
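
The following deliberately naive sketch conveys the "by example" idea only: it tiles an output image with patches copied at random from a single exemplar, with none of the pattern preservation or randomness control of the published methods; the patch size and exemplar layout are assumptions.

    import numpy as np

    def synthesize_by_example(exemplar, out_h, out_w, patch=32, seed=0):
        """Naive by-example synthesis: tile the output with patches copied at random
        from the exemplar (no blending or neighbourhood matching).
        The exemplar is assumed to be at least patch x patch pixels."""
        rng = np.random.default_rng(seed)
        h, w = exemplar.shape[:2]
        out = np.zeros((out_h, out_w) + exemplar.shape[2:], dtype=exemplar.dtype)
        for y in range(0, out_h, patch):
            for x in range(0, out_w, patch):
                sy = rng.integers(0, h - patch + 1)   # random source patch origin
                sx = rng.integers(0, w - patch + 1)
                ph = min(patch, out_h - y)            # clamp at the output borders
                pw = min(patch, out_w - x)
                out[y:y + ph, x:x + pw] = exemplar[sy:sy + ph, sx:sx + pw]
        return out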

Perspectives