Team IGG : Computer Graphics and Geometry

Appearance and Movement



Review of 2011 to mid-2016 and outlook for the Appearance and Movement theme

Context (to be moved to the Structure page)

The creation of 3D virtual worlds finds applications in various fields, both technical (3D simulation, virtual prototyping, 3D catalogs and archiving, etc.) and artistic (e.g. the creation of 3D content for entertainment or educational purposes). The target size and level of detail of virtual worlds have increased tremendously over the past few years, making it possible to produce highly realistic synthetic images. Enhanced visual realism is obtained by defining detailed 3D models (shape and movement) in conjunction with sophisticated appearances. Advances in technologies for shape and movement acquisition (3D scanners) have led to considerable improvements in both the quality and the quantity of the 3D models produced. However, major challenges remain in this field, related to the huge size of the models and to the transfer of acquired movements from one model to another.

The appearance of 3D models is commonly defined by a set of "textures": 2D images mapped onto the surfaces of 3D models, much like wallpaper. Textures enrich 3D surfaces with small-scale details: wood grain, the bricks of a façade, or blades of grass. With the increasing resolution of display devices, the creation of very-high-definition textures, i.e. with resolutions beyond hundreds of megapixels, has become a necessity. However, creating textures of this size with standard per-pixel editing tools is a tedious task for computer graphics artists, which increases production costs. The use of acquisition techniques is also limited, because the conditions required to capture real materials are very restrictive: controlled lighting, object accessibility, etc.
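The mapping step itself can be illustrated in a few lines. The sketch below bilinearly samples an image at (u, v) surface coordinates, the core operation behind applying a texture "like wallpaper". It is a minimal, hypothetical illustration (function name and the tiny 2×2 texture are made up for the example), not code from any of the team's systems; real renderers add wrapping modes, mipmapping, and filtering options.

```python
import numpy as np

def sample_texture(texture, u, v):
    """Bilinearly sample an (H, W, 3) texture at UV coordinates in [0, 1]."""
    h, w = texture.shape[:2]
    # Map UV coordinates to continuous pixel coordinates.
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four neighbouring texels.
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bot = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bot

# A 2x2 RGB texture: black, white / red, green.
tex = np.array([[[0., 0., 0.], [1., 1., 1.]],
                [[1., 0., 0.], [0., 1., 0.]]])
print(sample_texture(tex, 0.5, 0.0))  # midpoint of the top row: [0.5 0.5 0.5]
```

Evaluating this lookup per screen pixel is what makes texture resolution matter: once display resolution exceeds texel density, the interpolation blurs and higher-definition textures become necessary.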

Objectives / Challenges

Our first challenge is to devise novel methods for the analysis and segmentation of 3D acquisition data, including 3D scans and motion capture data. Our goal is to offer tools that simplify the creation of 3D models, possibly with movement, as well as the construction of statistical atlases. Our second challenge is to develop methods to produce textures representing the materials of real objects from a set of photographs taken with very few constraints. Our goal is to map these textures onto the surfaces of arbitrary 3D models and under arbitrary lighting conditions for realistic image synthesis applications.

Permanent participants

Other participants

  • 5 doctoral candidates (PhD students): Geoffrey Guingo, doctoral candidate on a fixed-term contract since 1 Oct. 2015 (ERC of Marie-Paule Cani and Allegorithmic contract); Alexandre Ribard, doctoral candidate under contract since 1 Oct. 2015 (ministry grant); Vasyl Mykhalchuk, from 11/2011 to 04/2015; Guoliang Luo, from 10/2011 to 12/2014; Kenneth Vanhoey, from 10/2010 to 09/2013 (thesis defended on 18/02/2014).

Publications

[2-SKCC13] Seo H., Kim S., Cordier F., Choi J. and Hong K., "Estimating Dynamic Skin Tension Lines in Vivo using 3D Scans", Computer-Aided Design (Proc. ACM Symposium on Solid and Physical Modeling 2012), Elsevier.

[2-LCS13] G. Luo, F. Cordier, H. Seo. Compression of 3D Animation Sequences by Temporal Segmentation, Computer Animation and Virtual Worlds, Wiley-Blackwell, Vol. 24(3-4):365--375, May 2013.

[4-LCS14] G. Luo, F. Cordier, H. Seo. Similarity of Deforming Meshes Based on Spatio-temporal Segmentation, in Eurographics Workshop on 3D Object Retrieval, pp. 77--84, B. Bustos, H. Tabia, J.-P. Vandeborre, and R. Veltkamp (Eds.), Strasbourg, France, March 2014.

[4-LSC14] G. Luo, H. Seo, F. Cordier. Temporal Segmentation of Deforming Meshes, Computer Graphics International, Sydney, Australia, June 2014.

[8-Luog14] G. Luo. Segmentation de Maillages Dynamiques et Son Application pour le Calcul de Similarité, 2014.

[4-LLS15] G. Luo, G. Lei, H. Seo. Joint Entropy based Key-frame Extraction for 3D Animations, Computer Graphics International, Strasbourg, France, June 2015.

[2-LCS16] G. Luo, F. Cordier, H. Seo. Spatio-temporal Segmentation for the Similarity Measurement of Deforming Meshes, Visual Computer, Springer Verlag, Vol. 32(2):243--256, February 2016.

[2-MSC15] V. Mykhalchuk, H. Seo, F. Cordier. On spatio-temporal feature point detection for animated meshes, Visual Computer, Springer Verlag, Vol. 31(11):1471--1486, November 2015.

[2-MCS13] V. Mykhalchuk, F. Cordier, H. Seo. Landmark Transfer with Minimal Graph, Computers & Graphics, Elsevier, Vol. 37(5):539--552, August 2013.

[4-MSC14] V. Mykhalchuk, H. Seo, F. Cordier. AniM-DoG: A Spatio-Temporal Feature Point Detector for Animated Mesh, in Computer Graphics International, Computer Graphics Society (Eds.), Sydney, Australia, June 2014.

[8-Mykh15] V. Mykhalchuk. Feature-based matching of animated meshes, 2015.

[1-Seoh15] H. Seo. Personalized body modeling, in Context Aware Human-Robot and Human-Agent Interaction, Magnenat-Thalmann N., Yuan J., Thalmann D., You B.-J. (Eds.), Chapter 4, pages 113--132, Springer, November 2015.

Results

We have developed a set of new techniques for the geometric processing of deforming surfaces. Our correspondence computation method makes use of the kinematic properties of the deforming surfaces so as to put similar deformation behaviors in correspondence. During the computation we extract a set of spatio-temporal dynamic feature points on each surface, also based on the deformation behavior. We have also developed a similarity metric for animated meshes, based on their spatio-temporal segmentations. To the best of our knowledge, our techniques are the first to formulate the problems of segmentation, similarity measurement, feature extraction, and correspondence computation on time-varying surface data. This has been confirmed by the relevant scientific community, which has started to pay attention to our recent results. Additionally, the facial data we have captured are expensive and rare, since their acquisition requires not only a high-cost motion-capture device and expert operators but also multiple subjects. We expect these data to be shared and cited widely by other researchers.
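As a rough illustration of one of the problems above, the sketch below flags temporal feature frames of an animated mesh as local maxima of the mean frame-to-frame vertex displacement. This is a deliberately simplified stand-in (the function name, window size, and synthetic trajectory are made up for the example); the published detectors operate on much richer spatio-temporal descriptors and per-vertex, multi-scale analysis.

```python
import numpy as np

def temporal_feature_frames(trajectories, window=2):
    """Flag frames where overall surface deformation peaks.

    trajectories: (F, V, 3) array of V vertex positions over F frames.
    Returns frame indices that are local maxima of the mean
    frame-to-frame displacement magnitude.
    """
    # Per-frame deformation signal: mean vertex displacement between frames.
    disp = np.linalg.norm(np.diff(trajectories, axis=0), axis=2).mean(axis=1)
    peaks = []
    for t in range(len(disp)):
        lo, hi = max(0, t - window), min(len(disp), t + window + 1)
        if disp[t] > 0 and disp[t] == disp[lo:hi].max():
            peaks.append(t + 1)  # +1: disp[t] is the motion into frame t+1
    return peaks

# Synthetic mesh of 4 vertices over 6 frames, static except a jump into frame 3.
traj = np.zeros((6, 4, 3))
traj[3:] += 1.0
print(temporal_feature_frames(traj))  # → [3]
```

The same per-frame deformation signal can also serve as a starting point for temporal segmentation: cutting the animation at such peaks yields segments of comparatively homogeneous motion.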

Perspectives

Building ground truth for dynamic features and correspondences on deforming meshes remains an open challenge. We are currently building ground-truth data on static meshes, in collaboration with Xlim and LIRIS, using an eye tracker. Once this ground truth has been established, we can aim at higher goals, such as the validation of computational approaches to feature extraction. In the near future, we plan to pursue the construction of the statistical atlas and the revision of the registration using that atlas.