Team IGG : Computer Graphics and Geometry

Difference between revisions of "Gallery"

= Constraints and Proofs =
  
<gallery mode="packed-hover" heights="200px">
Image:Desargues2d.png | '''Specifications and proofs'''
Image:Desargues 3D plan.png | '''Specifications and proofs:''' 3D illustration of Desargues' theorem in a 3D projective space.
Image:Jfd_hmap1.png | '''Specifications and proofs'''
Image:Jfd_hmap1Torus.png | '''Specifications and proofs'''
Image:Jfd_hmap2.png | '''Specifications and proofs'''
Image:Jfd_polyhedra1.png | '''Specifications and proofs'''
Image:Jfd_nf.png | '''Specifications and proofs'''
Image:Jfd_seg_subd1seg.png | '''Certification of an image segmentation modelled with colored hypermaps:''' A colored subdivision of a plane and its segmentation.
Image:Jfd_seg_hmap2.png | '''Certification of an image segmentation modelled with colored hypermaps:''' Model of the previous subdivision with a quasi-hypermap (open vertices and edges).
Image:Jfd_seg_hmap3.png | '''Certification of an image segmentation modelled with colored hypermaps:''' Model of the previous subdivision with a quasi-hypermap (closed vertices and edges).
Image:Jfd_seg_hmap2seg.png | '''Certification of an image segmentation modelled with colored hypermaps:''' Segmentation of the quasi-hypermap into 2 phases (imperfections remain).
Image:Jfd_seg_hmap3seg.png | '''Certification of an image segmentation modelled with colored hypermaps:''' Segmentation of the quasi-hypermap into 2 phases (the result is correct).
Image:Desargues.png | '''Desargues' theorem'''
Image:Wernick108RC.png | '''Construction of solutions:''' Geometrical translation of an algebraic solution.
Image:W108Geo.png | '''Construction of solutions:''' Direct construction.
Image:Caro coloration aiguille sanspeau.jpg | '''Planning surgical operations:''' Automated computation of the optimal needle location, with display of the liver and skin.
Image:Caro 4 deforms.jpg | '''Planning surgical operations:''' Computing the deformation of the injured zone near blood vessels.
Image:Caro phantom small.jpg | '''Planning surgical operations:''' Force feedback for realistic simulation of the surgical gesture in radiofrequency application.
Image:Caro zonesoptimisationdec2006 detoure.jpg | '''Planning surgical operations:''' Candidate area for radiofrequency needle insertion, with areas colored according to soft constraints.
Image:Caro 2 sub.jpg | '''Planning surgical operations:''' Subdivision of triangles at the border of candidate areas for radiofrequency needle insertion.
Image:Caro 4contraintes-inter.jpg | '''Planning surgical operations:''' Visualization of 4 strict constraints as insertion zones, and their intersection, in the radiofrequency application.
Image:Caro fusionCsouples.jpg | '''Planning surgical operations:''' Fusion of 3 soft constraints represented as colorings of candidate areas for radiofrequency needle insertion.
Image:WbRadioFreq06.JPG | '''Planning surgical operations:''' The radiofrequency application on a virtual reality station.
Image:Liver_deformation2.png | '''Planning surgical operations:''' Inclusion of biomechanical simulations in the optimization loop.
Image:SnapIPCAI3.png | '''Planning surgical operations:''' Resolution of positioning constraints in deep brain stimulation.
Image:Iceballs.png | '''Planning surgical operations:''' Computation of thermal propagation in cryoablation.
Image:Pareto2.png | '''Planning surgical operations:''' Possible insertion points on a Pareto front.
Image:Caro SCP.jpg | '''Planning surgical operations'''
Image:Cg_moteur_anim.png | '''Geometrical constructions'''
Image:Cg_plan_tangent.jpg | '''Geometrical constructions'''
Image:Cg_lampe.jpg | '''Geometrical constructions'''
</gallery>
  
= Modeling and Interaction =
 
  
<gallery mode="packed-hover" heights="200px">
Image:pan_2021_multi_scale_space_time_registration_of_growing_plants_00.png | '''Multi-scale Space-time Registration of Growing Plants:''' Result of our framework on one tomato plant (left) and one maize plant (right): corresponding points share the same colour. Some landmarks have been sampled and are shown connected to their corresponding points. [https://icube-publis.unistra.fr/docs/15504/3dv_paper.pdf [3DV-2021]]
Image:pan_2021_multi_scale_space_time_registration_of_growing_plants_01.png | '''Multi-scale Space-time Registration of Growing Plants:''' Point-wise matching of the whole plant as it grows. [https://icube-publis.unistra.fr/docs/15504/3dv_paper.pdf [3DV-2021]]
Image:viville_2021_hexahedral_mesh_generation_for_tubular_shapes_using_skeletons_and_connection_surfaces_00.png | '''Tubular shape scaffolding:''' Hexahedral reconstruction of the metatron mesh using the tubular shape scaffolding algorithm. [https://icube-publis.unistra.fr/4-VKB21 [VISIGRAPP-2021]]
Image:viville_2021_hexahedral_mesh_generation_for_tubular_shapes_using_skeletons_and_connection_surfaces_01.png | '''Tubular shape scaffolding:''' Using the red scaffold as a support, a hexahedral mesh is produced, then optimized and fitted to the input surface. [https://icube-publis.unistra.fr/4-VKB21 [VISIGRAPP-2021]]
Image:viville_2021_hexahedral_mesh_generation_for_tubular_shapes_using_skeletons_and_connection_surfaces_02.png | '''Hex Meshes:''' Construction of hexahedral meshes (yellow) from an input surface (blue) and its curve skeleton using the scaffold method. [https://icube-publis.unistra.fr/4-VKB21 [VISIGRAPP-2021]]
Image:bobenrieth_2020_descriptive_interactive_3d_shape_modeling_from_a_single_descriptive_sketch.png | '''Descriptive: Interactive 3D Shape Modeling from a Single Descriptive Sketch:''' The constraints specified by the user are shown in red for positional constraints, green for corner points, and green–red for both. [https://icube-publis.unistra.fr/docs/14544/Descriptive.pdf [CAD-2020]]
Image:yang_2018_analyzing_clothing_layer_deformation_statistics_of_3d_human_motion.png | '''Analyzing Clothing Layer Deformation Statistics of 3D Human Motions:''' Comparison to ground truth. First row: our predicted clothing deformation. Second row: ground truth colored with per-vertex error. Blue: 0 cm; red: 10 cm. [https://icube-publis.unistra.fr/docs/13307/yang2018analyzing.pdf [ECCV-2018]]
Image:bobenrieth_2018_reconstructiong_flowers_from_sketches_00.png | '''Reconstructing Flowers from Sketches:''' Reconstruction of a flower model from a sketch: input sketch (a); guide strokes provided by the user (b); sketch segmented into petals and other botanical elements (c); reconstructed model (d and e). [https://icube-publis.unistra.fr/docs/13320/Reconstructing%20Flowers%20from%20Sketches.pdf [PG-2018]]
Image:bobenrieth_2018_reconstructiong_flowers_from_sketches_01.png | '''Reconstructing Flowers from Sketches:''' Results obtained with our modeler. [https://icube-publis.unistra.fr/docs/13320/Reconstructing%20Flowers%20from%20Sketches.pdf [PG-2018]]
Image:bobenrieth_2018_reconstructiong_flowers_from_sketches_02.png | '''Reconstructing Flowers from Sketches:''' More flower model examples from our modeler. [https://icube-publis.unistra.fr/docs/13320/Reconstructing%20Flowers%20from%20Sketches.pdf [PG-2018]]
Image:bobenrieth_2017_indoor_scene_reconstruction_from_a_sparse_set_of_3d_shots.png | '''Indoor Scene Reconstruction from a Sparse Set of 3D Shots:''' Three shots containing no overlapping area are given as input (top row). The second row shows two different results produced by our method. [https://icube-publis.unistra.fr/docs/12967/IndoorSceneReconstruction.pdf [CGI-2017]]
Image:paulus_2016_handling_topological_changes_during_elastic_registration.png | '''Handling Topological Changes during Elastic Registration:''' Augmented reality on cut and deformed kidneys 1 (top) and 2 (bottom), overlaid with the virtual organ: initial registration (left); final registrations uncut (middle left) and cut (middle right); reference registration (right). [https://hal.inria.fr/hal-01397409/document [IJCARS-2016]]
Image:chen_2016_mesh_sequence_morphing.png | '''Mesh Sequence Morphing:''' A galloping Camel gradually changes into a galloping Dino. The Dino sequence (indicated in blue) is obtained by transferring the deformations of the Camel to the compatibly remeshed Dino in the rest pose. [https://icube-publis.unistra.fr/docs/9060/CGF2016Vol35Number1pp179-190.pdf [CGF-2016]]
Image:lung_mesh.png | '''Animation of bronchi:''' Construction of a hexahedral volume mesh of a bronchial model.
Image:lung_anim.png | '''Animation of bronchi:''' Breathing cycle on a volumetric mesh of bronchi.
Image:OpenStreetMapImport.png | '''Crowd simulation:''' Geographic data imported from OpenStreetMap.
Image:CrowdOnFloor.png | '''Crowd simulation:''' Buildings composed of floors.
Image:CrowdOnPlanet.png | '''Crowd simulation:''' Crowd on a virtual planet.
Image:CrowdOnKnoth.png | '''Crowd simulation:''' Support for extreme topologies.
Image:CPH_2D.png | '''Adaptive and multi-resolution volume model:''' Labeling of cells to define an implicit hierarchy.
Image:Volume_subdivision.png | '''Adaptive and multi-resolution volume model:''' Volume subdivision of a three-hole torus modeled by a set of polyhedra.
Image:Pierre_cube_catmullclark.jpg | '''Multi-resolution modeling:''' Use of MR-Maps in a multi-resolution editing tool with multi-resolution subdivision surfaces generated with the Catmull-Clark scheme (square mesh).
Image:Pierre_horse_quadtriangle.jpg | '''Multi-resolution modeling:''' Use of MR-Maps in a multi-resolution editing tool with multi-resolution subdivision surfaces generated with the Quad-Triangle scheme (hybrid triangle-square mesh).
Image:Pierre_bunny_sqrt3.jpg | '''Multi-resolution modeling:''' Use of MR-Maps in a multi-resolution editing tool with multi-resolution subdivision surfaces generated with the Sqrt(3) scheme (Kobbelt scheme).
Image:Dobrina_lapins.jpg | '''Mesh reconstruction from voxel images:''' Three key steps of the Discrete Delaunay reconstruction algorithm: (a) discrete approximation of Voronoi regions on the discrete boundary; (b) discrete Voronoi graph; (c) reconstruction by duality: Euclidean Voronoi graph (in black).
Image:Dobrina_dragons.jpg | '''Mesh reconstruction from voxel images:''' Surface reconstruction of the famous dragon object at three different resolutions.
Image:Dobrina_simultane.jpg | '''Mesh reconstruction from voxel images:''' Simultaneous reconstruction of the liver and the right kidney. The common border is contained in both meshes.
Image:Dobrina_bones-aorte.jpg | '''Mesh reconstruction from voxel images:''' Example of a skeleton and aorta reconstruction.
Image:Schwartz1.jpg | '''Detection and characterization of pockets in proteins:''' A transparent polygon highlights the main pocket of the ligand binding domain of the vitamin D receptor. A ligand modeled by a union of spheres can be distinguished.
Image:Schwartz3.jpg | '''Detection and characterization of pockets in proteins:''' Same protein and same pocket, without transparency this time. The green facets represent a blocked passage between two neighboring pockets. This information is important for the biologist.
Image:Vaiss_0.png | '''Modeling of vessels'''
Image:Vaiss_1.jpg | '''Modeling of vessels:''' Reconstructed liver vascular network (topology).
Image:Vaiss_2.jpg | '''Modeling of vessels:''' Reconstructed vascular network (topology).
Image:FilArianne.jpg | '''Navigation in vessels:''' Path in a vessel.
Image:BV-Rainbow.jpg | '''Navigation in vessels:''' Vessel network.
Image:Capture-NavigWIM.jpg | '''Navigation in vessels:''' Help with navigation.
Image:Visu-In-Uni.jpg | '''Navigation in vessels:''' Visualization of the interior of the vessels.
Image:Multifil_fleur.jpg | '''Topology and modeling (Multifil)'''
Image:Multifil_lotus7.jpg | '''Topology and modeling (Multifil)'''
Image:Multifil_telephone.jpg | '''Topology and modeling (Multifil)'''
Image:Multifil_cuisine.jpg | '''Topology and modeling (Multifil)'''
Image:Dogme corne.jpg | '''Deformation models:''' Dogme.
Image:Dogmerv.jpg | '''Deformation models:''' Dogme in VR.
Image:Dinosaure.png | '''Deformation models:''' CFFD.
Image:ISMAR2015_teaser.png | '''Simulation of cuts and tears in real time:''' Augmented view of an elastic object undergoing large deformations and topological changes.
Image:Liver-AR-01.png | '''Simulation of cuts and tears in real time:''' Augmented view of an operating sequence in a medical environment.
Image:ARinOR.png | '''Simulation of cuts and tears in real time:''' Augmented view of an operating sequence in a medical environment.
Image:Img_featurepoints.png | '''Shape analysis, registration and segmentation of dynamic data:''' Feature point extraction using our AniM-DoG technique on different animated mesh poses. The color represents the temporal scale of the feature points; the radius indicates the spatial scale.
Image:Img_spatialmatching.png | '''Shape analysis, registration and segmentation of dynamic data:''' For a pair of animated meshes with semantically similar movements, we compute a set of minutiae on each mesh and their spatial correspondences.
Image:scans_ensas_out.png | '''Medium-scale building acquisition:''' The point cloud from the digitization of part of the ENSAS, colored by intensity and by photographic color correction.
Image:scans_ensas_in.png | '''Medium-scale building acquisition:''' The point cloud from the digitization of part of the ENSAS, visualized from three different viewpoints.
Image:BunnyRender.jpg | '''Small-scale object acquisition by hand-held scanner:''' A Stanford bunny was 3D printed (a few cm high) and then acquired with a hand-held structured-light scanner. Here we see the geometric mesh and its textured version.
Image:scans_coques_minces.jpg | '''Acquisition of the fine geometry of transparent objects:''' Talc was deposited on thin transparent shells before scanning them, for the evaluation of physical deformation models.
Image:Fred_lfacquisition01.jpg | '''Geometry + color acquisition and visualization:''' Left: photograph of the original object. Middle: the geometry reconstructed from 5 acquisitions. Right: rendering of a synthetic copy.
Image:Fred_lfacquisition02.jpg | '''Geometry + color acquisition and visualization:''' Left: photograph of the original object. Right: two views synthesized from its digital copy.
Image:Fournier1.jpg | '''Reconstruction of scanned objects:''' Adaptation of Marching Cubes and a new Marching Squares triangulation for the vector distance transform.
Image:Fournier2.jpg | '''Reconstruction of scanned objects:''' Mesh fusion in the vector distance transform domain.
Image:Fournier3.jpg | '''Reconstruction of scanned objects:''' Adaptive mesh filtering in the vector distance transform domain.
Image:FortBLAOverview1.png | '''Scanning of buildings:''' Fort Bois l'Abbé (terrestrial LiDAR).
Image:FortBLAOverview2.png | '''Scanning of buildings:''' Fort Bois l'Abbé (terrestrial LiDAR).
Image:FortBLAOverview3.png | '''Scanning of buildings:''' Fort Bois l'Abbé (terrestrial LiDAR).
Image:FortBLACloseup1.png | '''Scanning of buildings:''' Fort Bois l'Abbé (terrestrial LiDAR).
Image:9N5A1847-small.jpg | '''Scanning device:''' Terrestrial LiDAR.
Image:9N5A1852-small.jpg | '''Scanning device:''' Terrestrial LiDAR.
Image:boustila_2020_interactions_with_a_hybrid_map_for_navigation_information_visualization_in_virtual_reality.png | '''Interactions with a Hybrid Map for Navigation in Virtual Reality:''' (A, B) interactions using the Oculus hand controller; (C, D) interactions using the smartphone. The red circle represents the fingertip position in the VE. [https://icube-publis.unistra.fr/docs/14832/ISS_Demo_2020.pdf [ACM-ISS-2020]]
Image:casarin_2018_umid3D_a_unity_toolbox_to_support_cscw_system_properties_in_generic_3d_user_interfaces.png | '''UMI3D: A Unity3D Toolbox to Support CSCW Systems Properties in Generic 3D User Interfaces:''' UMI3D case study, social interactions. User 2 (on the left) waits for the character controlled by User 1 (on the right) to cross the road before continuing to drive the red car. [https://icube-publis.unistra.fr/docs/14985/cscw029-casarinA.pdf [ACM-HCI-2018]]
Image:casarin_2017_a_unified_model_for_interaction_in_3d_environment.png | '''A Unified Model for Interaction in 3D Environment:''' A new model for designing VR, AR, and MR applications independently of any device. [https://icube-publis.unistra.fr/docs/12733/A_Unified_Model_for_Interaction_in_3D_Environment_VRST17.pdf [ACM-VRST-2017]]
Image:boustila_2016_a_hybrid_projection_to_widen_the_vertical_field_of_view_with_large_screens.png | '''A Hybrid Projection to Widen the Vertical Field of View with Large Screens:''' The left image shows a view with a perspective projection; the right image shows an example of the hybrid projection. In the left image, the user cannot see the chair behind him. [https://icube-publis.unistra.fr/docs/9976/3DUI2016.pdf [3DUI-2016]]
Image:SpidarWB.jpg | '''Synthetic reality equipment:''' Spidar and workbench.
Image:Materiel_workbench.jpg | '''Synthetic reality equipment:''' Gloves and wand.
Image:IncaHardware.jpg | '''Synthetic reality equipment:''' INCA.
Image:IncaHandle.jpg | '''Synthetic reality equipment:''' Haptic device effector.
Image:IncaTracking.jpg | '''Synthetic reality equipment:''' Camera for positional tracking.
Image:Rv_terrain.png | '''Terrain editing'''
Image:CCubeTerrainEdit.jpg | '''Terrain editing:''' CCube menu.
Image:9N5A1823-small.jpg | '''Terrain editing'''
Image:9N5A1819-small.jpg | '''Terrain editing'''
Image:Geolo_pilote.jpg | '''Geological applications'''
Image:Geolo_select.jpg | '''Geological applications'''
Image:Geolo_wb.jpg | '''Geological applications'''
Image:WbMultiRes2.jpg | '''Multi-resolution editing in Virtual Reality'''
Image:WbMultiRes3.jpg | '''Multi-resolution editing in Virtual Reality'''
Image:9N5A1803-small.jpg | '''Multi-resolution editing in Virtual Reality'''
Image:9N5A1805-small.jpg | '''Multi-resolution editing in Virtual Reality'''
Image:9N5A1773-small.jpg | '''Visualization of digitized objects in an immersive environment'''
Image:9N5A1770-small.jpg | '''Visualization of digitized objects in an immersive environment'''
Image:9N5A1811-small.jpg | '''Visualization of digitized objects in an immersive environment'''
Image:9N5A1812-small.jpg | '''Visualization of digitized objects in an immersive environment'''
Image:Principle.png | '''Distance perception factors in virtual environments:''' The principle of hybrid projection and the different parameters used to render the scene.
Image:Projection_example_1.png | '''Distance perception factors in virtual environments:''' Left: a view of the scene with an ordinary perspective projection. Right: the same view with the hybrid projection. In the left image, the user does not see the legs of the chair in front of him.
Image:Virtual_visit.png | '''Distance perception factors in virtual environments:''' Example of a virtual visit: different positions during the visit of a furnished house. The navigation path is represented as a green breadcrumb trail.
Image:DLL_splitting.png | '''Separation of degrees of freedom for object manipulation:''' Study of the impact of separating degrees of freedom for interaction in immersive environments.
</gallery>
  
= Appearance and Motion =
<gallery mode="packed-hover" heights="200px">
Image:9N5A1844-small.jpg | '''Scanning equipment:''' Motion capture.
Image:9N5A1853-small.jpg | '''Scanning equipment:''' Motion capture.
Image:9N5A1859-small.jpg | '''Scanning equipment:''' Motion capture.
Image:Chermain2021ImportanceSampling.png | '''Importance Sampling of Glittering BSDFs based on Finite Mixture Distributions:''' A glittering coloured glass sphere with a spatially varying microfacet density. Left: our sampling scheme. Centre: reference. Right: a Gaussian mono-lobe approximation is used for sampling. [http://igg.unistra.fr/People/chermain/importance_sampling_glint/ [EGSR-2021]]
Image:kim_2021_edge_based_procedural_textures.png | '''Edge-based procedural textures:''' Edges of a texture are extracted and encoded into an edge-based procedural texture (EBPT). New textures are generated either automatically or with the user controlling the EBPT generation. [https://www.semanticscholar.org/paper/Edge-based-procedural-textures-Kim-Dischler/55aebfb8e9b1851de1f5e586fd64c5c869b346c8 [VC-2021]]
Image:lutz_2021_cyclostationary_gaussian_noise_theory_and_synthesis.png | '''Cyclostationary Gaussian noise: theory and synthesis:''' We carry existing stationary noises over to a cyclostationary context, enabling the synthesis of cyclostationary textures controlled by spectra (left) and by an exemplar (right). [https://hal.archives-ouvertes.fr/hal-03181139/document [EG-2021]]
Image:Chermain2020ProceduralTeaser.png | '''Procedural Physically Based BRDF for Real-Time Rendering of Glints:''' Left: sparkling fabrics are rendered (3.0 ms/frame). Right: a plane with increasing specular microfacet density. Top: our physically based BRDF (2.5 ms/frame). Bottom: a non-physically-based BRDF [ZK16] (1.3 ms/frame). [http://igg.unistra.fr/People/chermain/real_time_glint/ [PG-2020]]
Image:chermain2020GeometricGlint.png | '''Real-Time Geometric Glint Anti-Aliasing with Normal Map Filtering:''' Arctic landscape with a normal-mapped surface. (a) The glinty BRDF of Chermain et al. [2020], prone to geometric glint aliasing. (b, c) Our geometric glint anti-aliasing (GGAA) without and with normal map filtering. (d) Reference. [http://igg.unistra.fr/People/chermain/glint_anti_aliasing/ [i3D-2020]]
Image:guehl_2020_semi_procedural_textures_using_point_process_texture_basis_functions.png | '''Semi-Procedural Textures Using Point Process Texture Basis Functions:''' Input texture map(s) (a) and a binary structure (b) are used to generate a semi-procedural output (d). A rendered view of the input material is shown for comparison (c). [https://icube-publis.unistra.fr/docs/14547/semiproctex_low.pdf [EGSR-2020]]
Image:paris_2020_modeling_rocky_scenery_using_implicit_blocks.png | '''Modeling Rocky Scenery using Implicit Blocks:''' Different styles of blocks generated on a cliff and arches. From left to right: tabular block style, equidimensional blocks, and rhombohedral block style. [https://hal.archives-ouvertes.fr/hal-02926218/document [VC-2020]]
Image:guingo_2020_content_aware_texture_deformation_with_dynamic_control.png | '''Content-aware texture deformation with dynamic control:''' Our deformation model can mimic non-uniform physical behaviors at texel resolution. Top: the parameterization is advected in a static flow field. Bottom: the deformation can be controlled dynamically. [https://icube-publis.unistra.fr/docs/14601/GLSLDC20.pdf [C&G-2020]]
Image:lutz_2019_anisotropic_filtering_for_on_the_fly_patch_based_texturing.png | '''Anisotropic Filtering for On-the-fly Patch-based Texturing:''' Our filtering method (right) is compared to the ground truth (middle) and to no filtering (left). The ground truth is computed by exact filtering of the high resolution. The leftmost view indicates the MIP-map levels used. [https://seafile.unistra.fr/f/3ae0674a231c42818286/?dl=1 [EG-2019]]
Image:guingo_2017_bi_layer_textures.png | '''Bi-Layer textures:''' Our noise model decomposes an input exemplar into a structure layer and a noise layer. Large outputs are synthesized on the fly by synchronized synthesis of the layers. Variety can be achieved at the synthesis stage by deforming the structure layer while preserving the fine-scale appearance encoded in the noise layer. [https://hal.archives-ouvertes.fr/hal-01528537/document [EGSR-2017]]
Image:lockerman_2016_multi_scale_label_map_extraction_for_texture_synthesis.png | '''Multi-Scale Label-Map Extraction for Texture Synthesis:''' The input non-stationary texture (a). Hierarchy of labeled clusters: the coarse scale (b) includes finer scales (c). Interactive texture editing (d) and content selection for creating new non-stationary textures (e). [https://graphics.cs.yale.edu/sites/default/files/multi-scale_label-map_extraction_for_texture_synthesis_low_rez.pdf [SIGGRAPH-2016]]
Image:LSADDR16_Labeling_results.png | '''Texture modelling and synthesis:''' Multi-scale label maps are obtained with our texture analysis method. One possible application is interactive texture editing. [https://graphics.cs.yale.edu/sites/default/files/multi-scale_label-map_extraction_for_texture_synthesis_low_rez.pdf [SIGGRAPH-2016]]
Image:pavie_2016_volumetric_spot_noise_for_procedural_3d_shell_texture_synthesis_01.png | '''Volumetric Spot Noise for Procedural 3D Shell Texture Synthesis:''' Left: uniform density. Middle: user-controlled density. Right: user-controlled orientation. In all cases, control maps can be painted interactively. [https://hal.archives-ouvertes.fr/hal-02413269/document [CGVC-2016]]
Image:pavie_2016_volumetric_spot_noise_for_procedural_3d_shell_texture_synthesis_02.png | '''Volumetric Spot Noise for Procedural 3D Shell Texture Synthesis:''' Bunny 1 with the "ring" kernel profile (a) and a semi-regular distribution profile (b). Bunny 2 with a Gaussian kernel profile (c), a random distribution profile (d), and a density map (e). Dragon and plane: using a color map (f), a density map (g), a kernel (h), and a random distribution (i). [https://hal.archives-ouvertes.fr/hal-02413269/document [CGVC-2016]]
Image:pavie_2016_procedural_texture_synthesis_by_locally_controlled_spot_noise.png | '''Procedural Texture Synthesis by Locally Controlled Spot Noise:''' Examples of near-regular feature reproduction by a single spot noise. Left: kernel profiles and distribution profiles. Right: blue fabric pattern applied on a 3D model. [WSCG-2016]
Image:glint_cover_competition.png | '''Glitter rendering:''' A glittering copper sphere with spatially varying glitter density, illuminated by an environment map and a point light. See [EGSR-2021].
Image:schrek2021.png | '''Cyclostationary synthesis of textures'''
Image:Head_our.png | '''Volume visualization with illumination'''
Image:Ctknee_our.png | '''Volume visualization with illumination'''
Image:Ambientocclusion.png | '''Volume visualization with illumination:''' Ambient occlusion.
Image:Sponge_teapot.png | '''Textures:''' Volumetric texture.
Image:TreeF4.png | '''Textures:''' MegaTexel texture.
Image:SceneFinaleRendu.png | '''Textures:''' Complete scene.
Image:Lucas_volumiqueRefraction1.jpg | '''GPU-accelerated volume visualization:''' Transparent object rendering via an algorithm derived from relief mapping, similar to ray tracing on a height field. The object is simply represented by a combination of height maps.
Image:Lucas_scanHF_1.jpg | '''GPU-accelerated volume visualization:''' Rendering of a scanned object via a relief mapping algorithm. The aim here is to visualize the object by directly exploiting the data coming from the 3D scanner, without using any mesh. Top: the obtained rendering. Bottom: height and normal maps.
Image:Lucas_scanHFmultipleBox_1.jpg | '''GPU-accelerated volume visualization:''' Rendering of a complete scanned object via a relief mapping algorithm: no complex mesh is used; only a cube serves as a support for the rendering.
Image:Rendu_foie_texture.jpg | '''Visualization:''' Rendering of a textured liver.
Image:Rendu_textures_vb.jpg | '''Visualization'''
Image:Glass.png | '''Visualization:''' Transparent material rendering.
Image:Rendu_fluide.jpg | '''Visualization:''' Rendering of fluids.
Image:Rendu_bijou.jpg | '''Visualization:''' Rendering of jewelry.
Image:VSLD13teaser.png | '''Texture modelling and synthesis:''' Textures are synthesized on the fly on the GPU from exemplars at several scales.
Image:GSVDG14teaser.png | '''Texture modelling and synthesis:''' Textures are synthesized on the fly on the GPU using spectral analysis.
</gallery>
[[Image:CPH_2D.png|right|thumb|220px|Automatic cell labeling to implicitly define a mesh hierarchy.]]
 
{{Image_Cadre|pierre_cube_catmullclark.jpg|left|x200px}}
 
{{Image_Cadre|pierre_horse_quadtriangle.jpg|left|x200px}}
 
{{Image_Cadre|pierre_bunny_sqrt3.jpg|left|x200px}}
 
<BR clear="all"/>
 
<BR/>
 
 
 
== Detecting collisions in dynamic environments ==
 
{|
 
|valign="top"|[[Image:CASA2014_cercle.png|left|thumb|x150px|Crowd simulation - Agents moving through mobile objects (red)]]
 
|valign="top"|[[Image:openStreetMapImport.png|left|thumb|x150px|Crowd simulation - geographic data imported from OpenStreetMap]]
 
|valign="top"|[[Image:crowdOnFloor.png|left|thumb|x150px|Crowd simulation - Building made of multiple levels]]
 
|valign="top"|[[Image:CrowdOnPlanet.png|left|thumb|x150px|Crowd simulation - Virtual planet]]
 
|valign="top"|[[Image:crowdOnKnoth.png|left|thumb|x150px|Crowd simulation - Extreme topologies supported!]]
 
|}
 
<BR/>
 
 
 
==Virtual reality devices==
 
{|
 
|valign="top"|[[Image:SpidarWB.jpg|left|thumb|x160px|Spidar & workbench]]
 
|valign="top"|[[Image:Materiel_workbench.jpg|left|thumb|x160px|Gloves & wand]]
 
|valign="top"|[[Image:incaHardware.JPG|left|thumb|x160px|INCA]]
 
|valign="top"|[[Image:incaHandle.JPG|left|thumb|x160px|Haptic device's effector]]
 
|valign="top"|[[Image:incaTracking.JPG|left|thumb|x160px|Camera for position tracking]]
 
|}
 
<BR/>
 
 
 
== Real-time simulation of cuts and tearing ==
 
[[Image:ISMAR2015_teaser.png|left|thumb|800px|An augmented elastic object undergoing large deformations and topological changes.]]
 
[[Image:Liver-AR-01.png|left|thumb|800px|Augmented surgery sequence in medical environment.]]
 
[[Image:ARinOR.png|left|thumb|403px|Augmented surgery sequence in medical environment.]]
 
<BR clear="all"/>
 
<BR/>
 
 
 
== Separation of degrees of freedom for manipulating objects ==
 
[[Image:DLL_splitting.png|left|thumb|800px|Study on the impact of splitting degrees of freedom for interaction in an immersive environment.]]
 
<BR clear="all"/>
 
<BR/>
 
<!-- == Aides à la sélection de cibles en environnement immersif == -->
 
 
 
== Distance perception factors in virtual environments ==
 
{|
 
|valign="top"|[[Image:Principle.png|left|thumb|538px|Principle of hybrid projection and the various parameters used for the scene rendering.]]
 
|valign="top"|[[Image:Projection_example_1.png|left|thumb|615px|The left picture shows the scene rendered with the standard perspective projection; the right picture shows the same viewpoint rendered with the hybrid projection. In the left picture, the user cannot see the chair legs in front of him.]]
 
|}
 
[[Image:Virtual_visit.png|left|thumb|800px|Example of virtual visits: different locations during the visit of a furnished house. The navigation path is represented by the green "breadcrumb" trail.]]
 
<BR clear="all"/>
 
<BR/>
 
 
 
 
 
= Appearance and Movement =
 
 
 
== Motion capture devices ==
 
{{Image_Cadre|9N5A1838-small.jpg|left|230px}}
 
{{Image_Cadre|9N5A1844-small.jpg|left|230px}}
 
{{Image_Cadre|9N5A1853-small.jpg|left|230px}}
 
{{Image_Cadre|9N5A1857-small.jpg|left|230px}}
 
{{Image_Cadre|9N5A1859-small.jpg|left|102px}}
 
<BR clear="all"/>
 
<BR>
 
 
 
== Shape analysis, registration, and segmentation of movement data ==
 
{|
 
|valign="top"|[[Image:img_featurepoints.png|left|thumb|x250px|[2-MSC15] Dynamic feature points detected by our AniM-DoG framework are illustrated on a number of selected frames of animated meshes. The color of a sphere represents the temporal scale (from blue to red) of the feature point, and the radius of the sphere indicates the spatial scale.]]
 
|valign="top"|[[Image:img_spatialmatching.png|left|thumb|x250px|Given a pair of animated meshes exhibiting semantically similar motion, we compute a sparse set of feature points on each mesh and compute spatial correspondences among them so that points with similar motion behavior are put in correspondence.]]
 
|}
 
<BR>
 
 
 
== 3D scanning and appearance digitization devices ==
 
{|
 
|valign="top"|[[Image:ScannerLumStruct.jpg|left|thumb|300px|Short-range structured-light scanner]]

|valign="top"|[[Image:laser_leica.png|left|thumb|95px|Mid-range time-of-flight laser scanner]]

|valign="top"|[[Image:PhotoHardware.jpg|left|thumb|270px|Photography gear]]
 
|}
 
<BR>
 
 
 
== 3D Reconstruction of digitized objects [[IMAGES_GALERIE#Reconstruction d'objets numérisés par des scanners (Marc fournier)| (Details) ]] ==
 
[[Image:PipelineGeo.png|left|thumb|900px|Pipeline for reconstructing 3D objects from digitization data]]
 
<BR clear="all"/>
 
{{Image_Cadre|Fournier1.jpg|left|x200px}}
 
{{Image_Cadre|Fournier2.jpg|left|x200px}}
 
{{Image_Cadre|Fournier3.jpg|left|x200px}}
 
<BR clear="all"/>
 
{{Image_Cadre|NUM_Venus.png|left|x200px}}
 
{{Image_Cadre|aphro_render.png|left|x200px}}
 
<BR clear="all"/>
 
<BR>
 
 
 
== Appearance reconstruction ==
 
[[Image:PipelineAppearance.png|left|thumb|640px|Pipeline for the texturing of digitized 3D models from a set of pictures]]
 
{|
 
|valign="top"|[[Image:OND_Ourson.png|left|thumb|600px|Bear statue, cathedral of Strasbourg. From left: picture, 3D model reconstructed after geometry acquisition (35M points), textured 3D model reconstructed after appearance acquisition (55 pictures).]]
 
|valign="top"|[[Image:OND_Taureau.png|left|thumb|690px|Bull statue, cathedral of Strasbourg. From left: picture, 3D model reconstructed after geometry acquisition (22M points), textured 3D model reconstructed after appearance acquisition (28 pictures).]]
 
|}
 
{|
 
|valign="top"|[[Image:MISHA_Masks.png|left|thumb|620px|Digital copies of three Dogon masks. From left: adone mask (antelope), kanaga mask, bird mask with a dege at the top of it.]]
 
|valign="top"|[[Image:MISHA_Hogon.png|left|thumb|387px|Digital copy of a Hogon cup. Left-hand side: the cup fully assembled; right-hand side: each single piece presented separately.]]
 
|valign="top"|[[Image:MISHA_Dege.png|left|thumb|255px|Digital copies of two dege (female figurines).]]
 
|}
 
{|
 
|valign="top"|[[Image:Elephant2.png|left|thumb|502px|Comparison between colour texture and light field. Reflections improve realism and offer a better understanding of the nature of the materials the object is made of.]]
 
|valign="top"|[[Image:NUM_Dragon.png|left|thumb|293px|Dragon. Geometry: 18.2M triangles. Texture: light field.]]
 
|valign="top"|[[Image:NUM_Mask1.png|left|thumb|194px|Mask1. Geometry: 7.7M triangles. Texture: light field.]]
 
|valign="top"|[[Image:NUM_Mask2.png|left|thumb|193px|Mask2. Geometry: 8.7M triangles. Texture: colour.]]
 
|}
 
{|
 
|valign="top"|[[Image:VSGLD13teaser.png|left|thumb|x250px|[2-VSGL13] Starting from a set of photos taken with a hand-held camera, a virtual representation of the appearance of an object is reconstructed. This appearance notably encodes specular effects.]]
 
|valign="top"|[[Image:VSKLD15teaser.png|left|thumb|x250px|[2-VSKL15] 3D virtual objects with their appearances are simplified: the goal is to reduce their size while minimizing the loss of visual quality.]]
 
|}
 
<BR>
 
 
 
== Texture modeling and synthesis ==
 
{|
 
|valign="top"|[[Image:VSLD13teaser.png|left|thumb|220px|[2-VSLD13] Textures are synthesized on the fly on the GPU from texture samples at multiple scales.]]

|valign="top"|[[Image:GSVDG14teaser.png|left|thumb|322px|[2-GSVD14] Textures are synthesized on the fly on the GPU, based on a spectral analysis.]]

|valign="top"|[[Image:LSADDR16_Labeling_results.png|left|thumb|273px|[2-LSAD16] Multi-scale label maps obtained with our texture analysis method. A possible application is interactive texture editing.]]
 
|}
 
{|
 
|valign="top"|[[Image:Sponge_teapot.png|left|thumb|x200px|Volumetric texture]]
 
|valign="top"|[[Image:TreeF4.png|left|thumb|x200px|MegaTexel texture]]
 
|valign="top"|[[Image:SceneFinaleRendu.png|left|thumb|x200px|Scene]]
 
|}
 
<BR>
 
 
 
 
 
= Previous Work =
 
 
 
==Visualization==
 
{|
 
|valign="top"|[[Image:Head_our.png|left|thumb|x200px|Volumetric visualization]]
 
|valign="top"|[[Image:Ctknee_our.png|left|thumb|x200px|Volumetric visualization]]
 
|valign="top"|[[Image:Ambientocclusion.png|left|thumb|x200px|Ambient occlusion]]
 
<!--
 
<BR clear="all"/>
 
<BR>
 
==Visualisation volumique accélérée par GPU [[IMAGES_GALERIE#Visualisation volumique accéléré par GPU (Lucas Ammann)| (Détail) ]]==
 
{{Image_Cadre|lucas_volumiqueRefraction1.jpg|left|230px}}
 
{{Image_Cadre|lucas_scanHF_1.jpg|left|230px}}
 
{{Image_Cadre|lucas_scanHFmultipleBox_1.jpg|left|230px}}
 
<BR clear="all"/>
 
<BR>
 
 
 
==Visualisation==
 
{{Image_Cadre|Rendu_foie_texture.jpg|left|x150px}}
 
{{Image_Cadre|Rendu_textures_vb.jpg|left|230px}}
 
{{Image_Cadre|Rendu_bijou.jpg|left|230px}} -->
 
|valign="top"|[[Image:Rendu_fluide.jpg|left|thumb|x200px|Fluid simulation]]
 
|valign="top"|[[Image:Glass.png|left|thumb|x200px|Real-time rendering of refractive objects]]
 
|}
 
<BR>
 
 
 
==Detection and characterization of cavities in proteins [[IMAGES_GALERIE#Détection et caractérisation des poches dans les protéines (Benjamin Schwarz)| (Details)]]==
 
{{Image_Cadre|Schwartz1.jpg|left|x250px}}
 
{{Image_Cadre|Schwartz3.jpg|left|x250px}}
 
<BR clear="all"/>
 
<BR>
 
 
 
==Modeling of blood vessels for navigation==
 
{{Image_Cadre|Vaiss_2.jpg|left|x200px}}
 
{{Image_Cadre|Vaiss_0.png|left|x200px}}
 
{{Image_Cadre|Vaiss_1.jpg|left|x200px}}
 
{{Image_Cadre|BV-Rainbow.jpg|left|x200px}}
 
{{Image_Cadre|FilArianne.jpg|left|x200px}}
 
<!-- <BR clear="all"/>
 
<BR>
 
{{Image_Cadre|Capture-NavigWIM.jpg|left|175px}}
 
{{Image_Cadre|Visu-In-Uni.jpg|left|175px}} -->
 
<BR clear="all"/>
 
<BR>
 
 
 
==Deformation models==
 
{|
 
|valign="top"|[[Image:Dogme_corne.jpg|left|thumb|x200px|Dogme]]
 
|valign="top"|[[Image:Dogmerv.jpg|left|thumb|x200px|Dogme in VR]]
 
|valign="top"|[[Image:Dinosaure.png|left|thumb|x200px|CFFD]]
 
|}
 
<BR>
 
 
 
==Mesh reconstruction from voxel images [[IMAGES_GALERIE#Reconstruction de maillages à partir d'images voxels (Dobrina Boltcheva)| (Details) ]]==
 
{|
 
|valign="top"|[[Image:dobrina_lapins.jpg|left|thumb|x150px|Bunny]]
 
|valign="top"|[[Image:dobrina_dragons.jpg|left|thumb|x150px|Dragon]]
 
|valign="top"|[[Image:dobrina_simultane.jpg|left|thumb|x150px|Simultaneous reconstruction]]
 
|valign="top"|[[Image:dobrina_bones-aorte.jpg|left|thumb|x150px|Aorta & skeleton]]
 
|}
 
<BR>
 
 
 
==Geological layer exploration in an immersive environment==
 
{{Image_Cadre|Geolo_wb.jpg|left|x200px}}
 
{{Image_Cadre|Geolo_pilote.jpg|left|x200px}}
 
{{Image_Cadre|Geolo_select.jpg|left|x200px}}
 
<BR clear="all"/>
 
<BR>
 
 
 
==Terrain editing in an immersive environment==
 
{|
 
|valign="top"|[[Image:9N5A1819-small.jpg|left|thumb|x200px|Terrain editing]]
 
|valign="top"|[[Image:Rv_terrain.png|left|thumb|x200px|Terrain editing]]
 
|valign="top"|[[Image:CCubeTerrainEdit.jpg|left|thumb|x200px|CCube Menu]]
 
|valign="top"|[[Image:9N5A1823-small.jpg|left|thumb|x200px|Terrain editing]]
 
|}
 
<BR>
 
 
 
==Multi-resolution editing in an immersive environment==
 
{{Image_Cadre|wbMultiRes1.jpg|left|x150px}}
 
{{Image_Cadre|wbMultiRes2.jpg|left|x150px}}
 
{{Image_Cadre|wbMultiRes3.jpg|left|x150px}}
 
{{Image_Cadre|9N5A1800-small.jpg|left|x150px}}
 
{{Image_Cadre|9N5A1803-small.jpg|left|x150px}}
 
{{Image_Cadre|9N5A1805-small.jpg|left|x150px}}
 
<BR clear="all"/>
 
<BR>
 
 
 
== Digitization of a building with [[ExRealis]] ==
 
{{Image_Cadre|Fort.png|left|230px}}
 
{{Image_Cadre|FortBLAOverview1.png|left|230px}}
 
{{Image_Cadre|FortBLAOverview2.png|left|230px}}
 
<BR clear="all"/>
 
{{Image_Cadre|FortBLAOverview3.png|left|230px}}
 
{{Image_Cadre|FortBLACloseup1.png|left|230px}}
 
{{Image_Cadre|FortBLACloseup2.png|left|230px}}
 
<BR clear="all"/>
 
<BR>
 
 
 
 
 
[[fr:GALERIE APERCU]]
 

Revision as of 11:39, 16 September 2022
