Posters Program
Tuesday, 26 | Wednesday, 27 | Thursday, 28 | Friday, 29
---|---|---|---
14.30 - 15.30 » Poster session | 15.00 - 16.00 » Poster session | 14.30 - 15.30 » Poster session |
-
Geometric deformation for reducing optic flow and cybersickness dose value in VR, Ruding LOU, Richard So and Dominique Bechmann
Abstract: Today, virtual reality technologies are becoming increasingly widespread and have found strong applications in various domains. However, the fear of experiencing motion sickness remains an important barrier for VR users. Instead of moving physically, VR users experience virtual locomotion, but their vestibular systems do not sense the self-motion that is visually induced by immersive displays. This mismatch between the visual and vestibular senses causes sickness. Previous solutions actively reduce the user’s field of view and alter their navigation. In this paper we propose a passive approach that temporarily deforms the geometry of the virtual environment according to the user’s navigation. Two deformation methods have been prototyped and tested. The first one reduces the perceived optic flow, which is the main cause of visually induced motion sickness. The second one encourages users to adopt smoother trajectories and reduces the cybersickness dose value. Both methods have the potential to be applied generically. Download here / MetaData / Video
-
RGB-D Neural Radiance Fields: Local Sampling for Faster Training, Arnab Dey and Andrew Comport
Abstract: Learning a 3D representation of a scene has been a challenging problem for decades in computer vision. Recent advances in implicit neural representation from images using neural radiance fields (NeRF) have shown promising results. Limitations of previous NeRF-based methods include long training times and inaccurate underlying geometry. The proposed method takes advantage of RGB-D data to reduce training time by leveraging depth sensing to improve local sampling. This paper proposes a depth-guided local sampling strategy and a smaller neural network architecture to achieve faster training times without compromising quality. Download here / MetaData / Video
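To give an idea of what depth-guided local sampling can look like, here is a minimal NumPy sketch (not the authors' code): samples along a ray are drawn in a narrow band around the measured RGB-D depth instead of uniformly over the whole ray, falling back to stratified uniform sampling where the depth is invalid. The band width `sigma` and the `near`/`far` bounds are illustrative assumptions.

```python
import numpy as np

def depth_guided_samples(depth, n_samples=32, sigma=0.05, near=0.1, far=10.0, rng=None):
    """Sample distances along one camera ray, concentrated around the sensor depth.

    depth : RGB-D depth measured for this ray (<= 0 or NaN means invalid).
    sigma : assumed width of the sampling band around the surface, in scene units.
    """
    rng = np.random.default_rng() if rng is None else rng
    if not np.isfinite(depth) or depth <= 0.0:
        # No depth available: stratified uniform samples over the full ray extent.
        edges = np.linspace(near, far, n_samples + 1)
        return edges[:-1] + rng.random(n_samples) * np.diff(edges)
    # Depth available: Gaussian samples concentrated around the measured surface.
    t = rng.normal(loc=depth, scale=sigma, size=n_samples)
    return np.sort(np.clip(t, near, far))
```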
-
Fixed-radius near neighbors searching for 2D simulations on the GPU using Delaunay triangulations, Heinich Porro, Benoît Crespin, Cristobal Navarro and Nancy Hitschfeld-Kahler
Abstract: We propose to explore a GPU solution to the fixed-radius nearest-neighbor problem in 2D based on Delaunay triangulations. This problem is crucial for many particle-based simulation techniques for collision detection or momentum exchange between particles. Our method computes the neighborhood of each particle at each iteration without neighbor lists or grids, using a Delaunay triangulation whose consistency is preserved by edge flipping. We study how this approach compares to a grid-based implementation on a flocking simulation with variable parameters. Download here / MetaData / Video
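As a rough illustration of gathering fixed-radius neighbors through a Delaunay triangulation rather than a grid, here is a simplified CPU sketch using SciPy; the paper's GPU method, including the edge flipping that keeps the triangulation consistent over time, is not shown. The breadth-first expansion below only walks through vertices that are themselves within the radius, which is a simplification and may behave differently from the authors' algorithm in corner cases.

```python
import numpy as np
from scipy.spatial import Delaunay

def fixed_radius_neighbors(points, radius):
    """Gather, for each 2D particle, the particles within `radius` by a
    breadth-first walk over the Delaunay triangulation (no grid, no lists)."""
    tri = Delaunay(points)
    indptr, indices = tri.vertex_neighbor_vertices   # adjacency in CSR form
    r2 = radius * radius
    neighborhoods = []
    for i in range(len(points)):
        found, frontier, visited = [], {i}, {i}
        while frontier:
            nxt = set()
            for v in frontier:
                for w in indices[indptr[v]:indptr[v + 1]]:
                    if w in visited:
                        continue
                    visited.add(w)
                    if np.sum((points[w] - points[i]) ** 2) <= r2:
                        found.append(int(w))
                        nxt.add(int(w))   # only expand through in-radius vertices
            frontier = nxt
        neighborhoods.append(found)
    return neighborhoods
```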
-
Splash in a Flash: Sharpness-aware minimization for efficient liquid splash simulation, Vishrut Jetly, Hikaru Ibayashi and Aiichiro Nakano
Abstract: We present sharpness-aware minimization (SAM) for fluid dynamics, which can efficiently learn the plausible dynamics of liquid splashes. Due to its ability to find robust, well-generalizing solutions, SAM efficiently converges to a parameter set that predicts plausible dynamics of elusive liquid splashes. Our training scheme requires 6 times fewer epochs to converge and 4 times less wall-clock time. Our results show that the sharpness of the loss function is closely connected to the plausibility of the simulated fluid dynamics, and suggest further applicability of SAM to machine-learning-based fluid simulation. Download here / MetaData / Video
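For readers unfamiliar with SAM, the sketch below shows one generic SAM update step (in the spirit of Foret et al.'s sharpness-aware minimization) written with PyTorch; it is not the authors' training code, and the model, loss function, and radius `rho` are placeholders.

```python
import torch

def sam_step(model, loss_fn, batch, optimizer, rho=0.05):
    """One generic SAM update: ascend to the worst-case weights within an
    L2 ball of radius rho, then take the optimizer step from that point."""
    inputs, targets = batch

    # 1) Gradient at the current weights.
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # 2) Perturb weights along the normalized gradient: eps = rho * g / ||g||.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2) + 1e-12
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / grad_norm
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()

    # 3) Gradient at the perturbed weights, then restore and take the real step.
    loss_fn(model(inputs), targets).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```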
-
Consistent Multi- and Single-View HDR-Image Reconstruction from Single Exposures, Aditya Mohan, Jing Zhang, Rémi Cozot and Celine Loscos
Abstract: We propose a CNN-based approach for reconstructing HDR images from just a single exposure. It predicts the saturated areas of LDR images and then blends the linearized input with the predicted outputs. Two loss functions are used: the Mean Absolute Error and the Multi-Scale Structural Similarity Index. The choice of these loss functions allows us to outperform previous algorithms in the reconstructed dynamic range. Once the network is trained, we feed it multi-view images to output multi-view coherent images. Download here / MetaData / Video
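The blending step described above might look roughly like the following NumPy sketch (illustrative only): a soft mask selects pixels near saturation, and the network prediction replaces the linearized input there. The gamma value, saturation threshold, and mask shape are assumptions, not values from the paper.

```python
import numpy as np

def blend_hdr(ldr, predicted_hdr, gamma=2.2, threshold=0.95):
    """Blend a linearized LDR input with network predictions in saturated areas.
    `ldr` is an (H, W, 3) image in [0, 1]; gamma and threshold are illustrative."""
    linear = ldr ** gamma  # rough linearization of the input
    # Soft mask: 0 where the pixel is well exposed, ramping to 1 near saturation.
    mask = np.clip((ldr.max(axis=-1, keepdims=True) - threshold) / (1.0 - threshold), 0.0, 1.0)
    return (1.0 - mask) * linear + mask * predicted_hdr
```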
-
A First Step Towards the Inference of Geological Topological Operations, Romain Pascual, Hakim Belhaouari, Agnès Arnould and Pascale Le Gall
Abstract: Procedural modeling enables building complex geometric objects and scenes in a wide range of applications. The traditional approach relies on the sequential application of a reduced set of construction rules. We propose to automatically generate new topological rules from an initial object and the expected result of the future operation. Non-expert users can thereby develop their own operations. We applied our approach to the modeling of the geological subsoil. Download here / MetaData / Video
-
Multimodal Early Raw Data Fusion for Environment Sensing in Automotive Applications, Marcelo Eduardo Pederiva
Abstract: Autonomous vehicles are getting closer every day to becoming a reality in ground transportation. Computational advances have enabled powerful methods to process the large amounts of data required to drive safely on streets. Fusing the multiple sensors present in the vehicle allows building accurate world models that improve autonomous vehicles’ navigation. Among current techniques, the fusion of LiDAR, RADAR, and camera data by neural networks has shown significant improvements in object detection and in the estimation of geometry and dynamic behavior. The main existing methods propose using parallel networks to fuse the sensors’ measurements, increasing complexity and the demand for computational resources. Fusing the data with a single neural network is still an open question and is the project’s main focus. The aim is to develop a single neural network architecture that fuses the three types of sensors, and to evaluate and compare the resulting approach with multi-network proposals. Download here / MetaData / Video
-
Fast and fine disparity reconstruction for wide-baseline camera arrays with deep neural networks, Théo Barrios, Julien Gerhards, Stéphanie Prévost and Celine Loscos
Abstract: Recently, disparity-based 3D reconstruction for stereo camera pairs and light field cameras has been greatly improved with the rise of deep learning-based methods. However, only a few of these approaches address wide-baseline camera arrays, which require specific solutions. In this paper, we introduce a deep-learning-based pipeline for multi-view disparity inference from images of a wide-baseline camera array. The network builds a low-resolution disparity map and recovers the original resolution with an additional upscaling step. Our solution successfully handles wide-baseline array configurations and infers disparity for full HD images at interactive rates, while reducing quantization error compared to the state of the art. Download here / MetaData / Video
-
3D Human Shape and Pose from a Single Depth Image with Deep Dense Correspondence Enabled Model Fitting, Xiaofang Wang, Adnane Boukhayma, Stéphanie Prévost, Eric Desjardin, Celine Loscos and Franck Multon
Abstract: We propose a two-stage hybrid method, requiring no initialization, for 3D human shape and pose estimation from a single depth image, combining the benefits of deep learning and optimization. First, a convolutional neural network predicts pixel-wise dense semantic correspondences to a template geometry, in the form of body part segmentation labels and normalized canonical geometry vertex coordinates. Using these two outputs, pixel-to-vertex correspondences are computed in a six-dimensional embedding of the template geometry through nearest-neighbor search. Second, a parametric shape model (SMPL) is fitted to the depth data by minimizing vertex distances to the input. Extensive evaluation on both real and synthetic human-shape-in-motion datasets shows that our method yields quantitatively and qualitatively satisfactory results and state-of-the-art reconstruction errors. Download here / MetaData
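The pixel-to-vertex matching step can be pictured with a short SciPy sketch; the exact composition of the 6-D embedding below is an assumption for illustration, not the paper's definition.

```python
import numpy as np
from scipy.spatial import cKDTree

def pixel_to_vertex(template_embed, pixel_embed):
    """Match each foreground pixel to a template vertex by nearest neighbor in a
    6-D embedding (e.g. canonical coordinates plus a part-label encoding).

    template_embed : (n_vertices, 6) embedding of the template geometry.
    pixel_embed    : (n_pixels, 6) per-pixel predictions from the CNN.
    """
    tree = cKDTree(template_embed)
    _, vertex_ids = tree.query(pixel_embed)  # nearest template vertex per pixel
    return vertex_ids
```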
-
Seamless Compressed Textures, Andrea Maggiordomo and Marco Tarini
Abstract: We present an algorithm to hide discontinuity artifacts at seams in GPU-compressed textures. Texture mapping requires UV-maps, and UV-maps (in general) require texture seams; texture seams (in general) cause small visual artifacts in rendering; these can be prevented by careful, slight modifications of a few texels around the seam. Unfortunately, GPU-based texture compression schemes are lossy and introduce their own slight modifications of texture values, nullifying that effort. The result is that texture compression may reintroduce the visual artifacts at seams. We modify a standard texture compression algorithm to make it aware of texture seams, resulting in compressed textures that still prevent the seam artifacts. Download here / MetaData / Video
-
Stroke based painterly inbetweening, Nicolas Barroso, Amélie Fondevilla and David Vanderhaeghe
Abstract: Creating a 2D animation with visible strokes is a tedious and time-consuming task for an artist. Computer-aided animation usually focuses on cartoon-stylized rendering, or is built from an automatic process such as the stylization of 3D animations, losing the painterly look and feel of hand-made animation. We propose to simplify the creation of stroke-based animations: from a set of key frames, our method automatically generates intermediate frames to depict the animation. Each intermediate frame looks as if it could have been drawn by an artist, using the same high-level stroke-based representation as the key frames, and in succession they display the subtle temporal incoherence usually found in hand-made animations. Download here / MetaData / Video
-
Neural Denoising for Spectral Monte Carlo Rendering, Robin Rouphael, Mathieu Noizet, Stéphanie Prévost, Hervé Deleau, Luiz-Angelo Steffenel and Laurent Lucas
Abstract: Spectral Monte Carlo (MC) rendering has yet to be widely adopted, partly due to the specific noise, called color noise, induced by wavelength-dependent phenomena. Motivated by recent advances in Monte Carlo noise reduction using deep learning, we propose to apply the same approach to color noise. Our implementation and training managed to reconstruct a noise-free output while preserving high-frequency details, albeit with a loss of contrast. To address this issue, we designed a three-step pipeline using the contribution of a secondary denoiser to obtain high-quality results. Download here / MetaData / Video
-
Transfer Textures for Fast Precomputed Radiance Transfer, Dhawal Sirikonda, Aakash KT and P. J. Narayanan
Abstract: Precomputed Radiance Transfer (PRT) can achieve high-quality renders of glossy materials at real-time framerates. PRT involves precomputing a k-dimensional transfer vector or a k×k transfer matrix of Spherical Harmonic (SH) coefficients at specific points of a scene, depending on whether the material is diffuse or glossy, respectively. Most prior art precomputes values at the vertices of the mesh and interpolates color for interior points, requiring finer mesh tessellations for high-quality renders. In this work, we introduce transfer textures to decouple mesh resolution from transfer storage and sampling, specifically benefiting glossy renders. With transfer textures, dense sampling of the transfer is possible in the fragment shader while rendering, for both diffuse and glossy materials, even with a low tessellation. This simultaneously provides high render quality and frame rates. Download here / MetaData / Video
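For the diffuse case, the per-fragment shading that transfer textures enable amounts to fetching a transfer vector from a texture and dotting it with the lighting's SH coefficients. Below is a minimal CPU-side sketch with a nearest-texel fetch; it is illustrative only and not the authors' shader, where filtered texture sampling would be used.

```python
import numpy as np

def shade_diffuse_prt(transfer_texture, uv, light_sh):
    """Diffuse PRT shading at one surface point.

    transfer_texture : (H, W, k) array of per-texel SH transfer coefficients.
    uv               : (u, v) texture coordinates in [0, 1].
    light_sh         : (k,) or (k, 3) SH coefficients of the environment lighting.
    """
    h, w, k = transfer_texture.shape
    x = int(uv[0] * (w - 1))          # nearest-texel fetch for brevity
    y = int(uv[1] * (h - 1))
    transfer = transfer_texture[y, x]  # (k,) transfer vector at this fragment
    return np.dot(transfer, light_sh)  # outgoing radiance
```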
-
SIG-based Curve Reconstruction, Diana Marin, Stefan Ohrhallinger and Michael Wimmer
Abstract: We introduce a new method to compute the shape of an unstructured set of two-dimensional points. The algorithm exploits a to-date rarely used proximity graph called the spheres-of-influence graph (SIG). We filter the edges of the Delaunay triangulation that belong to the SIG to form an initial graph, and apply additional processing plus elements from the Connect2D algorithm. This combination already shows improvements in curve reconstruction, yielding the best reconstruction accuracy compared to state-of-the-art algorithms from a recent comprehensive benchmark, and offers potential for further improvements. Download here / MetaData / Video
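The initial filtering step can be sketched as follows, assuming the usual SIG definition: an edge (p, q) exists iff |p − q| ≤ r(p) + r(q), where r(x) is the distance from x to its nearest neighbor. This is a simplified illustration, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def sig_edges_from_delaunay(points):
    """Keep the Delaunay edges that also satisfy the spheres-of-influence test."""
    r = cKDTree(points).query(points, k=2)[0][:, 1]  # nearest-neighbor radius per point
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = sorted((int(simplex[a]), int(simplex[b])))
            edges.add((i, j))
    # SIG condition: the two spheres of influence intersect.
    return [(i, j) for i, j in edges
            if np.linalg.norm(points[i] - points[j]) <= r[i] + r[j]]
```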
-
Time series AMR data representation for out-of-core interactive visualization, Welcome Alexandre-Barff, Hervé Deleau, Jonathan Sarton, Franck Ledoux and Laurent Lucas
Abstract: Time-varying Adaptive Mesh Refinement (AMR) data have become an essential representation for 3D numerical simulations in many scientific fields. This observation is even more relevant considering that data volumes have increased significantly, reaching petabytes, largely exceeding the memory capacities of the most recent graphics hardware. Therefore, the question is how to access these massive data - AMR time series in particular - for interactive visualization purposes, without cracks, artifacts or latency. In this paper, we present a time-varying AMR data representation enabling a fully GPU-based out-of-core approach. We propose to convert the input data, initially expressed as regular voxel grids, into a set of AMR bricks uniquely identified along a 3D Hilbert curve, and to store them in mass storage. Download here / MetaData / Video
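As a rough picture of the bricking step, the sketch below cuts a regular grid into fixed-size bricks, drops empty ones, and keys each brick with a space-filling-curve index. A Morton (Z-order) code is used here as a simpler stand-in for the Hilbert index described in the paper, and the brick size is an arbitrary choice.

```python
import numpy as np

def morton3d(i, j, k, bits=10):
    """Interleave the bits of (i, j, k) into a single Morton (Z-order) index."""
    code = 0
    for b in range(bits):
        code |= ((i >> b) & 1) << (3 * b)
        code |= ((j >> b) & 1) << (3 * b + 1)
        code |= ((k >> b) & 1) << (3 * b + 2)
    return code

def grid_to_bricks(volume, brick=32):
    """Cut a regular voxel grid into fixed-size bricks, drop empty ones, and key
    each brick by a space-filling-curve index, ready to be written to mass storage."""
    bricks = {}
    nz, ny, nx = volume.shape
    for k in range(0, nz, brick):
        for j in range(0, ny, brick):
            for i in range(0, nx, brick):
                data = volume[k:k + brick, j:j + brick, i:i + brick]
                if np.any(data):  # keep only non-empty bricks
                    key = morton3d(i // brick, j // brick, k // brick)
                    bricks[key] = data
    return bricks
```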
-
View dependent decompression for web-based massive triangle meshes visualisation, Alice Cecchin, Paul Du, Mickaël Pastor, Asmaâ Agouzoul
Abstract: We introduce a framework extending an existing progressive compression-decompression algorithm for 3D triangular meshes. First, a mesh is partitioned. Each resulting part is compressed, then joined with one of its neighbours. These steps are repeated following a binary tree of operations, until a single compressed mesh remains. Decompressing the mesh involves progressively performing those steps in reverse, per node, and locally, by selecting the branch of the tree to explore. This method creates a compact and lossless representation of the model that allows its progressive and local reconstruction. Previously unprocessable meshes can be visualized on the web and mobile devices using this technique. Download here / MetaData / Video
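The compress-and-join tree described above can be pictured with the following structural sketch, in which zlib and byte concatenation stand in for the actual progressive mesh codec and mesh join; both are placeholders, and this is not the authors' algorithm.

```python
import zlib

class Node:
    """One node of the compression tree: leaves hold a compressed mesh part,
    internal nodes hold the compressed result of joining their children."""
    def __init__(self, payload, left=None, right=None):
        self.payload, self.left, self.right = payload, left, right

def build_tree(parts, join=lambda a, b: a + b):
    """Compress each part, join pairs, and repeat until a single root remains."""
    nodes = [Node(zlib.compress(p)) for p in parts]
    while len(nodes) > 1:
        merged = []
        for a, b in zip(nodes[::2], nodes[1::2]):
            joined = join(zlib.decompress(a.payload), zlib.decompress(b.payload))
            merged.append(Node(zlib.compress(joined), a, b))
        if len(nodes) % 2:          # odd part carried to the next level
            merged.append(nodes[-1])
        nodes = merged
    return nodes[0]
```

Decompression would walk such a tree top-down, expanding only the branch covering the region of interest, which is what allows progressive and local reconstruction.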
-
Modeling and Enhancement of LiDAR Point Clouds from Natural Scenarios, José Antonio Collado Araque, Alfonso López Ruiz, J. Roberto Jiménez, Lidia M. Ortega, Francisco Feito and Juan Manuel Jurado
Abstract: The generation of realistic natural scenarios is a longstanding and ongoing challenge in Computer Graphics. A common source of real-environment scenarios is open point-cloud datasets acquired by LiDAR (Laser Imaging Detection and Ranging) devices. However, these data have low density and are not able to provide sufficiently detailed environments. In this study, we propose a method to reconstruct real-world environments from data acquired by LiDAR devices that overcomes this limitation and generates rich environments, including ground and high vegetation. Additionally, our proposal segments the original data to distinguish among different kinds of trees. The results show that the method is capable of generating realistic environments at the chosen density, including specimens of each of the identified tree types. Download here / MetaData / Video
-
Hermite interpolation of heightmaps, Róbert Bán and Gábor Valasek
Abstract: Heightmaps are ubiquitous in real-time computer graphics. They are used to describe geometric detail over an underlying coarser surface. Various techniques, such as parallax occlusion mapping and relief mapping, use heightmap textures to impose mesostructural details over macrostructural elements without increasing the actual complexity of the rendered geometries. We aim to improve the quality of the fine-resolution surface by incorporating the gradient of the original function into the sampling procedure. The traditional representation consists of simple height values stored on a regular grid, and bilinear filtering is applied during rendering. We propose to store the partial derivatives with the height values and to use Hermite interpolation between the samples. This guarantees a globally C1-continuous heightfield instead of the C0 continuity of bilinear filtering. Moreover, incorporating higher-order information via partial derivatives allows us to use lower-resolution heightmaps while retaining the appearance of a higher-resolution map. In parallax mapping, surface normals are often stored alongside the height values; as such, our method does not require additional storage, since normals and partial derivatives can be calculated from one another. The exact normals of the reconstructed cubic Hermite surface can also be calculated, resulting in a storage-efficient replacement for normal mapping with richer visual appearance. Download here / MetaData / Video
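In one dimension, replacing linear filtering with Hermite interpolation looks like the sketch below; a 2D heightmap would apply the same construction along both axes using the two partial derivatives. Unit texel spacing is assumed for brevity, and this is an illustration of the general technique rather than the authors' implementation.

```python
import numpy as np

def hermite_height_1d(h, dh, x):
    """Sample a 1-D heightmap at fractional coordinate x using cubic Hermite
    interpolation of the stored heights h[i] and per-texel derivatives dh[i].
    The result is a C1 height signal, unlike linear filtering, which is only C0."""
    i = int(np.clip(np.floor(x), 0, len(h) - 2))
    t = x - i                      # fractional position inside the cell
    h0, h1, d0, d1 = h[i], h[i + 1], dh[i], dh[i + 1]
    t2, t3 = t * t, t * t * t
    # Cubic Hermite basis functions for values (h0, h1) and derivatives (d0, d1).
    return ((2 * t3 - 3 * t2 + 1) * h0 + (t3 - 2 * t2 + t) * d0
            + (-2 * t3 + 3 * t2) * h1 + (t3 - t2) * d1)
```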