Tuesday, 26

» Rendering I

09.00 - 10.30 Room: Amphithéâtre Royal Session Chair: Michael Wimmer

» Learning for rendering

09.00 - 10.30 Room: Salle 1 Session Chair: Tobias Ritschel
  • Abstract: Differentiable rendering (DR) enables various computer graphics and computer vision applications through gradient-based optimization with derivatives of the rendering equation. Most rasterization-based approaches are built on general-purpose automatic differentiation (AD) libraries and DR-specific modules handcrafted using CUDA. Such a system design mixes DR algorithm implementation and algorithm building blocks, resulting in hardware dependency and limited performance. In this paper, we present a practical hardware-agnostic differentiable renderer called Dressi, which is based on a new full AD design. The DR algorithms of Dressi are fully written in our Vulkan-based AD for DR, Dressi-AD, which supports all primitive operations for DR. Dressi-AD and our inverse UV technique inside it bring hardware independence and acceleration by graphics hardware. Stage packing, our runtime optimization technique, can adapt to hardware constraints and efficiently execute complex computational graphs of DR with a reactive cache that considers the render pass hierarchy of Vulkan. HardSoftRas, our novel rendering process, is designed for inverse rendering with a graphics pipeline. Under the limited functionalities of the graphics pipeline, HardSoftRas can propagate the gradients of pixels from the screen space to far-range triangle attributes. Our experiments and applications demonstrate that Dressi establishes hardware independence, fast, high-quality, and robust optimization, and photorealistic rendering. Download here / MetaData

  • Abstract: Augmented reality applications have rapidly spread across online retail platforms and social media, allowing consumers to virtually try on a large variety of products, such as makeup, hair dyeing, or shoes. However, parametrizing a renderer to synthesize realistic images of a given product remains a challenging task that requires expert knowledge. While recent work has introduced neural rendering methods for virtual try-on from example images, current approaches are based on large generative models that cannot be used in real-time on mobile devices. This calls for a hybrid method that combines the advantages of computer graphics and neural rendering approaches. In this paper, we propose a novel framework based on deep learning to build a real-time inverse graphics encoder that learns to map a single example image into the parameter space of a given augmented reality rendering engine. Our method leverages self-supervised learning and does not require labeled training data, which makes it extendable to many virtual try-on applications. Furthermore, most augmented reality renderers are not differentiable in practice due to algorithmic choices or implementation constraints to reach real-time performance on portable devices. To relax the need for a graphics-based differentiable renderer in inverse graphics problems, we introduce a trainable imitator module. Our imitator is a generative network that learns to accurately reproduce the behavior of a given non-differentiable renderer. We propose a novel rendering sensitivity loss to train the imitator, which ensures that the network learns an accurate and continuous representation for each rendering parameter. Automatically learning a differentiable renderer, as proposed here, could be beneficial for various inverse graphics tasks. Our framework enables novel applications where consumers can virtually try on a novel, unknown product from an inspirational reference image on social media. It can also be used by computer graphics artists to automatically create realistic renderings from a reference product image. Download here / MetaData

  • Abstract: Deep learning for image processing typically treats input imagery as pixels in some color space. This paper proposes instead to learn from program traces of procedural fragment shaders – programs that generate images. At each pixel, we collect the intermediate values computed at program execution, and these data form the input to the learned model. We investigate this learning task for a variety of applications: our model can learn to predict a low-noise output image from shader programs that exhibit sampling noise; this model can also learn from a simplified shader program that approximates the reference solution with less computation, as well as learn the output of postprocessing filters like defocus blur and edge-aware sharpening. Finally we show that the idea of learning from program traces can even be applied to non-imagery simulations of flocks of boids. Our experiments on a variety of shaders show quantitatively and qualitatively that models learned from program traces outperform baseline models learned from RGB color augmented with hand-picked shader-specific features like normals, depth, and diffuse and specular color. We also conduct a series of analyses that show certain features within the trace are more important, and even learning from a small subset of the trace outperforms the baselines. Download here / MetaData
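
    As a toy illustration of learning from program traces rather than RGB, the sketch below runs a made-up procedural shader over a pixel grid and records every intermediate value as a per-pixel feature vector; the shader, its features, and the grid size are hypothetical and not taken from the paper.

      import numpy as np

      def checker_shader(u, v, trace):
          """Tiny procedural shader; every intermediate value is appended to `trace`.
          The per-pixel trace, rather than only the final color, would form the
          input to the learned model in the approach described above."""
          s = np.sin(10.0 * u)
          trace.append(s)
          t = np.cos(10.0 * v)
          trace.append(t)
          mask = (s * t > 0).astype(float)
          trace.append(mask)
          color = 0.2 + 0.8 * mask
          trace.append(color)
          return color

      H = W = 32
      u, v = np.meshgrid(np.linspace(0, 1, W), np.linspace(0, 1, H))
      trace = []
      rgb = checker_shader(u, v, trace)
      features = np.stack(trace, axis=-1)   # (H, W, 4) per-pixel trace features
      print(rgb.shape, features.shape)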

» Computational photography

11.00 - 12.30 Room: Amphithéâtre Royal Session Chair: Karol Myszkowski
  • Abstract: We present ZoomShop, a photographic composition editing tool for adjusting relative size, position, and foreshortening of scene elements. Given an image and corresponding depth map as input, ZoomShop combines a novel non-linear camera model and a depth-aware image warp to reproject and deform the image. Users can isolate objects by selecting depth ranges and adjust their scale and foreshortening, which controls the paths of the camera rays through the scene. Users can also select 2D image regions and translate them, which determines the objective function in the image warp optimization. We demonstrate that ZoomShop can be used to achieve useful compositional goals, such as making a distant object more prominent while preserving foreground scenery, or making objects both larger and closer together so they still fit in the frame. Download here / MetaData

  • Abstract: High Dynamic Range (HDR) content is becoming ubiquitous due to the rapid development of capture technologies. Nevertheless, the dynamic range of common display devices is still limited, therefore tone mapping (TM) remains a key challenge for image visualization. Recent work has demonstrated that neural networks can achieve remarkable performance in this task when compared to traditional methods, however, the quality of the results of these learning-based methods is limited by the training data. Most existing works use as training set a curated selection of best-performing results from existing traditional tone mapping operators (often guided by a quality metric), therefore, the quality of newly generated results is fundamentally limited by the performance of such operators. This quality might be even further limited by the pool of HDR content that is used for training. In this work we propose a learning-based self-supervised tone mapping operator that is trained at test time specifically for each HDR image and does not need any data labeling. The key novelty of our approach is a carefully designed loss function built upon fundamental knowledge on contrast perception that allows for directly comparing the content in the HDR and tone mapped images. We achieve this goal by reformulating classic VGG feature maps into feature contrast maps that normalize local feature differences by their average magnitude in a local neighborhood, allowing our loss to account for contrast masking effects. We perform extensive ablation studies and exploration of parameters and demonstrate that our solution outperforms existing approaches with a single set of fixed parameters, as confirmed by both objective and subjective metrics. Download here / MetaData
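
    A rough sketch of the core idea of normalizing local feature differences by their average local magnitude, as a stand-in for the feature contrast maps described above; the window size, epsilon, and the use of random arrays instead of VGG activations are assumptions for illustration only.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def feature_contrast_map(feat, k=5, eps=1e-6):
          """Normalize local feature differences by the average feature magnitude
          in a k x k neighborhood, so the result depends on relative contrast
          rather than absolute feature scale."""
          local_mean = uniform_filter(feat, size=(k, k, 1))
          local_mag = uniform_filter(np.abs(feat), size=(k, k, 1))
          return (feat - local_mean) / (local_mag + eps)

      def contrast_loss(feat_hdr, feat_tm, k=5):
          """L1 distance between the contrast maps of HDR and tone-mapped features."""
          return np.mean(np.abs(feature_contrast_map(feat_hdr, k) -
                                feature_contrast_map(feat_tm, k)))

      # toy usage with random stand-ins for VGG feature maps of shape (H, W, C)
      rng = np.random.default_rng(0)
      f_hdr = rng.standard_normal((64, 64, 8))
      f_tm = f_hdr + 0.1 * rng.standard_normal((64, 64, 8))
      print(contrast_loss(f_hdr, f_tm))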

  • CGF

    Abstract:

» Modeling and editing I

11.00 - 12.30 Room: Salle 1 Session Chair: Oliver Deussen
  • Abstract: Digital terrains are a foundational element in the computer-generated depiction of natural scenes. Given the variety and complexity of real-world landforms, there is a need for authoring solutions that achieve perceptually realistic outcomes without sacrificing artistic control. In this paper, we propose setting aside the elevation domain in favour of modelling in the gradient domain. Such a slope-based representation is height independent and allows a seamless blending of disparate landforms from procedural, simulation, and real-world sources. For output, an elevation model can always be recovered using Poisson reconstruction, which can include Dirichlet conditions to constrain the elevation of points and curves. In terms of authoring our approach has numerous benefits. It provides artists with a complete toolbox, including: cut-and-paste operations that support warping as needed to fit the destination terrain, brushes to modify region characteristics, and sketching to provide point and curve constraints on both elevation and gradient. It is also a unifying representation that enables the inclusion of tools from the spectrum of existing procedural and simulation methods, such as painting localised high-frequency noise or hydraulic erosion, without breaking the formalism. Finally, our constrained reconstruction is GPU optimized and executes in real-time, which promotes productive cycles of iterative authoring. Download here / MetaData
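
    Since the elevation model is recovered with a Poisson reconstruction under Dirichlet constraints, a minimal least-squares version of that step is sketched below; the grid size, constraint weight, and use of scipy's lsqr solver are illustrative assumptions, not the paper's GPU-optimized solver.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import lsqr

      def reconstruct_height(gx, gy, dirichlet, w=1e3):
          """Least-squares Poisson reconstruction of a heightfield from target
          forward differences gx (H, W-1) and gy (H-1, W), with soft Dirichlet
          point constraints given as a list of ((i, j), elevation) pairs."""
          H, W = gy.shape[0] + 1, gx.shape[1] + 1
          idx = lambda i, j: i * W + j
          rows, cols, vals, rhs = [], [], [], []

          def add(r, c, v):
              rows.append(r)
              cols.append(c)
              vals.append(v)

          r = 0
          for i in range(H):                      # x-gradient equations
              for j in range(W - 1):
                  add(r, idx(i, j + 1), 1.0)
                  add(r, idx(i, j), -1.0)
                  rhs.append(gx[i, j])
                  r += 1
          for i in range(H - 1):                  # y-gradient equations
              for j in range(W):
                  add(r, idx(i + 1, j), 1.0)
                  add(r, idx(i, j), -1.0)
                  rhs.append(gy[i, j])
                  r += 1
          for (i, j), v in dirichlet:             # softly pinned elevations
              add(r, idx(i, j), w)
              rhs.append(w * v)
              r += 1

          A = sp.csr_matrix((vals, (rows, cols)), shape=(r, H * W))
          h = lsqr(A, np.asarray(rhs))[0]
          return h.reshape(H, W)

      # toy usage: constant slope in x, flat in y, elevation pinned to 0 at (0, 0)
      gx = np.full((8, 7), 0.5)
      gy = np.zeros((7, 8))
      print(reconstruct_height(gx, gy, [((0, 0), 0.0)])[0, :4])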

  • Abstract: Creative processes of artists often start with hand-drawn sketches illustrating an object. Pre-visualizing these keyframes is especially challenging when applied to volumetric materials such as smoke. The authored 3D density volumes must capture realistic flow details and turbulent structures, which is highly non-trivial and remains a manual and time-consuming process. We therefore present a method to compute a 3D smoke density field directly from 2D artist sketches, bridging the gap between early-stage prototyping of smoke keyframes and pre-visualization. From the sketch inputs, we compute an initial volume estimate and optimize the density iteratively with an updater CNN. Our differentiable sketcher is embedded into the end-to-end training, which results in robust reconstructions. Our training data set and sketch augmentation strategy are designed such that it enables general applicability. We evaluate the method on synthetic inputs and sketches from artists depicting both realistic smoke volumes and highly non-physical smoke shapes. The high computational performance and robustness of our method at test time allows interactive authoring sessions of volumetric density fields for rapid prototyping of ideas by novice users. Download here / MetaData

  • Abstract: We propose an interactive method to edit a discrete Chebyshev net, which is a quad mesh with edges of the same length. To ensure that the edited mesh is always a discrete Chebyshev net, the maximum difference of all edge lengths should be zero during the editing process. Hence, we formulate an objective function using ℓp-norm (p > 2) to force the maximum length deviation to approach zero in practice. To optimize the nonlinear and non-convex objective function interactively and efficiently, we develop a novel second-order solver. The core of the solver is to construct a new convex majorizer for our objective function to achieve fast convergence. We present two acceleration strategies to further reduce the optimization time, including adaptive p change and adaptive variables reduction. A large number of experiments demonstrate the capability and feasibility of our method for interactively editing complex discrete Chebyshev nets. Download here / MetaData
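
    A tiny numerical illustration of the ℓp objective described above: for p > 2 the p-th root of the energy approaches the maximum edge-length deviation, so driving the energy to zero drives the worst deviation to zero. The patch, edge set, and target length are made up; this is not the paper's second-order solver.

      import numpy as np

      def edge_lengths(V, E):
          """Edge lengths of a quad net with vertices V (n, 3) and edges E (m, 2)."""
          return np.linalg.norm(V[E[:, 0]] - V[E[:, 1]], axis=1)

      def chebyshev_energy(V, E, L, p=8):
          """sum_e |l_e - L|^p: for p > 2 its p-th root approaches the maximum
          deviation, so minimizing it forces all edges towards the length L."""
          return np.sum(np.abs(edge_lengths(V, E) - L) ** p)

      # toy patch with edge-length deviations of 0.05 and 0.2 from the target L = 1
      V = np.array([[0, 0, 0], [1.05, 0, 0], [0, 1.2, 0], [1.05, 1.2, 0]], float)
      E = np.array([[0, 1], [2, 3], [0, 2], [1, 3]])
      for p in (2, 8, 32):
          print(p, chebyshev_energy(V, E, 1.0, p) ** (1.0 / p))  # tends to the max deviation
      print("max deviation:", np.abs(edge_lengths(V, E) - 1.0).max())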

» Animation and motion capture

15.30 - 17.00 Room: Amphithéâtre Royal Session Chair: Matthias Niessner
  • Abstract: An abundance of older, as well as recent work exists at the intersection of computer vision and computer graphics on accurate estimation of dynamic facial landmarks with applications in facial animation, emotion recognition, and beyond. However, only a few publications exist that optimize the actual layout of facial landmarks to ensure an optimal trade-off between compact layouts and detailed capturing. At the same time, we observe that applications like social games prefer simplicity and performance over detail to reduce the computational budget especially on mobile devices. Other common attributes of such applications are predefined low-dimensional models to animate and a large, diverse user-base. In contrast to existing methods that focus on creating person-specific facial landmarks, we suggest to derive application-specific facial landmarks. We formulate our optimization method on the widely adopted blendshape model. First, a score is defined suitable to compute a characteristic landmark for each blendshape. In a following step, we optimize a global function, which mimics merging of similar landmarks to one. The optimization is solved in less than a second using integer linear programming and guarantees a globally optimal solution to an NP-hard problem. Our application-specific approach is faster and fundamentally different to previous, actor-specific methods. Resulting layouts are more similar to empirical layouts. Compared to empirical landmarks, our layouts require only a fraction of landmarks to achieve the same numerical error when reconstructing the animation from landmarks. The method is compared against previous work and tested on various blendshape models, representing a wide spectrum of applications. Download here / MetaData

  • Abstract: Cartoons and animation domain videos have very different characteristics compared to real-life images and videos. In addition, this domain carries a large variability in styles. Current computer vision and deep-learning solutions often fail on animated content because they were trained on natural images. In this paper we present a method to refine a semantic representation suitable for specific animated content. We first train a neural network on a large-scale set of animation videos and use the mapping to deep features as an embedding space. Next, we use self-supervision to refine the representation for any specific animation style by gathering many examples of animated characters in this style, using a multi-object tracking. These examples are used to define triplets for contrastive loss training. The refined semantic space allows better clustering of animated characters even when they have diverse manifestations. Using this space we can build dictionaries of characters in an animation videos, and define specialized classifiers for specific stylistic content (e.g., characters in a specific animation series) with very little user effort. These classifiers are the basis for automatically labeling characters in animation videos. We present results on a collection of characters in a variety of animation styles. Download here / MetaData

  • Abstract: Synthesizing novel views of dynamic humans from stationary monocular cameras is a specialized but desirable setup. This is particularly attractive as it does not require static scenes, controlled environments, or specialized capture hardware. In contrast to techniques that exploit multi-view observations, the problem of modeling a dynamic scene from a single view is significantly more under-constrained and ill-posed. In this paper, we introduce Neural Motion Consensus Flow (MoCo-Flow), a representation that models dynamic humans in stationary monocular cameras using a 4D continuous time-variant function. We learn the proposed representation by optimizing for a dynamic scene that minimizes the total rendering error, over all the observed images. At the heart of our work lies a carefully designed optimization scheme, which includes a dedicated initialization step and is constrained by a motion consensus regularization on the estimated motion flow. We extensively evaluate MoCo-Flow on several datasets that contain human motions of varying complexity, and compare, both qualitatively and quantitatively, to several baselines and ablated variations of our methods, showing the efficacy and merits of the proposed approach. Pretrained model, code, and data will be released for research purposes upon paper acceptance. Download here / MetaData

Wednesday, 27

» Appearance and shading

09.00 - 10.30 Room: Amphithéâtre Royal Session Chair: George Drettakis
  • Abstract: We propose a hybrid method to reconstruct a physically-based spatially varying BRDF from a single high resolution picture of an outdoor surface captured under natural lighting conditions with any kind of camera device. Relying on both deep learning and explicit processing, our PBR material acquisition handles the removal of shades, projected shadows and specular highlights present when capturing a highly irregular surface and makes it possible to properly retrieve the underlying geometry. To achieve this, we train two cascaded U-Nets on physically-based materials, rendered under various lighting conditions, to infer the spatially varying albedo and normal maps. Our network processes relatively small image tiles (512×512 pixels) and we propose a solution to handle larger image resolutions by solving a Poisson system across these tiles. We complete this pipeline with analytical solutions to reconstruct height, roughness and ambient occlusion. Download here / MetaData

  • Abstract:

  • CGF

    Abstract: We propose a relighting method for outdoor images. Our method mainly focuses on predicting cast shadows in arbitrary novel lighting directions from a single image while also accounting for shading and global effects such as the sun light color and clouds. Previous solutions for this problem rely on reconstructing occluder geometry, e.g., using multi-view stereo, which requires many images of the scene. Instead, in this work we make use of a noisy off-the-shelf single-image depth map estimation as a source of geometry. Whilst this can be a good guide for some lighting effects, the resulting depth map quality is insufficient for directly ray-tracing the shadows. Addressing this, we propose a learned image space ray-marching layer that converts the approximate depth map into a deep 3D representation that is fused into occlusion queries using a learned traversal. Our proposed method achieves, for the first time, state-of-the-art relighting results, with only a single image as input. Download here / MetaData
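
    For context, a naive image-space shadow test over a height/depth map is sketched below; this is the classical fixed ray march that struggles with noisy single-image depth, the step the paper replaces with a learned traversal. The step count, bias, and toy height map are arbitrary choices.

      import numpy as np

      def heightfield_shadow(height, light_dir, n_steps=64, step=1.0, bias=1e-3):
          """Naive image-space shadow test on a height map (H, W): march from every
          pixel towards the light in image space and mark it shadowed if the height
          map rises above the ray at any step."""
          H, W = height.shape
          ys, xs = np.mgrid[0:H, 0:W].astype(float)
          z = height.copy()
          lit = np.ones((H, W), bool)
          dx, dy, dz = light_dir / np.linalg.norm(light_dir)
          for _ in range(n_steps):
              xs += dx * step
              ys += dy * step
              z += dz * step
              inside = (xs >= 0) & (xs < W - 1) & (ys >= 0) & (ys < H - 1)
              xi, yi = xs[inside].astype(int), ys[inside].astype(int)
              occluded = height[yi, xi] > z[inside] + bias
              idx = np.where(inside)
              lit[idx[0][occluded], idx[1][occluded]] = False
          return lit

      # toy usage: a bump on a flat ground plane casts a shadow away from the light
      h = np.zeros((64, 64))
      h[28:36, 28:36] = 8.0
      print(heightfield_shadow(h, np.array([1.0, 0.0, 0.5])).mean())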

» Human animation and topology

09.00 - 10.30 Room: Salle 1 Session Chair: Damien Rohmer
  • Abstract: Parametric 3D shape models are heavily utilized in computer graphics and vision applications to provide priors on the observed variability of an object’s geometry (e.g., for faces). Original models were linear and operated on the entire shape at once. They were later enhanced to provide localized control on different shape parts separately. In deep shape models, nonlinearity was introduced via a sequence of fully-connected layers and activation functions, and locality was introduced in recent models that use mesh convolution networks. As common limitations, these models often dictate, in one way or another, the allowed extent of spatial correlations and also require that a fixed mesh topology be specified ahead of time. To overcome these limitations, we present Shape Transformers, a new nonlinear parametric 3D shape model based on transformer architectures. A key benefit of this new model comes from using the transformer’s self-attention mechanism to automatically learn nonlinear spatial correlations for a class of 3D shapes. This is in contrast to global models that correlate everything and local models that dictate the correlation extent. Our transformer 3D shape autoencoder is a better alternative to mesh convolution models, which require specially-crafted convolution, and down/up-sampling operators that can be difficult to design. Our model is also topologically independent: it can be trained once and then evaluated on any mesh topology, unlike most previous methods. We demonstrate the application of our model to different datasets, including 3D faces, 3D hand shapes and full human bodies. Our experiments demonstrate the strong potential of our Shape Transformer model in several applications in computer graphics and vision. Download here / MetaData

  • CGF

    Abstract:

  • Abstract: Simulating crowds requires controlling a very large number of trajectories of characters and is usually performed using crowd steering algorithms. The question of choosing the right algorithm with the right parameter values is of crucial importance given the large impact on the quality of results. In this paper, we study the performance of a number of steering policies (i.e., simulation algorithm and its parameters) in a variety of contexts, resorting to an existing quality function able to automatically evaluate simulation results. This analysis allows us to map contexts to the performance of steering policies. Based on this mapping, we demonstrate that distributing the best performing policies among characters improves the resulting simulations. Furthermore, we also propose a solution to dynamically adjust the policies, for each agent independently and while the simulation is running, based on the local context each agent is currently in. We demonstrate significant improvements of simulation results compared to previous work that would optimize parameters once for the whole simulation, or pick an optimized, but unique and static, policy for a given global simulation context. Download here / MetaData
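
    A loose sketch of the final ingredient, mapping each agent's local context to the best-performing steering policy at runtime; the two context features, the pre-evaluated samples, and the nearest-neighbour lookup are invented for illustration and are not the paper's quality function or mapping.

      import numpy as np

      # Hypothetical precomputed mapping: context descriptor -> best policy index.
      # Contexts here are (local density, mean relative speed), both made up.
      context_samples = np.array([[0.2, 0.1], [0.8, 0.1], [0.2, 0.9], [0.8, 0.9]])
      best_policy = np.array([0, 1, 2, 1])     # index into a list of steering policies

      def pick_policy(agent_context):
          """Assign an agent the policy that performed best in the nearest
          pre-evaluated context (toy nearest-neighbour lookup)."""
          d = np.linalg.norm(context_samples - agent_context, axis=1)
          return int(best_policy[np.argmin(d)])

      # at every simulation step, each agent re-queries the map with its local context
      agents = np.array([[0.25, 0.2], [0.7, 0.8]])
      print([pick_policy(a) for a in agents])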

» Geometry

11.00 - 12.30 Room: Amphithéâtre Royal Session Chair: Julie Digne

» Meshes

11.00 - 12.30 Room: Salle 1 Session Chair: Stefan Ohrhallinger

» Texture

16.00 - 17.30 Room: Amphithéâtre Royal Session Chair: Vladislav Golyanik
  • Abstract: Urban procedural modeling has benefited from recent advances in deep learning and computer graphics. However, few, if any, approaches have automatically produced procedural building roof models from a single overhead satellite image. Large-scale roof modeling is important for a variety of applications in urban content creation and in urban planning (e.g., solar panel planning, heating/cooling/rainfall modeling). While the allure of modeling only from satellite images is clear, unfortunately, structures obtained from satellite images are often low-resolution, noisy, and heavily occluded, so getting a clean and complete view of urban structures is difficult. In this paper, we present a framework that exploits the inherent structure present in man-made buildings and roofs by explicitly identifying the compact space of potential building shapes and roof structures. Then, we utilize this relatively compact space with a two-component solution combining procedural modeling and deep learning. Specifically, we use a building decomposition component to separate the building into roof parts and predict regularized building footprints in a procedural format, and use a roof ridge detection component to refine the individual roof parts by estimating the procedural roof ridge parameters. Our qualitative and quantitative assessments over multiple satellite datasets show that our method outperforms various state-of-the-art methods. Download here / MetaData

  • Abstract: Semantic segmentation is a difficult task even when trained in a supervised manner on photographs. In this paper, we tackle the problem of semantic segmentation of artistic paintings, an even more challenging task because of a much larger diversity in colors, textures, and shapes and because there are no ground truth annotations available for segmentation. We propose an unsupervised method for semantic segmentation of paintings using domain adaptation. Our approach creates a training set of pseudo-paintings in specific artistic styles by using style-transfer on the PASCAL VOC 2012 dataset, and then applies domain confusion between PASCAL VOC 2012 and real paintings. These two steps build on a new dataset we gathered called DRAM (Diverse Realism in Art Movements) composed of figurative art paintings from four movements, which are highly diverse in pattern, color, and geometry. To segment new paintings, we present a composite multi-domain adaptation method that trains on each sub-domain separately and composes their solutions during inference time. Our method provides better segmentation results not only on the specific artistic movements of DRAM, but also on other, unseen ones. We compare our approach to alternative methods and show applications of semantic segmentation in art paintings. Download here / MetaData

  • Abstract: Commonly used image-space layouts of shading points, such as used in deferred shading, are strictly view-dependent, which restricts efficient caching and temporal amortization. In contrast, texture-space layouts can represent shading on all surface points and can be tailored to the needs of a particular application. However, the best grouping of shading points—which we call a shading unit—in texture space remains unclear. Choices of shading unit granularity (how many primitives or pixels per unit) and in shading unit parametrization (how to assign texture coordinates to shading points) lead to different outcomes in terms of final image quality, overshading cost, and memory consumption. Among the possible choices, shading units consisting of larger groups of scene primitives, so-called meshlets, remain unexplored as of yet. In this paper, we introduce a taxonomy for analyzing existing texture-space shading methods based on the group size and parametrization of shading units. Furthermore, we introduce a novel texture-space layout strategy that operates on large shading units: the meshlet shading atlas. We experimentally demonstrate that the meshlet shading atlas outperforms previous approaches in terms of image quality, run-time performance and temporal upsampling for a given number of fragment shader invocations. The meshlet shading atlas lends itself to work together with popular cluster-based rendering of meshes with high geometric detail. Download here / MetaData

Thursday, 28

» Modeling and editing II

09.00 - 10.30 Room: Amphithéâtre Royal Session Chair: James Gain
  • Abstract: Procedural modeling allows for an automatic generation of large amounts of similar assets, but there is limited control over the generated output. We address this problem by introducing Automatic Differentiable Procedural Modeling (ADPM). The forward procedural model generates a final editable model. The user modifies the output interactively, and the modifications are transferred back to the procedural model as its parameters by solving an inverse procedural modeling problem. We present an auto-differentiable representation of the procedural model that significantly accelerates optimization. In ADPM the procedural model is always available, all changes are non-destructive, and the user can interactively model the 3D object while keeping the procedural representation. ADPM provides the user with precise control over the resulting model comparable to non-procedural interactive modeling. ADPM is node-based, and it generates hierarchical 3D scene geometry converted to a differentiable computational graph. Our formulation focuses on the differentiability of high-level primitives and bounding volumes of components of the procedural model rather than the detailed mesh geometry. Although this high-level formulation limits the expressiveness of user edits, it allows for efficient derivative computation and enables interactivity. We designed a new optimizer to solve for inverse procedural modeling. It can detect that an edit is under-determined and has degrees of freedom. Leveraging cheap derivative evaluation, it can explore the region of optimality of edits and suggest various configurations, all of which achieve the requested edit differently. We show our system’s efficiency on several examples, and we validate it by a user study. Download here / MetaData

  • Abstract: Modern CAD tools represent 3D designs not only as geometry, but also as a program composed of geometric operations, each of which depends on a set of parameters. Program representations enable meaningful and controlled shape variations via parameter changes. However, achieving desired modifications solely through parameter editing is challenging when CAD models have not been explicitly authored to expose select degrees of freedom in advance. We introduce a novel bidirectional editing system for 3D CAD programs. In addition to editing the CAD program, users can directly manipulate 3D geometry and our system infers parameter updates to keep both representations in sync. We formulate inverse edits as a set of constrained optimization objectives, returning plausible updates to program parameters that both match user intent and maintain program validity. Our approach implements an automatically differentiable domain-specific language for CAD programs, providing derivatives for this optimization to be performed quickly on any expressed program. Our system enables rapid, interactive exploration of a constrained 3D design space by allowing users to manipulate the program and geometry interchangeably during design iteration. While our approach is not designed to optimize across changes in geometric topology, we show it is expressive and performant enough for users to produce a diverse set of design variants, even when the CAD program contains a relatively large number of parameters. Download here / MetaData
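
    A minimal sketch of the inverse-edit idea: fit program parameters so the generated geometry matches a dragged point while staying near the current parameters. The toy box "program", the regularization weight, and the use of scipy's least_squares with finite-difference derivatives (instead of the paper's auto-differentiable domain-specific language) are all assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      def cad_program(params):
          """Toy CAD 'program': a box of width w and height h centred at the origin.
          Returns the 2D corner positions it generates (stand-in for real geometry)."""
          w, h = params
          return np.array([[-w/2, -h/2], [w/2, -h/2], [w/2, h/2], [-w/2, h/2]])

      def inverse_edit(params0, corner_id, target):
          """Find parameter updates so the dragged corner reaches `target`, while
          preferring small changes to the current parameters."""
          def residual(p):
              geom = cad_program(p)
              return np.concatenate([geom[corner_id] - target,    # match the edit
                                     0.01 * (p - params0)])       # stay near params0
          return least_squares(residual, params0).x

      # usage: the user drags the top-right corner of a 2 x 1 box to (1.5, 0.8)
      print(inverse_edit(np.array([2.0, 1.0]), 2, np.array([1.5, 0.8])))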

  • CGF

    Abstract:

» Physics simulation

09.00 - 10.30 Room: Salle 1 Session Chair: Jos Stam
  • Abstract: We present an adaptively updated Lagrangian Material Point Method (A-ULMPM) to alleviate non-physical artifacts, such as the cell-crossing instability and numerical fracture, that plague state-of-the-art Eulerian formulations of MPM, while still allowing for large deformations that arise in fluid simulations. A-ULMPM spans MPM discretizations from total Lagrangian formulations to Eulerian formulations. We design an easy-to-implement physics-based criterion that allows A-ULMPM to update the reference configuration adaptively for measuring physical states, including stress, strain, interpolation kernels and their derivatives. For better efficiency and conservation of angular momentum, we further integrate the APIC [JSS*15] and MLS-MPM [HFG*18] formulations in A-ULMPM by augmenting the accuracy of velocity rasterization using both the local velocity and its first-order derivatives. Our theoretical derivations use a nodal discretized Lagrangian, instead of the weak form discretization in MLS-MPM [HFG*18], and naturally lead to a “modified” MLS-MPM in A-ULMPM, which can recover MLS-MPM using a completely Eulerian formulation. A-ULMPM does not require significant changes to traditional Eulerian formulations of MPM, and is computationally more efficient since it only updates interpolation kernels and their derivatives during large topology changes. We present end-to-end 3D simulations of stretching and twisting hyperelastic solids, viscous flows, splashing liquids, and multi-material interactions with large deformations to demonstrate the efficacy of our new method. Download here / MetaData

  • Abstract: This paper proposes a method for simulating liquids in large bodies of water by coupling together a water surface wave simulator with a 3D Navier-Stokes simulator. The surface wave simulation uses the equivalent sources method (ESM) to efficiently animate large bodies of water with precisely controllable wave propagation behavior. The 3D liquid simulator animates complex non-linear fluid behaviors like splashes and breaking waves using off-the-shelf simulators based on FLIP or the level set method with semi-Lagrangian advection. We combine the two approaches by using the 3D solver to animate localized non-linear behaviors, and the 2D wave solver to animate larger regions with linear surface physics. We use the surface motion from the 3D solver as boundary conditions for the 2D surface wave simulator, and we use the velocity and surface heights from the 2D surface wave simulator as boundary conditions for the 3D fluid simulation. We also introduce a novel technique for removing visual artifacts caused by numerical errors in 3D fluid solvers: we use experimental data to estimate the artificial dispersion caused by the 3D solver and we then carefully tune the wave speeds of the 2D solver to match it, effectively eliminating any differences in wave behavior across the boundary. To the best of our knowledge, this is the first time such an empirically driven error compensation approach has been used to remove coupling errors from a physics simulator. Our coupled simulation approach leverages the strengths of each simulation technique, animating large environments with seamless transitions between 2D and 3D physics. Download here / MetaData

  • Abstract: We introduce the first exact root parity counter for continuous collision detection (CCD). That is, our algorithm computes the parity (even or odd) of the number of roots of the cubic polynomial arising from a CCD query. We note that the parity is unable to differentiate between zero (no collisions) and the rare case of two roots (collisions). Our method does not have numerical parameters to tune, has a performance comparable to efficient approximate algorithms, and is exact. We test our approach on a large collection of synthetic tests and real simulations, and we demonstrate that it can be easily integrated into existing simulators. Download here / MetaData
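
    A small self-contained illustration of exact parity counting with rational arithmetic: for a cubic that is nonzero at both endpoints, the number of roots in (0, 1), counted with multiplicity, is odd exactly when the endpoint values differ in sign. This is only a toy consistent with the abstract's statement, not the paper's algorithm.

      from fractions import Fraction

      def cubic_root_parity(a, b, c, d):
          """Parity of the number of roots of a*t^3 + b*t^2 + c*t + d in (0, 1),
          evaluated exactly with rational arithmetic and no numerical tolerances.
          Roots exactly at the endpoints must be handled separately."""
          a, b, c, d = map(Fraction, (a, b, c, d))
          f0 = d                  # value at t = 0
          f1 = a + b + c + d      # value at t = 1
          if f0 == 0 or f1 == 0:
              raise ValueError("root at an endpoint; handle separately")
          return "odd" if (f0 < 0) != (f1 < 0) else "even"

      # usage: f(t) = t^3 + t - 1 has exactly one root in (0, 1)
      print(cubic_root_parity(1, 0, 1, -1))   # -> "odd"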

» Rendering II

11.00 - 12.30 Room: Amphithéâtre Royal Session Chair: David Coeurjolly
  • Abstract: Recent advances in neural rendering indicate immense promise for architectures that learn light transport, allowing efficient rendering of global illumination effects once such methods are trained. The training phase of these methods can be seen as a form of pre-computation, which has a long standing history in Computer Graphics. In particular, Pre-computed Radiance Transfer (PRT) achieves real-time rendering by freezing some variables of the scene (geometry, materials) and encoding the distribution of others, allowing interactive rendering at runtime. We adopt the same configuration as PRT – global illumination of static scenes under dynamic environment lighting – and investigate different neural network architectures, inspired by the design principles and theoretical analysis of PRT. We introduce four different architectures, and show that those based on knowledge of light transport models and PRT-inspired principles improve the quality of global illumination predictions at equal training time and network size, without the need for high-end ray-tracing hardware. Download here / MetaData

  • Abstract: Rendering photo-realistic images using Monte Carlo path tracing often requires sampling a large number of paths to reach acceptable levels of noise. This is particularly the case when rendering participating media, that complexify light paths with multiple scattering events. Our goal is to accelerate the rendering of heterogeneous participating media by exploiting redundancy across views, for instance when rendering animated camera paths, motion blur in consecutive frames or multi-view images such as lenticular or light-field images. This poses a challenge as existing methods for sharing light paths across views cannot handle heterogeneous participating media and classical estimators are not optimal in this context. We address these issues by proposing three key ideas. First, we propose new volume shift mappings to transform light paths from one view to another within the recently introduced null-scattering framework, taking into account changes in density along the transformed path. Second, we generate a shared path suffix that best contributes to a subset of views, thus effectively reducing variance. Third, we introduce the multiple weighted importance sampling estimator that benefits from multiple importance sampling for combining sampling strategies, and from weighted importance sampling for reducing the variance due to non contributing strategies. We observed significant reuse when views largely overlap, with no visible bias and reduced variance compared to regular path tracing at equal time. Our method further readily integrates into existing volumetric path tracing pipelines. Download here / MetaData

  • Abstract: When rendering images using Spherical Harmonics (SH), the projection of a spherical function on the SH basis remains a computational challenge both for high-frequency functions and for emission functions from complex light sources. Recent works investigate efficient SH projection of the light field coming from polygonal and spherical lights. To further reduce the rendering time, instead of computing the SH coefficients at each vertex of a mesh or at each fragment on an image, it has been shown, for polygonal area light, that computing both the SH coefficients and their spatial gradients on a grid covering the scene allows the efficient and accurate interpolation of these coefficients at each shaded point. In this paper, we develop analytical recursive formulae to compute the spatial gradients of SH coefficients for spherical light. This requires the efficient computation of the spatial gradients of the SH basis function that we also derive. Compared to existing method for polygonal light, our method is faster, requires less memory and scales better with respect to the SH band limit. We also show how to approximate polygonal lights using spherical lights to benefit from our derivations. To demonstrate the effectiveness of our proposal, we integrate our algorithm in a shading system able to render fully dynamic scenes with several hundreds of spherical lights in real time. Download here / MetaData

» Topology

11.00 - 12.30 Room: Salle 1 Session Chair: Hamish Carr
  • Abstract: Spectral geometric methods have brought revolutionary changes to the field of geometry processing. Of particular interest is the study of the Laplacian spectrum as a compact, isometry and permutation-invariant representation of a shape. Some recent works show how the intrinsic geometry of a full shape can be recovered from its spectrum, but no approaches consider the more challenging problem of recovering the geometry from the spectral information of partial shapes. In this paper, we propose a possible way to fill this gap. We introduce a learning-based method to estimate the Laplacian spectrum of the union of partial non-rigid 3D shapes, without actually computing the 3D geometry of the union or any correspondence between those partial shapes. We do so by operating purely in the spectral domain and by defining the union operation between short sequences of eigenvalues. We show that the approximated union spectrum can be used as-is to reconstruct the complete geometry [MRC*19], perform region localization on a template [RTO*19] and retrieve shapes from a database, generalizing ShapeDNA [RWP06] to work with partialities. Working with eigenvalues allows us to deal with unknown correspondence, different sampling, and different discretizations (point clouds and meshes alike), making this operation especially robust and general. Our approach is data-driven and can generalize to isometric and non-isometric deformations of the surface, as long as these stay within the same semantic class (e.g., human bodies or horses), as well as to partiality artifacts not seen at training time. Download here / MetaData

  • Abstract: In this paper, we present a simple yet effective formulation called Coverage Axis for 3D shape skeletonization. Inspired by the set cover problem, our key idea is to cover all the surface points using as few inside medial balls as possible. This formulation inherently induces a compact and expressive approximation of the Medial Axis Transform (MAT) of a given shape. Different from previous methods that rely on local approximation error, our method allows a global consideration of the overall shape structure, leading to an efficient high-level abstraction and superior robustness to noise. Another appealing aspect of our method is its capability to handle more generalized input such as point clouds and poor-quality meshes. Extensive comparisons and evaluations demonstrate the remarkable effectiveness of our method for generating compact and expressive skeletal representation to approximate the MAT. Download here / MetaData
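
    The covering formulation can be illustrated with a plain greedy set-cover pass over candidate inner balls, shown below; the paper formulates and solves the selection more carefully, so the greedy loop, the toy point set, and the candidate balls are purely illustrative.

      import numpy as np

      def greedy_coverage_axis(surface_pts, ball_centers, ball_radii):
          """Pick as few candidate inner balls as possible so that every surface
          sample lies inside (or on) a chosen ball, using a greedy pass."""
          # covers[i, j] == True if ball j covers surface point i
          d = np.linalg.norm(surface_pts[:, None, :] - ball_centers[None, :, :], axis=2)
          covers = d <= ball_radii[None, :]
          chosen, uncovered = [], np.ones(len(surface_pts), bool)
          while uncovered.any():
              gains = (covers & uncovered[:, None]).sum(axis=0)
              j = int(np.argmax(gains))
              if gains[j] == 0:
                  raise ValueError("candidate balls cannot cover all surface points")
              chosen.append(j)
              uncovered &= ~covers[:, j]
          return chosen

      # toy usage: samples on a segment, three candidate balls along it
      pts = np.linspace(0, 1, 11)[:, None] * [1.0, 0.0, 0.0]
      centers = np.array([[0.2, 0, 0], [0.5, 0, 0], [0.8, 0, 0]])
      radii = np.array([0.3, 0.3, 0.3])
      print(greedy_coverage_axis(pts, centers, radii))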

  • Abstract: Vectorization is a commonly used technique for converting raster images to vector format and has long been a research focus in computer graphics and vision. While a number of attempts have been made to extract the topology of line drawings and further convert them to vector representations, the existing methods commonly focused on resolving junctions composed of thin lines. They usually fail for line drawings composed of thick lines, especially at junctions. In this paper, we propose an automatic line drawing vectorization method that can reconstruct the topology of line drawings of arbitrary thickness. Our key observation is that no matter the lines are thin or thick, the boundaries of the lines always provide reliable hints for reconstructing the topology. For example, the boundaries of two continuous line segments at a junction are usually smoothly connected. By analyzing the continuity of boundaries, we can better analyze the topology at junctions. In particular, we first extract the skeleton of the input line drawing via thinning. Then we analyze the reliability of the skeleton points based on boundaries. Reliable skeleton points are preserved while unreliable skeleton points are reconstructed based on boundaries again. Finally, the skeleton after reconstruction is vectorized as the output. We apply our method on line drawings of various contents and styles. Satisfying results are obtained. Our method significantly outperforms existing methods for line drawings composed of thick lines. Download here / MetaData

» Visualization

15.30 - 17.00 Room: Amphithéâtre Royal Session Chair: George Pierre Bonneau
  • Abstract: We present a method to render massive brain tractograms in real time. Tractograms model the white matter architecture of the human brain using millions of 3D polylines (fibers), summing up to billions of segments. They are used by neurosurgeons before surgery as well as by researchers to better understand the brain. A typical raw dataset for a single brain represents dozens of gigabytes of data, preventing their interactive rendering. We address this challenge with a new GPU mesh shader pipeline based on a decomposition of the fiber set into compressed local representations that we call fiblets. Their spatial coherence is used at runtime to efficiently cull hidden geometry at the task shader stage while synthesizing the visible ones as polyline meshlets in a warp-scale parallel fashion at the mesh shader stage. As a result, our pipeline can feed a standard deferred shading engine to visualize the mesostructures of the brain with various classical rendering techniques, as well as simple interaction primitives. We demonstrate that our algorithm provides real-time framerates on very large tractograms that were out of reach for previous methods while offering a fiber-level granularity in both rendering and interaction. Download here / MetaData

  • Abstract: We present a technique for visualizing point clouds using a neural network. Our technique allows for an instant preview of any point cloud, and bypasses the notoriously difficult surface reconstruction problem or the need to estimate oriented normals for splat-based rendering. We cast the preview problem as a conditional image-to-image translation task, and design a neural network that translates a point depth map directly into an image, where the point cloud is visualized as though a surface was reconstructed from it. Furthermore, the resulting appearance of the visualized point cloud can be, optionally, conditioned on simple control variables (e.g., color and light). We demonstrate that our technique instantly produces plausible images, and can effectively handle, on the fly, noise, non-uniform sampling, and thin surface sheets. Download here / MetaData

  • CGF

    Abstract:

Friday, 29

» 3D printing, fabrication

09.00 - 10.30 Room: Amphithéâtre Royal Session Chair: Amit Bermano
  • Abstract: We explore the optimization of closed space-filling curves under orientation objectives. By solidifying material along the closed curve, solid layers of 3D prints can be manufactured in a single continuous extrusion motion. The control over orientation enables the deposition to align with specific directions in different areas, or to produce a locally uniform distribution of orientations, patterning the solidified volume in a precisely controlled manner. Our optimization framework proceeds in two steps. First, we cast a combinatorial problem, optimizing Hamiltonian cycles within a specially constructed graph. We rely on a stochastic optimization process based on local operators that modify a cycle while preserving its Hamiltonian property. Second, we use the result to initialize a geometric optimizer that improves the smoothness and uniform coverage of the cycle while further optimizing for alignment and orientation objectives. Download here / MetaData

  • Abstract: We introduce a new mechanism for self-actuating deployable structures, based on printing a dense pattern of closely-spaced plastic ribbons on sheets of pre-stretched elastic fabric. We leverage two shape-changing effects that occur when such an assembly is printed and allowed to relax: first, the incompressible plastic ribbons frustrate the contraction of the fabric back to its rest state, forcing residual strain in the fabric and creating intrinsic curvature. Second, the differential compression at the interface between the plastic and fabric layers yields a bilayer effect in the direction of the ribbons, making each ribbon buckle into an arc at equilibrium state and creating extrinsic curvature. We describe an inverse design tool to fabricate low-cost, lightweight prototypes of freeform surfaces using the controllable directional distortion and curvature offered by this mechanism. The core of our method is a parameterization algorithm that bounds surface distortions along and across principal curvature directions, along with a pattern synthesis algorithm that covers a surface with ribbons to match the target distortions and curvature given by the aforementioned parameterization. We demonstrate the flexibility and accuracy of our method by fabricating and measuring a variety of surfaces, including nearly-developable surfaces as well as surfaces with positive and negative mean curvature, which we achieve thanks to a simple hardware setup that allows printing on both sides of the fabric. Download here / MetaData

  • Abstract: We study structural rigidity for assemblies with mechanical joints. Existing methods identify whether an assembly is structurally rigid by assuming parts are perfectly rigid. Yet, an assembly identified as rigid may not be that “rigid” in practice, and existing methods cannot quantify how rigid an assembly is. We address this limitation by developing a new measure, worst-case rigidity, to quantify the rigidity of an assembly as the largest possible deformation that the assembly undergoes for arbitrary external loads of fixed magnitude. Computing worst-case rigidity is non-trivial due to non-rigid parts and different joint types. We thus formulate a new computational approach by encoding parts and their connections into a stiffness matrix, in which parts are modeled as deformable objects and joints as soft constraints. Based on this, we formulate worst-case rigidity analysis as an optimization that seeks the worst-case deformation of an assembly for arbitrary external loads, and solve the optimization problem via an eigenanalysis. Furthermore, we present methods to optimize the geometry and topology of various assemblies to enhance their rigidity, as guided by our rigidity measure. In the end, we validate our method on a variety of assembly structures with physical experiments and demonstrate its effectiveness by designing and fabricating several structurally rigid assemblies. Download here / MetaData
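
    A compact sketch of the eigenanalysis view of worst-case rigidity: with a symmetric positive-definite (reduced) stiffness matrix K and unit-magnitude loads, the largest possible displacement norm is 1 / lambda_min(K), attained for a load along the corresponding eigenvector. The 2x2 matrix below is a made-up stand-in for a real assembly's stiffness matrix.

      import numpy as np

      def worst_case_rigidity(K):
          """Worst-case deformation for unit loads: u = K^{-1} f has maximal norm
          when f aligns with the eigenvector of the smallest eigenvalue of K,
          and the worst-case magnitude is 1 / lambda_min."""
          w, V = np.linalg.eigh(K)          # K assumed symmetric positive definite
          return 1.0 / w[0], V[:, 0]        # worst-case magnitude and load direction

      # toy usage: a 2-DOF structure that is much softer in one of its modes
      K = np.array([[4.0, -1.0], [-1.0, 0.5]])
      mag, f_worst = worst_case_rigidity(K)
      print(mag, f_worst)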

» Simulation of clothes and crowds

09.00 - 10.30 Room: Salle 1 Session Chair: Andreas Aristidou
  • Abstract: The real-time simulation of human crowds has many applications. In a typical crowd simulation, each person (‘agent’) in the crowd moves towards a goal while adhering to local constraints. Many algorithms exist for specific local ‘steering’ tasks such as collision avoidance or group behavior. However, these do not easily extend to completely new types of behavior, such as circling around another agent or hiding behind an obstacle. They also tend to focus purely on an agent’s velocity without explicitly controlling its orientation. This paper presents a novel sketch-based method for modelling and simulating many steering behaviors for agents in a crowd. Central to this is the concept of an interaction field (IF): a vector field that describes the velocities or orientations that agents should use around a given ‘source’ agent or obstacle. An IF can also change dynamically according to parameters, such as the walking speed of the source agent. IFs can be easily combined with other aspects of crowd simulation, such as collision avoidance. Using an implementation of IFs in a real-time crowd simulation framework, we demonstrate the capabilities of IFs in various scenarios. This includes game-like scenarios where the crowd responds to a user-controlled avatar. We also present an interactive tool that computes an IF based on input sketches. This IF editor lets users intuitively and quickly design new types of behavior, without the need for programming extra behavioral rules. We thoroughly evaluate the efficacy of the IF editor through a user study, which demonstrates that our method enables non-expert users to easily enrich any agent-based crowd simulation with new agent interactions. Download here / MetaData
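
    A toy version of an interaction field and how it combines with a goal velocity is sketched below; the analytic "circle around the source" field, its radius, and the blending weight are invented here, whereas in the paper such fields are authored from user sketches.

      import numpy as np

      def circling_interaction_field(source_pos, query_pos, radius=2.0):
          """Hand-authored interaction field: neighbours are pushed tangentially
          around the source agent, with a linear falloff up to `radius`."""
          d = query_pos - source_pos
          r = np.linalg.norm(d, axis=-1, keepdims=True) + 1e-9
          tangent = np.stack([-d[..., 1], d[..., 0]], axis=-1) / r
          falloff = np.clip(1.0 - r / radius, 0.0, 1.0)
          return tangent * falloff

      def steer(agent_pos, goal_vel, source_pos, w_if=1.0):
          """Blend the agent's goal velocity with the sampled interaction field,
          mirroring how IFs combine with other steering terms."""
          return goal_vel + w_if * circling_interaction_field(source_pos, agent_pos)

      # toy usage: one agent near the source, currently heading in +x
      print(steer(np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 0.0])))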

  • Abstract: Kinesthetic garments provide physical feedback on body posture and motion through tailored distributions of reinforced material. Their ability to selectively stiffen a garment’s response to specific motions makes them appealing for rehabilitation, sports, robotics, and many other application fields. However, finding designs that distribute a given amount of reinforcement material to maximally stiffen the response to specified motions is a challenging problem. In this work, we propose an optimization-driven approach for automated design of reinforcement patterns for kinesthetic garments. Our main contribution is to cast this design task as an on-body topology optimization problem. Our method allows designers to explore a continuous range of designs corresponding to various amounts of reinforcement coverage. Our model captures both tight contact and lift-off separation between cloth and body. We demonstrate our method on a variety of reinforcement design problems for different body sites and motions. Optimal designs lead to a two- to threefold improvement in performance in terms of energy density. A set of manufactured designs were consistently rated as providing more resistance than baselines in a comparative user study. Download here / MetaData

  • Abstract: We present a novel mesh-based learning approach (N-Cloth) for plausible 3D cloth deformation prediction. Our approach is general and can handle cloth or obstacles represented by triangle meshes with arbitrary topologies. We use graph convolution to transform the cloth and object meshes into a latent space to reduce the non-linearity in the mesh space. Our network can predict the target 3D cloth mesh deformation based on the initial state of the cloth mesh template and the target obstacle mesh. Our approach can handle complex cloth meshes with up to 100K triangles and scenes with various objects corresponding to SMPL humans, non-SMPL humans or rigid bodies. In practice, our approach can be used to generate plausible cloth simulation at 30-45 fps on an NVIDIA GeForce RTX 3090 GPU. We highlight its benefits over prior learning-based methods and physically-based cloth simulators. Download here / MetaData