
Arsène Pérard-Gayot

Saarland University
Computer Graphics Lab & Intel Visual Computing Institute
Saarland Informatics Campus, E1_1, room E11
66123 Saarbrücken
Germany
Phone: +49 681 302 3837
Fax: +49 681 302 3843
E-Mail: perard at cg.uni-saarland.de

Publication List

3.
Pérard-Gayot, Arsène, Weier, Martin, Membarth, Richard, Slusallek, Philipp, Leißa, Roland; Hack, Sebastian
RaTrace: Simple and Efficient Abstractions for BVH Ray Traversal Algorithms
Proceedings of the 16th ACM SIGPLAN International Conference on Generative Programming: Concepts & Experiences (GPCE), pages 157–168.
October 2017

Keywords: Computer Graphics, Ray Tracing, Functional Programming, Domain-Specific Languages

Abstract: In order to achieve the highest possible performance, the ray traversal and intersection routines at the core of every high-performance ray tracer are usually hand-coded, heavily optimized, and implemented separately for each hardware platform—even though they share most of their algorithmic core. The results are implementations that heavily mix algorithmic aspects with hardware and implementation details, making the code non-portable and difficult to change and maintain. In this paper, we present a new approach that offers the ability to define in a functional language a set of conceptual, high-level language abstractions that are optimized away by a special compiler in order to maximize performance. Using this abstraction mechanism we separate a generic ray traversal and intersection algorithm from its low-level aspects that are specific to the target hardware. We demonstrate that our code is not only significantly more flexible, simpler to write, and more concise but also that the compiled results perform as well as state-of-the-art implementations on any of the tested CPU and GPU platforms.
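The separation the abstract describes — a generic traversal algorithm factored apart from the platform-specific intersection routines — can be sketched outside the paper's own language. The following Python sketch is illustrative only (the paper uses compile-time abstractions in Impala, and the names, the one-dimensional geometry, and the dictionary-based node layout here are assumptions): the traversal is written once and the varying pieces are passed in as functions.

```python
# Hypothetical sketch: a stack-based BVH traversal that is generic
# over the node and primitive intersection routines, mirroring the
# separation of algorithmic core from hardware specifics described
# in the abstract. Geometry is reduced to 1-D intervals for brevity.

def traverse(nodes, ray, hit_node, hit_prim):
    """Return the closest hit distance, or None. `hit_node` and
    `hit_prim` stand in for the platform-specific routines."""
    stack = [0]               # start at the root (index 0)
    closest = None
    while stack:
        node = nodes[stack.pop()]
        if not hit_node(node["bounds"], ray):
            continue          # ray misses this subtree entirely
        if "prims" in node:   # leaf: test the contained primitives
            for prim in node["prims"]:
                t = hit_prim(prim, ray)
                if t is not None and (closest is None or t < closest):
                    closest = t
        else:                 # inner node: descend into the children
            stack.extend(node["children"])
    return closest

# One possible "platform" instantiation: a scalar slab test for 1-D
# intervals and point primitives on the ray line (direction nonzero).
def slab_test(bounds, ray):
    (lo, hi), (o, d) = bounds, ray
    t0, t1 = (lo - o) / d, (hi - o) / d
    return max(min(t0, t1), 0.0) <= max(t0, t1)

def point_hit(prim, ray):
    o, d = ray
    t = (prim - o) / d
    return t if t >= 0.0 else None
```

In the paper this parameterization is resolved at compile time by the special compiler, so the abstraction carries no runtime cost; in this ordinary-Python sketch the callbacks remain runtime function calls.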

2.
Pérard-Gayot, Arsène, Kalojanov, Javor; Slusallek, Philipp
GPU Ray-tracing using Irregular Grids
Computer Graphics Forum
May 2017

Abstract: We present a spatial index structure to accelerate ray tracing on GPUs. It is a flat, non-hierarchical spatial subdivision of the scene into axis aligned cells of varying size. In order to construct it, we first nest an octree into each cell of a uniform grid. We then apply two optimization passes to increase ray traversal performance: First, we reduce the expected cost for ray traversal by merging cells together. This adapts the structure to complex primitive distributions, solving the "teapot in a stadium" problem. Second, we decouple the cell boundaries used during traversal for rays entering and exiting a given cell. This allows us to extend the exiting boundaries over adjacent cells that are either empty or do not contain additional primitives. Now, exiting rays can skip empty space and avoid repeating intersection tests. Finally, we demonstrate that in addition to the fast ray traversal performance, the structure can be rebuilt efficiently in parallel, allowing for ray tracing dynamic scenes.
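The cost-driven merging pass can be illustrated with a simplified SAH-style estimate. The constants and the probability model below are assumptions for illustration, not the paper's exact formulation: merging two cells saves a traversal step for rays that cross both, at the price of redundant intersection tests for rays that touch only one.

```python
# Hypothetical sketch of an expected-cost test for cell merging.
# All constants and probabilities are illustrative assumptions.

C_TRAV = 1.0   # assumed cost of stepping into a cell
C_ISECT = 2.0  # assumed cost of one ray-primitive intersection test

def merge_is_cheaper(pa, na, pb, nb, pab):
    """pa, pb: probabilities that a ray enters cell a / cell b;
    pab: probability that it crosses both; na, nb: primitive counts.
    Merging trades one saved traversal step (for rays crossing both
    cells) against redundant intersection tests (for rays that would
    have touched only one of them)."""
    split = (pa * (C_TRAV + na * C_ISECT)
             + pb * (C_TRAV + nb * C_ISECT))
    merged = (pa + pb - pab) * (C_TRAV + (na + nb) * C_ISECT)
    return merged <= split
```

Under this toy model, merging adjacent empty cells always pays off (rays skip through them in one step), while merging two densely and disjointly populated cells does not — consistent with the abstract's goal of adapting the structure to uneven primitive distributions.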

1.
Weier, Martin, Roth, Thorsten, Kruijff, Ernst, Hinkenjann, André, Pérard-Gayot, Arsène, Slusallek, Philipp; Li, Yongmin
Foveated Real-Time Ray Tracing for Head-Mounted Displays
Computer Graphics Forum
October 2016

Abstract: Head-mounted displays with dense pixel arrays used for virtual reality applications require high frame rates and low latency rendering. This forms a challenging use case for any rendering approach. In addition to its ability to generate realistic images, ray tracing offers a number of distinct advantages, but has been held back mainly by its performance. In this paper, we present an approach that significantly improves image generation performance of ray tracing. This is done by combining foveated rendering based on eye tracking with reprojection rendering using previous frames in order to drastically reduce the number of new image samples per frame. To reproject samples a coarse geometry is reconstructed from a G-Buffer. Possible errors introduced by this reprojection as well as parts that are critical to the perception are scheduled for resampling. Additionally, a coarse color buffer is used to provide an initial image, refined smoothly by adding more samples where needed. Evaluations and user tests show that our method achieves real-time frame rates, while visual differences compared to fully rendered images are hardly perceivable. As a result, we can ray trace non-trivial static scenes for the Oculus DK2 HMD at 1182 × 1464 per eye within the VSync limits without perceived visual differences.
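The foveated sampling idea — dense sampling at the tracked gaze point, sparse sampling in the periphery, skipped pixels filled by reprojection — can be sketched with an illustrative falloff function. The foveal radius, decay rate, and peripheral floor below are assumptions, not the paper's measured model:

```python
# Hypothetical sketch of a foveation falloff: the probability of
# tracing a fresh ray for a pixel decreases with its angular
# distance (eccentricity) from the gaze point. All parameter values
# are illustrative assumptions.
import math
import random

def sample_probability(ecc_deg, fovea_deg=5.0, floor=0.05):
    """Full sampling inside the assumed foveal radius, Gaussian-like
    decay outside it, clamped to a minimum peripheral rate."""
    if ecc_deg <= fovea_deg:
        return 1.0
    falloff = math.exp(-((ecc_deg - fovea_deg) / 15.0) ** 2)
    return max(falloff, floor)

def keep_sample(ecc_deg, rng=random):
    """Stochastically decide whether to trace a new ray for a pixel;
    skipped pixels would be reprojected from previous frames."""
    return rng.random() < sample_probability(ecc_deg)
```

Pixels rejected by `keep_sample` are exactly those the abstract's reprojection step would cover, which is where the drastic reduction in new samples per frame comes from.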
