Adaptive Spatial Sample Caching
Despite tremendous progress in recent years, rendering 3D scenes realistically with advanced shading, and global illumination in particular, remains extremely difficult at interactive rates. Precomputation techniques can reduce the cost during rendering; however, they typically require long preprocessing times and substantial storage. We propose a novel world-space sample caching approach for walkthroughs of static scenes that requires no precomputation and instead relies on aggressive caching. At run time, pixel-sized patches, projected adaptively onto the tangent space of the visible triangles, store the results of shading computations. The patches are organized in a cache with a small, fixed memory footprint; in subsequent frames these patches can be retrieved, thus exploiting frame-to-frame coherence. In contrast to previous caching methods, the presented technique is extremely easy to implement, requiring only a few dozen lines of code. For moderate camera movements, this caching mechanism can reduce the number of secondary rays in subsequent frames by more than an order of magnitude.
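The cache described above can be pictured as a fixed-size, direct-mapped table of shaded patches keyed by the triangle they lie on and their quantized tangent-space position. The following is only an illustrative sketch under assumed names (`PatchKey`, `PatchCache`, the hash constants), not the paper's actual implementation:

```cpp
#include <array>
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical key for a pixel-sized patch: the visible triangle's id and
// the patch's quantized coordinates in that triangle's tangent space.
struct PatchKey {
    uint32_t tri;   // id of the visible triangle
    uint16_t u, v;  // quantized tangent-space patch coordinates
    bool operator==(const PatchKey& o) const {
        return tri == o.tri && u == o.u && v == o.v;
    }
};

// Fixed-footprint, direct-mapped cache: memory use is set once at
// construction and never grows, matching the abstract's description.
struct PatchCache {
    struct Entry {
        PatchKey key{};
        std::array<float, 3> radiance{};
        bool valid = false;
    };
    std::vector<Entry> slots;

    explicit PatchCache(size_t capacity) : slots(capacity) {}

    size_t slot(const PatchKey& k) const {
        // Simple multiplicative hash; a real implementation would tune this.
        uint64_t h = uint64_t(k.tri) * 73856093u
                   ^ uint64_t(k.u)   * 19349663u
                   ^ uint64_t(k.v)   * 83492791u;
        return h % slots.size();
    }

    // Store a shading result; an older entry in the same slot is evicted.
    void store(const PatchKey& k, const std::array<float, 3>& rgb) {
        Entry& e = slots[slot(k)];
        e.key = k;
        e.radiance = rgb;
        e.valid = true;
    }

    // On a hit, the cached result is reused and the secondary rays for
    // this patch are skipped in the current frame.
    std::optional<std::array<float, 3>> lookup(const PatchKey& k) const {
        const Entry& e = slots[slot(k)];
        if (e.valid && e.key == k) return e.radiance;
        return std::nullopt;
    }
};
```

In a walkthrough, the renderer would query `lookup` for each visible patch before spawning secondary rays and call `store` after shading a miss; frame-to-frame coherence makes most queries hits under moderate camera motion.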