Ghost in the Omphalos

Computer Graphics 2021 Rendering Competition, Saarland University



Titled 'Ghost in the Omphalos', our scene is inspired by the cyberpunk anime 'Ghost in the Shell', where 'ghost' refers to the essence of oneself. 'Omphalos' is the Greek word for 'navel', which also names the concave underside of Cloud Gate ("The Bean"), a public sculpture by the artist Sir Anish Kapoor. The Bean, with its encapsulated reality, takes centre stage in our rendered scene, with a human in its simplest form staring at it from the outside. Despite The Bean being seemingly close, the human presence is insignificant in its reflection, towered over by skyscrapers and made more void by the gleaming lights at night. Our work examines the ambiguities of this distorted reality, provoking thought on humanness and immaterial materiality in a futuristic setting.

We used OpenMP's dynamic multithreading to spread the rendering load over the ten cores of an Intel Core i9-10900X CPU. We rendered, on average, 1200 samples per pixel for the 1440p image, taking 8 hours. Our scene contains 1.3 million triangles.



What goes into our path tracer?


16 BRDF samples

Cook-Torrance material

The Cook-Torrance material model aims to capture the specular effects we see on different materials in the real world. It identifies the main contributing factor as the minuscule microfacets that make up the material's surface. Three terms therefore define a Cook-Torrance material: the distribution of the microfacets, the geometry (shadowing and masking) of the microfacets, and the Fresnel effect. To match Cycles, we combined the GGX microfacet distribution and geometry terms with the unpolarised dielectric Fresnel term covered in lectures. For diffuse reflections, we used the simple Lambertian model. We went beyond the combined material implemented in the assignments and used Multiple Importance Sampling to combine specular and diffuse reflections, reducing the number of fireflies caused by their often wildly different distributions.
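The three terms can be sketched in scalar form as follows. This is a minimal illustration, not our actual renderer code; the function names and the isotropic roughness parameter `alpha` are assumptions made for the example.

```python
import math

def ggx_d(cos_nm, alpha):
    # GGX (Trowbridge-Reitz) normal distribution term D(m);
    # cos_nm is the cosine between the surface normal and the microfacet normal.
    a2 = alpha * alpha
    denom = cos_nm * cos_nm * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def smith_g1(cos_nv, alpha):
    # Smith masking term G1 for GGX (the separable, non-height-correlated form).
    a2 = alpha * alpha
    return 2.0 * cos_nv / (cos_nv + math.sqrt(a2 + (1.0 - a2) * cos_nv * cos_nv))

def fresnel_dielectric(cos_i, eta):
    # Unpolarised dielectric Fresnel term: Snell's law for the transmitted
    # angle, then the average of the s- and p-polarised reflectances.
    sin2_t = (1.0 - cos_i * cos_i) / (eta * eta)
    if sin2_t >= 1.0:
        return 1.0  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    r_s = (cos_i - eta * cos_t) / (cos_i + eta * cos_t)
    r_p = (eta * cos_i - cos_t) / (eta * cos_i + cos_t)
    return 0.5 * (r_s * r_s + r_p * r_p)
```

At normal incidence with eta = 1.5, `fresnel_dielectric` gives the familiar 4% reflectance of glass, which is a quick sanity check for any implementation.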


16 BRDF samples and 16 light samples combined with MIS

Light sampling

We built a light sampling structure on top of our groups and primitives. Using this acceleration structure, we can sample among hundreds of thousands of emissive triangles for each reflection at a logarithmic performance cost. To keep our method fast and simple, we only consider the emission of each triangle in this step. These emissive triangles also support emission textures, further improving upon the area lights we previously implemented in the assignments.
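The core idea can be illustrated with a flat prefix-sum table searched in O(log n); this is a simplification of our hierarchical structure over groups and primitives, shown here only to make the logarithmic-cost claim concrete:

```python
import bisect

def build_light_cdf(powers):
    # Prefix sums over per-triangle emitted power form a discrete CDF.
    cdf, total = [], 0.0
    for p in powers:
        total += p
        cdf.append(total)
    return cdf, total

def sample_light(cdf, total, u):
    # Binary search maps a uniform random number u in [0, 1) to one
    # emissive triangle, chosen proportionally to its power: O(log n).
    idx = bisect.bisect_left(cdf, u * total)
    prev = cdf[idx - 1] if idx > 0 else 0.0
    pdf = (cdf[idx] - prev) / total
    return idx, pdf
```

Returning the discrete pdf alongside the index is what allows the sample to be combined with BRDF samples via MIS later.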


16 BRDF samples and 16 light samples combined with MIS. Each light sample chosen from 32 candidates

Weighted Reservoir Resampled Importance Sampling

Inspired by ReSTIR, we used a combination of Weighted Reservoir Sampling and Resampled Importance Sampling to boost the quality of our light sampler. We evaluate many candidate light samples but only trace a ray towards the one that is close to, and favourably oriented with respect to, the reflecting surface. This process is probabilistic and, due to the nonlinearity of Multiple Importance Sampling (dividing by a weighted PDF), introduces some bias into our output; in practice, however, this is unnoticeable even with only a handful of candidates.


How do we deal with colour and noise?


Uniformly noisy image, thanks to adaptive sampling

Adaptive sampling

We store the mean luminance and the mean squared luminance of our samples and, along with the sample count, use them to estimate the standard error of our image. As we perceive (and tone map, for that matter) luminance logarithmically, we divide our error estimate by the mean luminance, yielding an estimate of the relative error. We use this estimate to proportionally distribute samples in the next round, adaptively increasing sampling in perceptually noisier regions.
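The per-pixel error estimate reduces to a few lines; this sketch takes running sums rather than means (an equivalent bookkeeping choice made for the example):

```python
import math

def relative_standard_error(sum_lum, sum_lum_sq, n):
    # Sample variance from E[x^2] - E[x]^2, then the standard error
    # of the mean estimator: sqrt(var / n).
    mean = sum_lum / n
    var = max(sum_lum_sq / n - mean * mean, 0.0)  # clamp negative rounding
    stderr = math.sqrt(var / n)
    # Dividing by the mean luminance yields a *relative* error, matching
    # our logarithmic perception of brightness; the epsilon guards
    # against division by zero in black pixels.
    return stderr / max(mean, 1e-6)
```

Pixels are then given sample budgets proportional to this value in the next round.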


Gamma corrected image

Gamma correction

The sRGB colour space accounts for our perception of luminance by mapping RGB values nonlinearly. Although such mapping results in a more perceptually uniform and, therefore, more space-efficient image format, we need to gamma correct the output of our renderer before saving it to PNG files. Forgoing this step would result in artificially dark images.
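The sRGB encoding is a piecewise function, not a pure power law; a per-channel sketch:

```python
def linear_to_srgb(x):
    # Piecewise sRGB transfer function (IEC 61966-2-1): a linear segment
    # near black, and a 1/2.4 exponent with offset elsewhere.
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * (x ** (1.0 / 2.4)) - 0.055
```

Applying this to each channel before quantising to 8 bits is the "gamma correction" step; skipping it stores linear values that a PNG viewer interprets as sRGB, hence the artificially dark result.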


Tone mapped image

Tone mapping

Even with gamma correction, PNG files only store a limited dynamic range. To avoid clipping artefacts and give a better visual impression of brightness, we tone map our images using the ACES tone mapping curve.
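A common way to apply this curve per channel is Krzysztof Narkowicz's rational fit; we show that fit here as an illustrative stand-in (the full ACES RRT+ODT pipeline also involves colour-space matrix transforms that this sketch omits):

```python
def aces_tonemap(x):
    # Narkowicz's fit of the ACES filmic curve: a rational function that
    # compresses high dynamic range into [0, 1] with a film-like shoulder.
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    y = (x * (a * x + b)) / (x * (c * x + d) + e)
    return min(max(y, 0.0), 1.0)
```

Very bright inputs approach the asymptote a/c and are clamped to 1, which is exactly what prevents hard clipping artefacts around our scene's gleaming lights.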


Environment map

Environment maps, emission textures

As we use PNG environment maps and textures, we needed to inverse gamma correct and inverse tone map them before use in our renderer.
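The inverse gamma step is simply the sRGB decode, the exact inverse of the piecewise encoding above (the inverse tone map, not shown, inverts the rational ACES fit analogously):

```python
def srgb_to_linear(x):
    # Inverse sRGB transfer function: linearises 8-bit PNG texture values
    # so they can be used as radiance in the renderer.
    if x <= 0.04045:
        return x / 12.92
    return ((x + 0.055) / 1.055) ** 2.4
```

Without this step, textures would be treated as already-linear radiance and render too bright and washed out.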