Titled 'Ghost in the Omphalos', our scene is inspired by the cyberpunk anime 'Ghost in the Shell', where the ghost refers to the essence of oneself. Omphalos is the Greek word for 'navel' and the name of the concave underside of Cloud Gate ("The Bean"), a public sculpture by the artist Sir Anish Kapoor. The Bean, with its encapsulated reality, takes centre stage in our rendered scene, with a human in its simplest form staring at it from the outside. Although The Bean is seemingly close, the human presence in its reflection is insignificant, towered over by skyscrapers and made more of a void by the gleaming lights of the night. Our work examines the ambiguities of this distorted reality and provokes thought on humanness and immaterial materiality in a futuristic setting.
We used OpenMP's dynamic scheduling to spread the rendering load across the ten cores of an Intel Core i9-10900X CPU. We rendered, on average, 1200 samples per pixel for the 1440p image, which took 8 hours. Our scene contains 1.3 million triangles.
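As a simplified, stand-alone sketch of the idea (the function and parameter names here are illustrative, not our exact code), dynamic scheduling hands rows out to threads on demand, so threads that finish cheap rows immediately pick up new work — which matters when the cost per pixel varies across the image:

```cpp
#include <vector>

// Sketch of the render loop. `shade(x, y)` stands in for tracing all
// samples of one pixel. `schedule(dynamic)` assigns rows to threads as
// they become free instead of splitting the image statically.
std::vector<double> render(int width, int height, double (*shade)(int, int)) {
    std::vector<double> image(width * height);
    #pragma omp parallel for schedule(dynamic)
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            image[y * width + x] = shade(x, y);
    return image;
}
```

Without `-fopenmp` the pragma is ignored and the loop simply runs serially, which makes the parallelism easy to toggle while debugging.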
We downloaded models of skyscrapers, a human, a robot dog, a road segment and a bench and imported them into Blender. The mesh and texture files exported by Blender are then processed by our ray tracer.
To produce a realistic image and match the output of Blender's Cycles renderer for our scene as closely as possible, we implemented a path tracer on top of our ray tracer. We implemented the Cook-Torrance material model, complete with the GGX microfacet distribution, to give a physically accurate look to our scene's materials. Our path tracer then uses Multiple Importance Sampling (MIS) to combine sampling of this material model with sampling of the lights in our scene.
(by Martin)
16 BRDF samples
The Cook-Torrance material model aims to capture the specular effects we see on real-world materials. It attributes them to the minuscule microfacets that make up a material's surface, so three terms define a Cook-Torrance material: the distribution of the microfacets, their geometry (shadowing and masking) and the Fresnel effect. To match Cycles, we combined the GGX microfacet distribution and geometry terms with the unpolarised dielectric Fresnel term covered in lectures. For diffuse reflections, we used the simple Lambertian model. Going beyond the combined material implemented in the assignments, we used Multiple Importance Sampling to combine specular and diffuse reflections, reducing the number of fireflies caused by their often wildly different distributions.
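The three terms can be sketched as follows; this is a minimal scalar version (per-channel, no sampling code), with `alpha` denoting the GGX roughness and `eta` the relative index of refraction, and is not necessarily term-for-term our implementation:

```cpp
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// GGX normal distribution term D(h); `noh` is dot(n, h) for the
// shading normal n and half-vector h.
double ggx_d(double alpha, double noh) {
    double a2 = alpha * alpha;
    double d = noh * noh * (a2 - 1.0) + 1.0;
    return a2 / (kPi * d * d);
}

// Smith GGX shadowing-masking for a single direction (G1 term).
double ggx_g1(double alpha, double nov) {
    double a2 = alpha * alpha;
    return 2.0 * nov / (nov + std::sqrt(a2 + (1.0 - a2) * nov * nov));
}

// Unpolarised dielectric Fresnel: the average of the s- and p-polarised
// reflectances, with eta = n_transmitted / n_incident.
double fresnel_dielectric(double cos_i, double eta) {
    double sin2_t = (1.0 - cos_i * cos_i) / (eta * eta);
    if (sin2_t >= 1.0) return 1.0;               // total internal reflection
    double cos_t = std::sqrt(1.0 - sin2_t);
    double rs = (cos_i - eta * cos_t) / (cos_i + eta * cos_t);
    double rp = (eta * cos_i - cos_t) / (eta * cos_i + cos_t);
    return 0.5 * (rs * rs + rp * rp);
}

// Cook-Torrance specular term f_s = D * G * F / (4 |n.l| |n.v|).
double cook_torrance(double alpha, double eta,
                     double nol, double nov, double noh, double voh) {
    double d = ggx_d(alpha, noh);
    double g = ggx_g1(alpha, nol) * ggx_g1(alpha, nov);
    double f = fresnel_dielectric(voh, eta);
    return d * g * f / (4.0 * nol * nov);
}
```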
16 BRDF samples and 16 light samples combined with MIS
We built a light sampling structure on top of our groups and primitives. Using this acceleration structure, we can sample among hundreds of thousands of emissive triangles for each reflection at a logarithmic performance cost. To keep the method fast and simple, we only consider each triangle's emission in this step. These emissive triangles also support emission textures, further improving on the area lights we implemented in the assignments.
16 BRDF samples and 16 light samples combined with MIS. Each light sample chosen from 32 candidates
Inspired by ReSTIR, we used a combination of Weighted Reservoir Sampling and Resampled Importance Sampling to boost the quality of our light sampler. We evaluate many candidate light samples but only trace a ray towards one that is close to the reflecting surface and favourably oriented towards it. This process is probabilistic and, due to the nonlinearities in Multiple Importance Sampling (dividing by a weighted pdf), introduces some bias into our output; in practice, however, this is unnoticeable even with only a handful of candidates.
To output pleasing images, we need to take perceptual considerations into account. First, the Monte Carlo noise in path-traced renderings is spatially uneven; we compensate for this using adaptive sampling. Second, the PNG file format assumes colours in the sRGB colourspace; therefore, we gamma correct our images. Finally, the sRGB colourspace is limited to 8 bits per channel and thus to a limited dynamic range; we use tone mapping to compress the linear output of our renderer in a visually pleasing way.
(by Wanyue)
Uniformly noisy image, thanks to adaptive sampling
We store the mean luminance and the mean squared luminance of our samples and, along with the sample count, use them to estimate the standard error of our image. As we perceive (and tone map, for that matter) luminance logarithmically, we divide our error estimate by the mean luminance, yielding an estimate of the relative error. We use this estimate to distribute samples proportionally in the next round, adaptively increasing sampling in perceptually noisier regions.
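The per-pixel bookkeeping can be sketched as follows (field and function names are illustrative); the relative error falls out of the two running sums and the sample count:

```cpp
#include <cmath>

// Per-pixel statistics accumulated during rendering.
struct PixelStats {
    double lum_sum = 0.0;    // sum of sample luminances
    double lum_sq_sum = 0.0; // sum of squared sample luminances
    int n = 0;               // sample count

    void add(double lum) { lum_sum += lum; lum_sq_sum += lum * lum; ++n; }

    // Relative standard error of the mean luminance: standard error
    // divided by the mean, matching our roughly logarithmic perception
    // of brightness.
    double relative_error() const {
        double mean = lum_sum / n;
        double var = lum_sq_sum / n - mean * mean; // population variance
        if (var < 0.0) var = 0.0;                  // guard numerical noise
        double se = std::sqrt(var / n);
        return mean > 0.0 ? se / mean : 0.0;
    }
};
```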
Gamma corrected image
The sRGB colour space accounts for our perception of luminance by mapping RGB values nonlinearly. Although this mapping results in a more perceptually uniform and, therefore, more space-efficient image format, it means we need to gamma correct the output of our renderer before saving it to PNG files. Forgoing this step would result in artificially dark images.
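For reference, the standard sRGB encoding (IEC 61966-2-1) is a short linear segment near black followed by a 2.4 power curve, giving an overall gamma of roughly 2.2; applied per channel to linear values in [0, 1]:

```cpp
#include <cmath>

// Linear-to-sRGB transfer function: linear segment near black,
// then a 2.4 power curve.
double linear_to_srgb(double c) {
    if (c <= 0.0031308) return 12.92 * c;
    return 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
}
```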
Tone mapped image
Even with gamma correction, PNG files only store a limited dynamic range. To avoid clipping artefacts and give a better visual impression of brightness, we tone map our images using the ACES tone mapping curve.
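Several fitted approximations of the ACES curve exist; our implementation may differ in its exact constants, but the widely used single-expression fit by Krzysztof Narkowicz conveys the shape, applied per channel to linear radiance before gamma correction:

```cpp
#include <algorithm>

// Fitted approximation of the ACES filmic tone curve (Narkowicz's fit).
// Compresses unbounded linear radiance into [0, 1] with a filmic
// shoulder, avoiding the hard clipping of a plain clamp.
double aces_tonemap(double x) {
    double y = (x * (2.51 * x + 0.03)) / (x * (2.43 * x + 0.59) + 0.14);
    return std::clamp(y, 0.0, 1.0);
}
```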
Environment map
As we use PNG environment maps and textures, we needed to inverse gamma correct and inverse tone map them before using them in our renderer.
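The inverse gamma step is the exact inverse of the sRGB encoding (the inverse tone mapping step depends on the chosen tone curve, so it is omitted here); applied to 8-bit texture values scaled to [0, 1]:

```cpp
#include <cmath>

// sRGB-to-linear transfer function, the inverse of the encode step:
// brings stored texture values back into linear radiometric space.
double srgb_to_linear(double c) {
    if (c <= 0.04045) return c / 12.92;
    return std::pow((c + 0.055) / 1.055, 2.4);
}
```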
The 3D models/environment maps we used are: human, chicago bean, dog, street, shanghai hdr, cab, cyberpunk car, cyberpunk car2, lamp, skyscraper, skyscraper2, skyscraper3, bench, low poly cab.
website template