Here is a list of all the features implemented in our raytracer:
We implemented normal maps to give our models more detail without adding more triangles. All the textures we used came with normal maps, which is why we decided against the bump mapping technique already present in our raytracer. With normal maps, the RGB values of each texel tell the shader in which direction the surface normal should point.
The implementation is similar to bump mapping. For details see rt/primmod/normalmap.cpp.
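The RGB-to-normal mapping can be sketched as follows. This is a minimal illustration, not the code from rt/primmod/normalmap.cpp; the `Vec3` type and `decodeNormal` helper are hypothetical names, and it assumes the common convention that each channel in [0, 1] encodes a tangent-space component in [-1, 1]:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Decode a tangent-space normal from an RGB texel: n = 2c - 1 per channel,
// then renormalize to unit length to undo quantization error.
Vec3 decodeNormal(float r, float g, float b) {
    Vec3 n{2.0f * r - 1.0f, 2.0f * g - 1.0f, 2.0f * b - 1.0f};
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}
```

Under this convention the typical "flat" normal-map color (0.5, 0.5, 1.0) decodes to the unperturbed normal (0, 0, 1).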
To speed up rendering we implemented multithreading. We dynamically look up how many hardware threads the system provides. Each thread looks up the next pixel in the image that has not been processed yet, renders it, and writes the result to the image. A thread terminates once every pixel has been processed. This way no thread is ever idle and we can use the full power of the CPU. For the implementation see rt/renderer.cpp.
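The scheme above can be sketched with a shared atomic counter that hands out pixel indices, so no thread sits idle while work remains. This is a simplified stand-in for rt/renderer.cpp; `renderPixel` is a hypothetical placeholder for the real per-pixel rendering routine:

```cpp
#include <algorithm>
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

// Spawn one worker per hardware thread; each worker repeatedly claims the
// next unprocessed pixel via fetch_add and renders it, until none remain.
void renderImage(int width, int height, std::vector<float>& image,
                 void (*renderPixel)(int x, int y, std::vector<float>&)) {
    std::atomic<int> next{0};
    const int total = width * height;
    unsigned numThreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < numThreads; ++t) {
        pool.emplace_back([&] {
            for (int i = next.fetch_add(1); i < total; i = next.fetch_add(1))
                renderPixel(i % width, i / width, image);  // claim one pixel
        });
    }
    for (auto& th : pool) th.join();  // all pixels processed: threads terminate
}
```

Distinct threads only ever write to distinct pixels, so no further locking is needed on the image buffer.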
We created our own glass implementation that doesn't rely on random sampling to decide which ray to trace. Our glass simply returns both the reflected and the refracted ray to the integrator. Rendering takes longer this way, but the resulting image no longer looks noisy. For this special glass material to work properly we had to create a new integrator. For implementation details see rt/integrators/custom.cpp and rt/materials/ourglass.cpp.
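The deterministic split can be sketched as returning a pair of weights that the integrator uses to blend the two traced rays, instead of stochastically picking one. This is an illustrative sketch, not the code from rt/materials/ourglass.cpp; it assumes the Fresnel term is computed with Schlick's approximation, and the `Split`/`splitGlass` names are hypothetical:

```cpp
#include <cassert>
#include <cmath>

struct Split { float reflectWeight; float refractWeight; };

// Weight for the reflected and refracted ray at an interface between two
// media; the weights always sum to 1, so blending both rays is noise-free.
Split splitGlass(float cosTheta, float iorOutside, float iorInside) {
    float r0 = (iorOutside - iorInside) / (iorOutside + iorInside);
    r0 *= r0;  // reflectance at normal incidence
    float fresnel = r0 + (1.0f - r0) * std::pow(1.0f - cosTheta, 5.0f);
    return {fresnel, 1.0f - fresnel};
}
```

Tracing both branches at every glass hit doubles the work per bounce (hence the longer render times), but removes the variance a random choice would introduce.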
Our original scene was too dark, and we solved this problem by implementing gamma correction. When loading a texture you can specify whether it is already gamma corrected or not.
Normally all of our diffuse textures are already gamma corrected, so we need to skip them when applying gamma correction. Normal maps store direction vectors rather than colors, so they don't need any gamma correction and should not be altered.
In the end, after rendering is done, we go over every pixel of the image and apply the gamma correction formula to get the final pixel color. For implementation details see rt/textures/imagetex.cpp and core/image.cpp.
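The final pass can be sketched as below. This is a minimal illustration, assuming the common display gamma of 2.2 (the actual value used in core/image.cpp is not stated above); `gammaEncode`/`gammaDecode` are hypothetical names:

```cpp
#include <cassert>
#include <cmath>

// Encode a linear pixel value for display: c_out = c_lin^(1/gamma).
float gammaEncode(float linear, float gamma = 2.2f) {
    return std::pow(linear, 1.0f / gamma);
}

// Inverse: convert an already-gamma-corrected texel back to linear,
// which is what a renderer would do when loading a gamma-corrected texture.
float gammaDecode(float encoded, float gamma = 2.2f) {
    return std::pow(encoded, gamma);
}
```

Because encoding with exponent 1/2.2 brightens mid-tones (e.g. 0.5 maps to about 0.73), this directly addresses the scene looking too dark.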
Area lights produce noise. We tried to use filters to reduce this noise, since they are easy to implement and also very fast. However, they are not really able to remove all the noise without losing information, so we ended up not using any of the filters in the final image. We implemented two filters: Gaussian blur and edge detection. We approached Gaussian blur in two different ways. The first was a simple implementation that just runs a fixed matrix over every pixel. The second, called a Lanczos filter, works similarly to the first approach; the only difference is that Lanczos does not have fixed values for every element in the matrix but instead computes them from a sinc function. The filters were implemented in core/image.cpp.