What have we done.

Technology and the future seen from another point of view.

Available in Full HD and SD.

The story behind.

Instead of just telling you what inspired us to create this scene, we would like to share the story behind it for a change. It goes like this...

Our story starts in 2019. It was the most peaceful time the world had ever seen. The greatest minds of the age were all striving to take information technology to the next level. This is how beautiful our modern world was before the apocalypse.

Generated using environment mapping and a perfectly reflective material on the sphere.

Years later came the apocalyptic war between the technology giants that we ourselves had empowered. Nothing is left of those beautiful, modern cities.

We ended up with desert all around, where robots were the only species to survive. Now they fight among themselves, because they learned how people destroyed themselves in the pursuit of power, and they fear the same will happen to them.

The buildup.

Once we had a rough concept of the scene we wanted to create, we turned to some of the most popular online resources to find free-to-use 3D models.

Before building the scene in our ray tracer, we needed some quick compositions of our 3D models so we could decide how to place each one. We used Blender to go through several iterations and finally arrived at the scene we present here.

To use our models in our ray tracer, we exported them as individual OBJ models. We also edited the .mtl files to adjust various material properties, since they are interpreted differently by our ray tracer than by Blender.

Environment mapping

To complete our scene, we needed to place it in an environment. In our case we wanted all the action to happen at night, with a starry sky in the background. To achieve this we needed environment mapping.

We both worked on environment mapping. We created a cube of "infinite size" with one texture image for each of its sides. We say the cube has "infinite size" because we only intersect it when no other intersection is found, and the coordinate mapper uses only the direction of the ray to determine which face, and which point on that face, was hit.

The implementation of the environment mapping can be found in

  • rt/materials/environment.cpp
  • rt/solids/environment.cpp
  • rt/coordmappers/environmental.cpp
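The direction-only lookup described above can be sketched as follows. This is an illustrative reconstruction, not the framework's actual code: the `Face`, `CubeUV`, and `cubeLookup` names are assumptions. The dominant axis of the ray direction selects the cube face, and the remaining two components give the texture coordinates on that face.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of a direction-based cube-map lookup: the
// "infinite" cube is never intersected geometrically; the normalized
// ray direction alone selects a face and a (u, v) on that face.
enum Face { PosX, NegX, PosY, NegY, PosZ, NegZ };

struct CubeUV { Face face; float u, v; };

CubeUV cubeLookup(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    CubeUV r;
    if (ax >= ay && ax >= az) {            // X is the dominant axis
        r.face = x > 0 ? PosX : NegX;
        r.u = (x > 0 ? -z : z) / ax;
        r.v = y / ax;
    } else if (ay >= az) {                 // Y is the dominant axis
        r.face = y > 0 ? PosY : NegY;
        r.u = x / ay;
        r.v = (y > 0 ? -z : z) / ay;
    } else {                               // Z is the dominant axis
        r.face = z > 0 ? PosZ : NegZ;
        r.u = (z > 0 ? x : -x) / az;
        r.v = y / az;
    }
    // Remap from [-1, 1] to [0, 1] texture coordinates.
    r.u = 0.5f * (r.u + 1.0f);
    r.v = 0.5f * (r.v + 1.0f);
    return r;
}
```

The resulting face index picks one of the six textures, and (u, v) samples it.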

Normal Mapping

To give the impression that our 3D models are more complex than they really are, we implemented normal mapping so that we could use the normal maps provided by the artists.

Skender implemented normal mapping as an extension of the bump mapper already present in the framework. The components of the normal vector are stored in the RGB channels of the normal map. A tricky step was taking this vector from texture space to world space, which we achieved with a tangent-space transformation. The loader module, already part of the framework, was also modified to let us specify the normal map while loading the model.

The implementation of the normal mapping can be found in

  • rt/loaders/obj.cpp
  • rt/primmod/bmap.cpp
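The tangent-space transformation mentioned above can be sketched like this. The `Vec3` type and helper names are illustrative, not the framework's: each texel of the normal map stores a tangent-space normal in [0, 1], which is remapped to [-1, 1] and expressed in world space via the tangent (T), bitangent (B), and geometric normal (N).

```cpp
#include <cassert>
#include <cmath>

// Minimal sketch, assuming an orthonormal (T, B, N) frame per hit point.
struct Vec3 { float x, y, z; };

Vec3 scaled(const Vec3& v, float s) { return {v.x * s, v.y * s, v.z * s}; }
Vec3 sum(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

Vec3 normalized(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// rgb: normal-map texel in [0, 1]^3; T, B, N: world-space tangent frame.
Vec3 perturbedNormal(const Vec3& rgb, const Vec3& T, const Vec3& B, const Vec3& N) {
    // Remap each channel from [0, 1] to [-1, 1].
    Vec3 t = {2.0f * rgb.x - 1.0f, 2.0f * rgb.y - 1.0f, 2.0f * rgb.z - 1.0f};
    // Tangent space -> world space: n = t.x*T + t.y*B + t.z*N.
    return normalized(sum(sum(scaled(T, t.x), scaled(B, t.y)), scaled(N, t.z)));
}
```

A texel of (0.5, 0.5, 1.0), the typical "flat" blue of a normal map, maps back to the unperturbed geometric normal.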

Emission mapping

To add even more interest to our scene, we searched for models with parts that emit light. Since the scene is set at night, we thought this would make it more striking.

Enea implemented emission mapping by adding an additional material to the composed material that almost all of our models use. To do this, he modified the material loader already present in the framework. The additional material is a Lambertian material that emits light whose color is taken from the emission texture and does not diffuse any incoming light.

The implementation of the emission mapping can be found in rt/loader/objmat.cpp.
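The emissive layer described above can be sketched as follows. This is a simplified stand-in, not the framework's actual classes: `RGBColor`, `Texture`, and `EmissiveMaterial` are illustrative names. The key point is that the layer returns the emission texture's color as emitted radiance while reflecting nothing, so it adds light without diffusing any.

```cpp
#include <cassert>

// Hypothetical sketch of a non-diffusing emissive material layer.
struct RGBColor { float r, g, b; };

struct Texture {                     // stand-in for a texture lookup
    RGBColor color;
    RGBColor sample(float /*u*/, float /*v*/) const { return color; }
};

struct EmissiveMaterial {
    Texture emission;
    // Emitted radiance comes straight from the emission map.
    RGBColor getEmission(float u, float v) const { return emission.sample(u, v); }
    // The layer reflects nothing: incoming light is not diffused.
    RGBColor getReflectance() const { return {0.0f, 0.0f, 0.0f}; }
};
```

In a composed material, this layer's emission is simply added to the contributions of the other layers.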


BVH with SAH

Given that we wanted to use an area light (which brings the need for supersampling) and volume rendering, we had to optimise the framework as much as we could. For this reason we implemented our BVH using the surface area heuristic (SAH).

Skender implemented the BVH with SAH, which brought the rendering time down significantly.

The implementation of the BVH can be found in rt/groups/bvh.cpp.
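The heart of the SAH is a cost function that scores each candidate split by the surface areas of the two child boxes, weighted by their primitive counts; the builder keeps the split with the lowest cost. The following is a sketch under assumed traversal/intersection cost constants, with illustrative names rather than the framework's:

```cpp
#include <cassert>
#include <cmath>

// Minimal sketch of the SAH cost used to choose BVH splits.
struct AABB { float min[3], max[3]; };

float surfaceArea(const AABB& b) {
    float dx = b.max[0] - b.min[0];
    float dy = b.max[1] - b.min[1];
    float dz = b.max[2] - b.min[2];
    return 2.0f * (dx * dy + dy * dz + dz * dx);
}

// SAH cost of splitting a parent node into (left, right):
//   C = cTraverse + cIntersect * (SA(L)/SA(P) * nL + SA(R)/SA(P) * nR)
// i.e. the expected cost of traversing the node and intersecting the
// children, where a child is hit with probability proportional to its
// surface area.
float sahCost(const AABB& parent, const AABB& left, int nLeft,
              const AABB& right, int nRight,
              float cTraverse = 1.0f, float cIntersect = 1.0f) {
    float saP = surfaceArea(parent);
    return cTraverse + cIntersect *
        (surfaceArea(left) / saP * nLeft + surfaceArea(right) / saP * nRight);
}
```

A builder evaluates this cost for candidate split planes along each axis and also compares it against the cost of making the node a leaf.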

Specifications and benchmarks of the image generation.

  • Generated on a machine with 8 virtual cores
  • Time required for thumbnail image generation: 15 minutes
  • Time required for full HD image generation: 3 hours 45 minutes
  • Number of samples per pixel: 100
  • Ray marching steps: 50

Group 1A

Enea Duka

Skender Paturri