The Breathing Lake

Concept

I came up with the concept fairly easily: I wanted to create a beautiful alpine landscape, since I love mountains. The scene evolved over time on its own rather than following a predetermined plan for a specific set of features. I believe this is in part due to the fact that everything in the scene is procedural, except for the colors and some parameters, which were adjusted manually. We had a vague idea of what we wanted, but most of the elements were added by going with the flow.

I enjoy good weather very much, so a glaring sun and a nice blue sky with some small white clouds were a must. The fog also fit in very well as a distance cue, and it provided a smooth transition between the foreground and the background. The lake was more of an afterthought, even though it proved to be one of the most notable elements in the scene. Stefan Lemme's suggestion to add small waves worked very well as a way of adding some movement to the otherwise still lake.

The volumetric cloud over the lake was the final touch. When I tried adding volumetric clouds and heterogeneous fog, most of them did not fit in well, in the sense that one does not usually expect such effects under the weather conditions in the scene. In the end I opted for a small cloud over the lake, reminiscent of a picture of the Saarschleife that I had seen. That is also where the title of the scene comes from, since the cloud over the lake has obviously formed from the evaporation of the lake.

Daniel came up with the idea of adding motion blur to the scene, in order to convey a sense of movement and to further focus the viewer's attention on the foreground. He picked the movement distance very carefully, so as not to produce a strong blur, but rather just a small hint of motion.

As for the motivation behind the technical part: I have always found ray marching fascinating, since it provides great freedom in the choice of intersection geometry. Whether it is fractals, procedural shapes, implicit functions, distance fields, or participating media, it works for almost everything. Needless to say, Íñigo Quílez's work has had a great impact on the techniques I chose for this scene. Initially I even built a small test scene in Shadertoy, an image of which is provided at the bottom of this page. Before implementing the volumetric clouds, I also had to write a small simulation in Shadertoy to get a feeling for how to tweak everything toward the desired end result; you can find that at the bottom of the page as well.
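Since ray marching is central to the whole scene, here is a minimal sphere-tracing sketch in C++. It is not the code used in rendering_comp; the Vec3 type, the example SDF, and all constants are illustrative assumptions.

```cpp
#include <cmath>

// Hypothetical minimal vector type; the project's own math types would be used instead.
struct Vec3 { float x, y, z; };

static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Example distance field: a unit sphere at the origin. Any signed distance
// function (terrain, fractal, implicit surface) can be dropped in here.
static float sceneSDF(Vec3 p) { return length(p) - 1.0f; }

// Sphere tracing: repeatedly step along the ray by the distance returned by
// the SDF, which by construction cannot overshoot the nearest surface.
// Returns the hit distance t, or -1.0f on a miss.
float sphereTrace(Vec3 origin, Vec3 dir, float tMax)
{
    const float epsilon = 1e-3f;            // "close enough" hit threshold
    float t = 0.0f;
    for (int i = 0; i < 256 && t < tMax; ++i) {
        float d = sceneSDF(origin + dir * t);
        if (d < epsilon)
            return t;                       // within epsilon of a surface: hit
        t += d;                             // safe step size
    }
    return -1.0f;                           // no intersection within tMax
}
```

The same loop structure also carries over to participating media: instead of stopping at a surface, one takes fixed-size steps and accumulates density along the ray, which is roughly how volumetric effects such as clouds and heterogeneous fog are typically rendered.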

Scene building

I have added some of our rendered images in chronological order, so that one can get a better idea of the order in which the various features were added and of how each element affected the scene overall.

Features

Below is a list of some of the features we have implemented.

The tags [V] and [D], which stand for Vassillen Chizhov and Daniel Radke respectively, indicate who implemented each feature. [filename.cpp/h] denotes the file in which the feature is implemented, and {function/class} specifies the function or class where it can be found. All of the files are located in the rendering_comp folder.

1. Terrain

2. Lake

3. Sky/Sun/Fog

4. Volumetric cloud

5. Other

Third party material and references

The hashing functions hash12_iq0 and hash11_iq0, as well as the fake 3D hash integrated into the 3D value noise, are from Íñigo Quílez. However, these kinds of functions seem to be fairly generic and widely known in the demoscene community, and usually no author is credited for the original idea of a fract(bigNumber*fract(smallNumber*x)) hashing function. We are also using the random() provided in core/random.h from the course.
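For illustration, here is what such a hashing function looks like in C++. The constants and the fold from two dimensions into one are arbitrary choices, and this sketch is not the project's hash12_iq0/hash11_iq0 code.

```cpp
#include <cmath>

// GLSL-style fractional part: fract(x) = x - floor(x).
static float fract(float x) { return x - std::floor(x); }

// 1D -> 1D hash in the fract(bigNumber * fract(smallNumber * x)) style.
// The two constants are arbitrary "magic numbers"; many similar pairs work,
// which is why no single author is usually credited for the idiom.
float hash11(float x)
{
    return fract(43758.5453f * fract(0.1031f * x));
}

// 2D -> 1D hash: fold the two coordinates into one value with arbitrary
// weights, then apply the same idiom.
float hash12(float x, float y)
{
    return hash11(x * 127.1f + y * 311.7f);
}
```

The references we have used for this project are: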

Additional comments

Both the 1920x1080 and 480x270 images were rendered with 4x4x4 jittered samples: 4 jittered along x, 4 along y, and 4 along t, for a total of 64 samples per pixel. The low-resolution image took about 10 minutes to render and the high-resolution image about 2 hours and 30 minutes, both on an Intel Core i5-4300U. Originally we had planned to render the images with shadows, but rendering the low-resolution image with shadows took about 3 hours, since a considerably smaller step size was required for sufficiently accurate intersections, so we gave up on that idea.
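As a sketch of what such a sampling pattern looks like (hypothetical code, not the project's sampler):

```cpp
#include <random>
#include <vector>

struct Sample { float x, y, t; }; // subpixel offsets and shutter time, all in [0,1)

// Stratified ("jittered") sampling: the [0,1)^3 domain over (x, y, t) is split
// into an n*n*n grid, and one uniformly random point is drawn inside each cell.
// With n = 4 this yields the 4x4x4 = 64 samples per pixel mentioned above;
// jittering along t is what produces the motion blur.
std::vector<Sample> jitteredSamples(std::mt19937& rng, int n)
{
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    std::vector<Sample> samples;
    samples.reserve(static_cast<size_t>(n) * n * n);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            for (int k = 0; k < n; ++k)
                samples.push_back({(i + uni(rng)) / n,
                                   (j + uni(rng)) / n,
                                   (k + uni(rng)) / n});
    return samples;
}
```

Below is the low-resolution image rendered with shadows (4x4x8 samples):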

Here is also the initial image that I created in Shadertoy as a proof of concept, before starting on the main image:

A few images of clouds that I created in Shadertoy before implementing the volumetric clouds for the main image, and a link to a "real-time" (depending on your GPU) demo on Shadertoy (it may crash your browser if your GPU is not powerful enough): clouds

Participants