Friday, July 29, 2011

White balance

When implementing the fog mentioned in the previous post, I observed a weird thing happening: the fog wasn't white, as I expected, but had a dirty beige tint making it look a bit like smog. But since the implementation didn't use different absorption and scattering coefficients for the RGB components, the color of the sunlight shouldn't have been modified, so I thought it was a bug and put it aside until most of the other issues were solved.
But then, after inspecting all the code paths, I came to the only possible conclusion: the computation was right, and the problem had to be in the interpretation. So I tried to convince myself that the fog must be white and the tint actually wasn't there. Almost made it, too.



But the machine coldly asserted that the color wasn't white either. It didn't bother with any hints as to why, though.
Apparently the incoming light scattering on the fog particles was already this color, even though the sun color had not been modified in any way, unlike in the previous experiments.

Interpretation?

The thing is that sunlight really does get modified a bit by the time it arrives at the planet surface. The same mechanism that is responsible for the blue sky causes this: a small part of the blue light (and a smaller part of the green light too) gets scattered away from the sun ray. What comes down here has a slightly shifted spectrum.
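Just to illustrate the magnitude, here's a tiny standalone sketch (the wavelengths and the optical depth are illustrative numbers, not the engine's values): the Rayleigh scattering coefficient grows roughly as 1/λ⁴, so extinction along the sun ray removes blue faster than green, and green faster than red.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative sketch: per-channel Beer-Lambert extinction of direct sunlight.
// Rayleigh scattering strength grows as ~1/lambda^4, so blue is removed from
// the ray faster than green, and green faster than red.
int main()
{
    const double lambda[3] = { 680e-9, 550e-9, 440e-9 }; // R,G,B wavelengths [m]

    // Scattering coefficients relative to the red channel.
    double beta[3];
    for (int i = 0; i < 3; ++i)
        beta[i] = std::pow(lambda[0] / lambda[i], 4.0);

    // Hypothetical optical depth of the red channel along the sun ray; it
    // grows with path length, i.e. as the sun approaches the horizon.
    const double tauRed = 0.15;

    printf("transmitted sun color (R,G,B): ");
    for (int i = 0; i < 3; ++i)
        printf("%.2f ", std::exp(-beta[i] * tauRed)); // Beer-Lambert: exp(-tau)
    printf("\n"); // ~0.86 0.70 0.42 - a slightly warm, beige-ish tint
}
```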
But how come we see the fog white in real life?
Turns out, everything is fake.

The way we perceive colors is a purely subjective interpretation of a part of the electromagnetic spectrum.
And since it is easier for the brain to orient itself in the environment when its sensor readings stay stable, it is also simpler to stick with constant properties on objects. Our brain "knows" that a sheet of paper is white, and so it will make it appear white under wildly varying lighting conditions. This becomes apparent when you use a digital camera without adjusting the white balance - the results will be ugly.

So basically that's why we have to implement automatic white balancing, at least until we all have full-surround displays and our brains magically adapt by themselves. By the way, playing in fullscreen in a dark room with uncorrected colors slowly makes the brain adapt too.




Implementation

Our implementation tries to mimic what perception actually does. By definition, a white sheet appears white under a wide range of lighting conditions. So we run a quick computation on the GPU, reusing the existing atmospheric code, that determines what light reflects off a white horizontal surface. That light has two components: direct sunlight, which arrives at an angle and whose illuminating power diminishes as the sun recedes from the zenith, and the aggregated light from the sky. Once this compound color is known, we could perform the color correction as a post-process, but there's another way: adjusting the color of the sun so that the resulting surface color is white. This has the advantage of not affecting performance at all, since the sun color is already part of the equation.
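As a rough illustration of the principle (a sketch under assumed names - skyIrradiance, the max-channel normalization and the epsilon are my choices, not the actual engine code):

```cpp
#include <algorithm>

struct float3 { float r, g, b; };

// Sketch of the principle, not the engine code: compute what a perfectly
// white horizontal surface reflects (direct sun attenuated by the zenith
// angle, plus the aggregated sky light), then rescale the sun color so
// that reflected color comes out neutral.
float3 white_balance_sun(float3 sunColor,
                         float3 skyIrradiance,  // aggregated light from the sky
                         float  cosSunZenith)   // cos of the sun's zenith angle
{
    float cosz = std::max(cosSunZenith, 0.0f);

    // Compound color reflected off a white horizontal surface.
    float3 surf = { sunColor.r * cosz + skyIrradiance.r,
                    sunColor.g * cosz + skyIrradiance.g,
                    sunColor.b * cosz + skyIrradiance.b };

    // Scale each channel so the surface color becomes neutral, keeping the
    // brightest channel fixed. Since the sun color already enters the
    // lighting equation anyway, the correction itself costs nothing per frame.
    float m = std::max({ surf.r, surf.g, surf.b });
    return { sunColor.r * m / (surf.r + 1e-6f),
             sunColor.g * m / (surf.g + 1e-6f),
             sunColor.b * m / (surf.b + 1e-6f) };
}
```

In reality the sky term itself depends on the sun color, so a correction like this settles over a few frames rather than in a single step.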

While this algorithm doesn't mimic human perception precisely (the actual process is more complex and depends on other factors), it seems to be pretty satisfactory, though I expect further tuning.

Some of the properties: it extends the period of the day that appears to have "normal" lighting, and removes the unnatural greenish tint from the sky:


During the day it compensates for the brownish light color by making blue things bluer. Can't say the old colors were entirely bad, though.





So long, and thanks for all the fish

Thursday, July 21, 2011

Fog and dust

In addition to the existing atmospheric model, which already accounts for aerosol particles in the air, we have also been working on incorporating ground fog and dust. It is defined by several parameters that determine its density, light-scattering properties and boundary altitude. Shaders then compute the resulting attenuation and scattering of sunlight for terrain and objects. The code is similar to the code computing the optical properties of water, using different values and omitting the upper reflective layer.
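The core of such a computation can be sketched like this (a minimal, illustrative version - the parameter names and the single constant-density layer are my simplifications; the actual shaders integrate the density along the eye ray up to the layer boundary):

```cpp
#include <cmath>

struct float3 { float r, g, b; };

// Minimal fog sketch in C++ (mirrors what a shader would do).
// A constant-density layer is assumed below the fog's boundary altitude.
float3 apply_fog(float3 surfaceColor,
                 float3 inscatterColor,  // color of light scattered on fog particles
                 float  pathInFog,       // length of the eye ray inside the fog layer
                 float  density,         // extinction coefficient
                 float  scattering)      // 0..1, fraction of extinction that scatters
{
    // Beer-Lambert transmittance along the path through the fog.
    float t = std::exp(-density * pathInFog);

    // Low 'scattering' gives a dust-like look: light is absorbed rather
    // than redirected toward the eye, darkening the terrain below.
    float s = (1.0f - t) * scattering;

    return { surfaceColor.r * t + inscatterColor.r * s,
             surfaceColor.g * t + inscatterColor.g * s,
             surfaceColor.b * t + inscatterColor.b * s };
}
```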






Viewing valleys of fog from a greater distance, illuminated by the evening sun.



When the amount of scattering is lowered, one gets the appearance of dust. Thicker layers of dust or mist can also plunge the terrain below into darkness.


There are still several things to be done - currently the fog settings act globally, covering the whole planet in a veil of mist. There's no modulation yet that would give the fog a nicer, non-uniform look.

Ultimately, fog (or dust) should appear dynamically according to a probabilistic model describing the chances of it forming at a given place on the planet (climate, precipitation) at a given time of day and year. Or using a real-time weather report feed.

Wednesday, July 13, 2011

Alien planet Earth

Rendering our planet "alienized", using a different set of basic materials for the fractal mixer, with changed parameters for the atmosphere, sun and water.

Scattering of light in the atmosphere determines both the color of the sky and the color of sunsets. We see a blue sky because blue light is more likely to bounce off the air molecules than the green, and even more so than the red component of sunlight. As the light from the sun travels through the atmosphere above us, some of it gets scattered away from the ray and towards our eyes. The same effect is responsible for red sunsets - as the sun sets, its light has to travel a longer way through denser layers of the atmosphere. By the time it reaches us, most of the blue and green light has been scattered away from the ray, leaving only the most persistent red component.

This effect is simulated in Outerra, and so we are able to play with it. What if the atmosphere consisted of different gases and the scattering characteristics were different?
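The gist of the change can be shown with relative per-channel coefficients (illustrative numbers roughly following the 1/λ⁴ ratios for the Earth-like case; the engine's actual parameters differ):

```cpp
struct float3 { float r, g, b; };

// Relative per-channel scattering coefficients (illustrative values).
// Earth-like air, roughly following 1/lambda^4: blue scatters the most.
const float3 earthBeta = { 1.0f, 2.3f, 5.7f };   // R, G, B

// An "alien" gas mix where green scatters the most: the sky turns green,
// while the direct sunlight, drained of green along the way, shifts
// toward orange/red hues.
const float3 alienBeta = { 1.0f, 5.7f, 2.3f };
```

The same coefficients drive both the in-scattering that colors the sky and the extinction that tints the direct sunlight, which is why swapping them changes both at once.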

In the following video we show planet Earth "alienized". Its atmosphere scatters green light the best, which you can see not only in the sky itself but also on the shaded parts that are not lit by the sun directly, only by a portion of the sky.
The sun has acquired an orange shade, visible mainly near the horizon (the sun itself is too bright, so looking at it directly saturates the color to white).

The absorption of light in the water has been altered as well. Normally, red light gets only so far in water before it almost entirely disappears. Here the medium absorbs the green and blue light instead, letting the red penetrate into the depths. Of course, since the water surface largely reflects the sky at shallow angles, the ocean still appears green in the distance.
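In coefficient terms the swap looks like this (made-up per-meter values, purely for illustration):

```cpp
#include <cmath>

// Made-up per-meter absorption coefficients for illustration (R, G, B).
// Earth-like water absorbs red the fastest; the altered medium absorbs
// green and blue instead, letting red light reach into the depths.
const float earthAbsorb[3] = { 0.45f, 0.07f, 0.03f };
const float alienAbsorb[3] = { 0.03f, 0.45f, 0.30f };

// Beer-Lambert transmittance of one channel after 'depth' meters of water.
float transmit(float absorb, float depth) { return std::exp(-absorb * depth); }
```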

At the end there's also a short sequence with a red-orange atmosphere.


Here are some screens showing it under various settings:

http://www.outerra.com/shots/alien/alien1.jpg

Milky water & yellow skies:

http://www.outerra.com/shots/alien/alien3.jpg

Violet atmosphere:

http://www.outerra.com/shots/alien/alien4.jpg

No atmosphere (or no atmospheric scattering). This is what you'd get for example on the Moon:



http://www.outerra.com/shots/alien/alien5.jpg

Sunday, July 3, 2011

Book: 3D Engine Design for Virtual Globes

3D Engine Design for Virtual Globes is a book by Patrick Cozzi and Kevin Ring describing the essential techniques and algorithms used in the design of planetary-scale 3D engines. It's interesting to note that even though virtual globes gained popularity a long time ago with software like Google Earth or NASA World Wind, there wasn't any book dealing with this topic until now.


As the topic of the book is relevant also for planetary engines like Outerra, I would like to do a short review here.
I was initially contacted by Patrick to review the chapter about depth precision, and later he also asked for permission to include some images from Outerra. You can check out the sample chapters, for example the one on Level of Detail.

Behind the simple title you'll find an almost surprisingly in-depth analysis of techniques essential for the design of virtual globes and planetary-scale 3D engines. After the intro, the book starts with the fundamentals: the basic math apparatus and the basic building blocks of a modern, hardware-friendly 3D renderer. The fundamentals conclude with a chapter about globe rendering - the ways of tessellating the globe so it can be fed to the renderer, together with appropriate globe texturing and lighting.

Part II of the book guides you through an area that you cannot afford to neglect if you don't want to hit a wall further along in your design: precision. Regardless of what spatial units you are using, it's the range of detail expressible in the floating-point values supported by 3D hardware that limits you. If you want to achieve both a global view of a planet from space and a ground-level view of its surface, then without handling the precision you'll get jitter as you zoom in, and it soon becomes unusable. The book introduces several approaches used to solve these vertex precision issues, each possibly suited to different areas.
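One of the approaches covered there, rendering relative to the eye, can be sketched roughly like this (a simplification with my own names, not code from the book):

```cpp
// Sketch of camera-relative rendering ("rendering relative to eye"):
// keep world positions in double precision on the CPU, subtract the camera
// position there, and hand only the small offsets to the GPU as floats -
// the catastrophic cancellation happens in double, where there is enough
// precision for it, and never in the 32-bit vertex pipeline.
struct dvec3 { double x, y, z; };
struct fvec3 { float x, y, z; };

fvec3 to_eye_relative(const dvec3& worldPos, const dvec3& cameraPos)
{
    return { float(worldPos.x - cameraPos.x),
             float(worldPos.y - cameraPos.y),
             float(worldPos.z - cameraPos.z) };
}
// The view matrix is then built with zero translation, so vertices near the
// camera keep full float precision no matter where on the planet they are.
```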

Another precision issue that affects the rendering of large areas is the precision of the depth buffer. Because of an old, non-ideal hardware design that reuses the values from the perspective division for the depth values it writes, depth buffer issues show up even in games with larger outdoor levels. In planetary engines that also want human-scale detail, this problem grows beyond all bounds. The chapter on depth buffer precision compares several algorithms that more or less solve this problem, including the algorithm we use in Outerra - the logarithmic depth buffer. Who knows, maybe one day we'll get direct hardware support for it, as per Thatcher Ulrich's suggestion, and it will become a thing of the past.
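For the curious, the basic form of the logarithmic mapping is simple (a sketch of the commonly published formulation; the constant C and its default are illustrative):

```cpp
#include <cmath>

// Logarithmic depth: remap the view-space distance z in (0, zfar] to [0, 1]
// logarithmically instead of the default hyperbolic 1/z mapping, spreading
// the precision much more evenly across planetary distance ranges.
// C trades resolution near the camera against resolution far away.
float log_depth(float z, float zfar, float C = 1.0f)
{
    return std::log(C * z + 1.0f) / std::log(C * zfar + 1.0f);
}
// In practice the value is written from the shader instead of the hardware
// default, at the cost of some interpolation caveats the chapter discusses.
```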

The third part of the book concerns the rendering of vector data in virtual globes, used to render things like country boundaries or rivers, or polygon overlays highlighting areas of interest. It also deals with the rendering of billboards (marks) on terrain, and the rendering of text labels on virtual globes.

The last chapter in this part, Exploiting Parallelism in Resource Preparation, deals with an important issue popping up in virtual globes: utilizing parallelism in the management of content and resources. Being able to load data in the background without interfering with the main rendering is one of the crucial requirements here.
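At its core this is the classic producer/consumer split; a toy sketch of the idea (my own, not code from the book):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Toy sketch of background resource loading: a loader thread prepares
// resources while the render thread only ever does a quick, non-blocking
// check for finished work, so rendering never stalls on I/O.
template <typename T>
class LoadQueue {
public:
    void push(T item) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(item)); }
        cv_.notify_one();
    }
    bool try_pop(T& out) {            // render thread: never blocks
        std::lock_guard<std::mutex> lk(m_);
        if (q_.empty()) return false;
        out = std::move(q_.front()); q_.pop();
        return true;
    }
    T wait_pop() {                    // loader thread: sleeps until work arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this]{ return !q_.empty(); });
        T out = std::move(q_.front()); q_.pop();
        return out;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};
```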

The last part of the book talks about the rendering of massive terrains in a hardware-friendly manner: the representation of terrain, preprocessing, and level of detail. The two major rendering approaches, geometry clipmapping and chunked LOD, each have a dedicated chapter, together with a comparison. Of course, each chapter also comes with a comprehensive list of external resources.


We've received many questions from people who wanted to know how we started programming our engine, what problems we encountered, or how we solved this or that. Many of them I can now direct to this book, which really covers the essential things one needs to know here.