Smooth shadows for Geometry Dust Clouds

EDIT: Further improvements

Here’s what my super early prototype of the “always on the road, always driving” gameplay looks like in gif form right now (not many skills or much fun yet, but it works; the newest shadows aren’t in this gif yet):

[Gif: gameplay prototype]

… but now let’s return to the topic!

 

Hi guys,

Today I want to talk about a little rendering feature I built lately in a few spare hours. The problem presented itself like this: the dust clouds from the car chases are really thick, but they cast no shadow, which is unrealistic and weakens the sense of volume. I’ll use “smoke” and “dust” interchangeably in this post, and I’ll try to make this as easy to understand as possible.

[Gif: dust cloud shadows]

So I implemented my own solution to have shadows which needed to be:

  1. Somewhat plausible
  2. Soft
  3. With an alpha value (not just a binary “there is a shadow” or “there is no shadow”, because the smoke/dust is not opaque either)
  4. Performant.

We have some really thick dust clouds. Actually I don’t even use particles, but straight-up geometry.
It is actually very similar to the explosion effect I adapted, modified and ported to HLSL from the original (brilliant) WebGL sample found here: https://www.clicktorelease.com/blog/vertex-displacement-noise-3d-webgl-glsl-three-js

Follow the original creator for the WebGL implementation here: https://twitter.com/thespite

[Gif: explosion effect]

The smoke/dust effect works similarly in many ways but differs in others; for example, it is affected by sunlight and environmental lights and uses different normals for the light calculation. It also has a much lower resolution/vertex count, obviously. Here is an older gif where the red enemies circle around the player. It shows the dust clouds pretty well.

[Gif: red enemies circling the player, kicking up dust clouds]

But it still uses the same principle as the explosion effect:

We take a sphere (projected as a circle below), ideally an icosphere (where the triangles all have roughly the same size), we create a noise map (which can be animated; for more info just check out the original tutorial linked above), and we extrude the vertices along their normals (outward-facing vectors) by the value we read for their position in the noise map.

[Image: sphere with vertices extruded along their normals by noise values]
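As a rough sketch of that vertex displacement in HLSL (the parameter names and the noise function here are illustrative, not the actual shader):

// Sketch of the noise-based vertex displacement (illustrative names).
float4x4 WorldViewProjection;
float NoiseScale;   // assumed: frequency of the noise
float NoiseAmount;  // assumed: maximum extrusion distance
float Time;         // assumed: animates the noise over time

float4 DisplaceVS(float3 position : POSITION0, float3 normal : NORMAL0) : SV_Position
{
    // Read the (animated) noise value for this vertex's position.
    float noise = SimplexNoise(position * NoiseScale + Time); // SimplexNoise assumed available
    // Extrude the vertex outward along its normal by the noise value.
    float3 displaced = position + normal * noise * NoiseAmount;
    return mul(float4(displaced, 1.0), WorldViewProjection);
}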

Great!

Shadow Implementation

Prologue

As my smoke and dust are basically deformed geometry spheres, I tried using the basic shadow algorithm for them.

Not only did it not look convincing, there was also a severe flickering/temporal-stability problem: my dust clouds were almost transparent/gone (alpha close to zero), but the shadow was still 100% black. When the dust then faded out completely, the shadow suddenly disappeared.
Obviously this looked really bad, so I knew I had to implement some sort of alpha-dependent shadowing.

Usually, for volumetrics and the like, calculating the amount of light that travels through is pretty complicated, and, most importantly, it is mainly implemented for particles (I’m not talking about offline raytracing here; real-time graphics it is).

So for my case, where I have several somewhat transparent spheres behind each other, there was no existing solution in any real-time engine that I could try to adapt.

My first approach was pretty naïve but followed the right idea, I think. Keep in mind that I am not a professional and only started out with 3D programming roughly one month ago in my free time.

First Implementation

So I want to know how much light passes through my smoke/dust sphere.

[Image: light passing through a smoke sphere]

So what do we need for our shadow calculation? We need to know

  • The general alpha (transparency) of the dust/smoke. If we have 0.5 alpha, only half of the light can pass through.
  • The distance the light has to travel through the volume sphere. Since we assume the smoke/dust is evenly distributed, light that goes right through the middle will lose most of its strength, whereas light that only touches the outer rim of the sphere is preserved almost completely.

The next step is to determine how to store and read our shadow.

Usually a shadow map is a simple depth map from the light’s perspective. For example, it looks like this (click the picture to go to Riemer’s XNA tutorials, they are great!):

[Image: example shadow map from Riemer’s XNA tutorials]

All objects that cast a shadow are rendered from the light’s perspective, but only their depth value is stored. You can see above that the closer the object, the darker it is.

In the pixel shader we transform the current pixel to look at it again from the light’s perspective. We can read the depth of this pixel, which basically corresponds to its distance from the light.

Then, if we compare this value to the value we stored in the shadow map at that position, we can determine whether the pixel is to be darkened or not: either its depth is bigger than the one in the depth map (-> in shadow) or it’s closer (-> no shadow).

If it’s in shadow that basically means this pixel will not receive any light information from this light.
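A minimal sketch of that classic depth comparison in HLSL (the texture, sampler and matrix names here are my own, not from the actual project):

// Classic shadow-map test, sketched in HLSL (illustrative names).
float4x4 LightViewProjection;
Texture2D ShadowMap;
SamplerState ShadowSampler;

float ShadowTest(float3 worldPosition)
{
    // Look at the pixel from the light's perspective.
    float4 lightPos = mul(float4(worldPosition, 1.0), LightViewProjection);
    lightPos /= lightPos.w;

    // Map from [-1,1] clip space to [0,1] texture coordinates.
    float2 uv = lightPos.xy * float2(0.5, -0.5) + 0.5;

    // Compare the pixel's depth against the depth stored in the shadow map.
    float storedDepth = ShadowMap.Sample(ShadowSampler, uv).r;
    return (lightPos.z > storedDepth) ? 0.0 : 1.0; // 0 = in shadow, 1 = lit
}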

So my first idea was:

Render the smoke/dust spheres twice for the light: once the normal way with backface culling (red: only draw triangles that praise the sun) and once with frontface culling (blue: only draw geometry that faces away from the sun).

The procedure to determine the depth is then pretty obvious: just take the difference (green arrow; coincidence? I think not) and you know how far the light had to travel.

[Diagram: front-face (red) and back-face (blue) depths, with the light’s travel distance in green]

There is an obvious first problem: what if our smoke sphere is half in the ground? Then we just take the difference between the ground pixel and our front face.

[Diagram: smoke sphere intersecting the ground]

Some questions remain.

What if we have one smoke sphere in front of another one? Just take the closest one for the front side and the farthest one for the back side, since the light has to travel through both (use a depth-stencil comparison of Less or Greater, respectively).

How do we store alpha? Ah yes.
We know the distance the light has to travel through the geometry, but we don’t know how transparent the smoke is. We can simply calculate that value and give the shadow map one more channel where we store this information.

So what do we do in the actual rendering now?
We calculate the lighting value for a given pixel. If it is already in shadow, do nothing. If it got at least a little bit of light, check: is it further from the sun than the pixel in the front-facing (red) shadow map? If yes, it has to be darkened (in shadow)!

How much? Well, first of all we check whether the pixel is further away than the backside of the smoke as well. If it is, our shadow is (alpha * difference_between_red_and_blue).
Otherwise the shadow is (alpha * difference_between_red_and_pixel).
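In shader terms, that decision looks roughly like this (a sketch with my own names and an assumed density factor; the real shader surely differs):

// Sketch of the first approach's shadow strength (illustrative names).
float DensityScale; // assumed tuning factor for how quickly smoke darkens

// frontDepth = red channel (front faces), backDepth = blue channel (back faces),
// pixelDepth = depth of the shaded pixel, all seen from the light.
float SmokeShadow(float frontDepth, float backDepth, float pixelDepth, float alpha)
{
    if (pixelDepth <= frontDepth)
        return 0.0; // pixel is in front of the smoke: no smoke shadow

    // Distance the light traveled inside the smoke, clamped at the back face.
    float travel = min(pixelDepth, backDepth) - frontDepth;
    return saturate(alpha * travel * DensityScale);
}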

Ok. So what works with that approach?

  • The shadows fall off correctly to the side of the sphere and generally look good in isolated cases.
  • Because our shadow becomes weaker towards the sides, we don’t need any filtering and/or a high-resolution shadow map, since we don’t have hard edges!
  • Smoke can overlap, we don’t care too much.

What doesn’t work then?

  • We render the smoke twice. Not great.
  • Let’s say the sun is low. Its light passes through the front face of the dust cloud of a vehicle close to the sun, but then it travels further and hits a dust trail from a different vehicle further back. The difference between the front face and the back face is now enormous, so the shadow is instantly deep black, even though both dust clouds themselves are fairly transparent: our algorithm assumes the light had to pass through a 100 m thick dust cloud.

Second Approach

Ok, so remember how the smoke/dust geometry is always basically a sphere? Yeah, let’s use that.

We can calculate the depth at each point with the most basic formula in computer graphics.

NdotL.

We take the dot product of our normal and the light direction, and we know how far the light has to travel to reach the halfway point of the sphere.

[Diagram: NdotL determines the depth of the sphere along the light direction]

This is great. We know the depth of the sphere basically without any real performance cost. So we can save a lot of performance by creating only one depth buffer from the front faces again (which we store in our red channel) plus the alpha * (NdotL * sphere_diameter) (which we store in the green channel).
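The pixel shader for that shadow-map pass might look roughly like this (a sketch under my own naming; LightDirection, SphereDiameter and SmokeAlpha are assumed parameters):

// Sketch of the shadow-map pass for the second approach (illustrative names).
float3 LightDirection; // assumed: normalized direction the light travels in
float SphereDiameter;  // assumed: diameter of this smoke sphere
float SmokeAlpha;      // assumed: current transparency of this smoke sphere

float4 SmokeShadowMapPS(float4 lightSpacePos : TEXCOORD0,
                        float3 normal : TEXCOORD1) : SV_Target
{
    // Red: depth of the front face from the light's perspective.
    float depth = lightSpacePos.z / lightSpacePos.w;

    // NdotL approximates how far the light travels inside the sphere.
    float ndotl = saturate(dot(normalize(normal), -LightDirection));
    float thickness = ndotl * SphereDiameter;

    // Green: alpha-weighted thickness.
    return float4(depth, SmokeAlpha * thickness, 0, 0);
}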

Great.

Or is it?

[Screenshot: the NdotL-based shadows, looking acceptable]

Looks pretty acceptable here… but

[Screenshot: a hole in the shadow]

What is this? A hole in our shadow?

It turns out the second shadow has a hole there because there is an almost completely transparent piece of dust in front of it. And since we only store the depth value of the volume closest to the light source, we just store the information of the dust cloud in front, which already has all alpha values very close to zero. Hmmm.

Plus another problem: we can see the sphere-like nature of the clouds very clearly on the ground. Because of the rendering order we can also spot white lines between the shadows: potentially darker shadows are discarded because a bigger sphere is in front and therefore overwrites the other one, even at the edges, where its shadow is almost zero.

We can combat the second problem a bit by blurring the shadow map a little, but the problem remains. (I just blur the green/alpha values once horizontally and once vertically, just like a basic bloom/blur; I wish it could work like that on normal shadow maps :). The red depth-map values must stay sharp, but we push them outwards a little bit so we don’t have visible seams after the alpha blur.)
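The horizontal half of that blur might look like this (a sketch; kernel size, weights and names are all illustrative):

// Sketch of the horizontal blur pass over the green (alpha) channel only.
Texture2D ShadowMap;
SamplerState LinearSampler;
float2 TexelSize; // assumed: 1 / shadow map resolution

float4 BlurAlphaHorizontalPS(float2 uv : TEXCOORD0) : SV_Target
{
    float4 center = ShadowMap.Sample(LinearSampler, uv);

    // Simple 5-tap box blur; a weighted Gaussian kernel works just as well.
    float blurred = 0.0;
    [unroll]
    for (int i = -2; i <= 2; i++)
    {
        float2 offset = float2(i * TexelSize.x, 0);
        blurred += ShadowMap.Sample(LinearSampler, uv + offset).g * 0.2;
    }

    // Keep the red depth value sharp; blur only the green alpha value.
    return float4(center.r, blurred, center.b, center.a);
}

The vertical pass is the same with the offset on the y axis.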

[Gif: shadow map before/after the blur]

You can see the blurred shadow map below. Red is the depth and green is the alpha value we calculate. You can clearly see how the smoke fades out towards the end, where the green gets less and less. But you can also make out the sphere-like nature of the dust clouds.

[Image: the blurred shadow map; red = depth, green = alpha]

What is the solution?

Blending. We blend our values when rendering, but only for our green channel. I simply created a new BlendState (this is how it works in XNA/MonoGame, but it’s really just default DX11 stuff):

// result = source * 1 + destination * BlendFactor, added together per channel.
// BlendFactor = (0, 0.5, 0, 0): red keeps only the newest value,
// green adds the new value on top of half the old one.
_blendStateSmokeShadow.ColorSourceBlend = Blend.One;
_blendStateSmokeShadow.ColorDestinationBlend = Blend.BlendFactor;
_blendStateSmokeShadow.ColorBlendFunction = BlendFunction.Add;
_blendStateSmokeShadow.ColorWriteChannels = ColorWriteChannels.All;
_blendStateSmokeShadow.BlendFactor = new Color(0, 0.5f, 0, 0);

What this does is basically blend only the green values together (red is simply overwritten by the newest draw, while the green alpha values add up). Yay!
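For completeness, the read side in the main lighting pass might then look something like this (again a sketch with assumed names, not the project’s actual shader):

// Sketch of reading the smoke shadow map in the main lighting pass.
float4x4 LightViewProjection;
Texture2D SmokeShadowMap;
SamplerState LinearSampler;

float SmokeShadowFactor(float3 worldPosition)
{
    // Look at the pixel from the light's perspective again.
    float4 lightPos = mul(float4(worldPosition, 1.0), LightViewProjection);
    lightPos /= lightPos.w;
    float2 uv = lightPos.xy * float2(0.5, -0.5) + 0.5;

    float2 smoke = SmokeShadowMap.Sample(LinearSampler, uv).rg;

    // Only darken pixels that lie behind the smoke's front face (red channel).
    if (lightPos.z <= smoke.r)
        return 1.0;

    // Green already holds alpha * thickness, i.e. the shadow strength.
    return 1.0 - saturate(smoke.g);
}

The returned factor would then just be multiplied into the light contribution for that pixel.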

[Screenshot: blended dust cloud shadows without holes]

Ok. We got what we wanted. If you are still with me – here is a short video

What else?

Let’s talk performance, maybe.

[Image: frame timings with the dust cloud shadows enabled]

You may be shocked to see that my FPS drops from 616 in this scene to 450 with the shadows enabled, but the overall cost is pretty fine in my opinion: that’s about a 0.6 ms increase per frame (1000/450 ≈ 2.22 ms vs. 1000/616 ≈ 1.62 ms), which is really not much for the visual improvement, I feel.

Without the blur the FPS increases to 462. Not significant. Plus, the whole thing is not really optimized yet; I think I can make it more performant in the future. But obviously this will be a toggleable option in the game for weaker systems.

This is at 720p with a resolution of 512×512 for the smoke/dust shadow map.

Thanks for reading :)

——————————————————————–
If you liked this kind of article, I put something similar up in video form for my grass (pretty old video, the grass is better now).

Older articles:

April 02 2016
Mar 17 2016
