Shadow Filtering For Pointlights


Dear reader,

I want to write a bit about PCF filtering for omnidirectional lights (see how I just rephrased the title?) because I think this is a desirable thing to implement for a lot of beginners (like me), but I think it has not been written about as thoroughly as I’d like. I would be very happy if you – the reader – would post links to good tutorials in case I missed them.

Introduction

Shadow Filtering in General

Should you be looking for different shadow filtering algorithms, you have come to the wrong place – the right place would be here:
https://mynameismjp.wordpress.com/2013/09/10/shadow-maps/

MJP provides a sample solution with numerous different ways to properly filter your shadows; I highly recommend checking them out (you can find the shader code in this file on GitHub).

Shadow Projection

Most of the tutorials and papers about shadow filtering use spot lights or directional lights as the shadow-casting source, so they only have to sample a simple two-dimensional texture.

However, projecting the view of an omnidirectional light source onto a single texture is not trivial at all. One can do it, for example, with Paraboloid or Dual-Paraboloid projection.

A great resource for that is http://graphicsrunner.blogspot.de/2008/07/dual-paraboloid-shadow-maps.html

You can see one side of such a projection to the right (taken from the link).
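
The mapping itself is compact. Here is a minimal sketch of how one hemisphere could be addressed (the function name and conventions are mine, not taken from the linked article):

//Maps a direction onto the front paraboloid (assumes dir.z >= 0)
float2 GetParaboloidCoordinate(float3 dir)
{
    dir = normalize(dir);
    //the paraboloid reflects the direction into a flat [-1,1] disk
    float2 coord = dir.xy / (1.0f + dir.z);
    //transform from [-1,1] into [0,1] texture space
    return coord * 0.5f + 0.5f;
}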

This sort of projection comes with a number of problems:

  • The ratio of texels per covered mesh area is high in the middle and very low at the edges
  • You will experience edge seams that need covering up
  • You waste a lot of memory with the black areas (1×1 − π×0.5² ≈ 21.46%)
  • Filtering is not trivial

So this approach is often ignored, and cubemap projection is chosen instead (for both shadow mapping and environment mapping).

There is a great tutorial for that here:
https://learnopengl.com/#!Advanced-Lighting/Shadows/Point-Shadows

The basic idea is to render the scene 6 times, each time with a different orientation of the camera (Cubemaps: Wikipedia).

Even better: DirectX and OpenGL combine these 6 textures into an array (TextureCube) and can read hardware-filtered texels with a simple direction vector as an input.

That makes reading out the shadow trivial: you can simply sample the texture with the vector from the light to the pixel’s position.
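
In HLSL, that boils down to something like the following sketch (the names and the depth encoding are my assumptions – here the cubemap stores the distance to the light, normalized by the light’s radius):

TextureCube shadowCube;
SamplerState shadowSampler;

float GetShadow(float3 pixelPosition, float3 lightPosition, float lightRadius)
{
    float3 lightToPixel = pixelPosition - lightPosition;
    //the direction alone selects the right face and texel
    float storedDepth = shadowCube.Sample(shadowSampler, lightToPixel).r * lightRadius;
    float currentDepth = length(lightToPixel);
    const float bias = 0.05f;
    return (currentDepth - bias > storedDepth) ? 0.0f : 1.0f; //0 = in shadow
}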

[Image: sm_middle]

Filtering a cubemap

The good part about cubemaps is that they can be bilinearly filtered by default. That means that if you sample a texel at the very edge of one face, it will be blended with texels from the neighboring face.

However, for shadow filtering we have to compare depth values from neighbors; we don’t just want to blend them. We also want to compare more than just neighboring texels for a softer appearance. (Note: we could do that with Gather() instead of Sample(), but that would only give us 4 texels – see the sketch below.)
[Image: sm_blocky]
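
For completeness, the Gather() route from the note above would look roughly like this (a sketch reusing the assumed names from the previous snippet; cubemap Gather() needs hardware that supports it):

//Gather() returns the 4 texels that bilinear filtering would have used,
//so we can compare each depth individually - but we never see more than
//that 2x2 footprint.
float4 depths = shadowCube.Gather(shadowSampler, lightToPixel) * lightRadius;
//step(a, b) is 1 where b >= a, i.e. 1 for every texel that occludes us
float4 occluded = step(depths, currentDepth - bias);
float occlusion = (occluded.x + occluded.y + occluded.z + occluded.w) * 0.25f;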

The tutorial (learnopengl.com) I’ve linked to deals with that by giving the sampling vector (the pixel-to-light direction) some offsets in all directions, to get several comparison values and smooth the result.

I’ll copy the code just to make clear how it works, but I think the illustrations below help, too.

vec3 sampleOffsetDirections[20] = vec3[]
(
    vec3( 1,  1,  1), vec3( 1, -1,  1), vec3(-1, -1,  1), vec3(-1,  1,  1),
    vec3( 1,  1, -1), vec3( 1, -1, -1), vec3(-1, -1, -1), vec3(-1,  1, -1),
    vec3( 1,  1,  0), vec3( 1, -1,  0), vec3(-1, -1,  0), vec3(-1,  1,  0),
    vec3( 1,  0,  1), vec3(-1,  0,  1), vec3( 1,  0, -1), vec3(-1,  0, -1),
    vec3( 0,  1,  1), vec3( 0, -1,  1), vec3( 0, -1, -1), vec3( 0,  1, -1)
);

float shadow = 0.0;
float bias = 0.15;
int samples = 20;
float viewDistance = length(viewPos - fragPos);
float diskRadius = 0.05;
for(int i = 0; i < samples; ++i)
{
    float closestDepth = texture(depthMap, fragToLight + sampleOffsetDirections[i] * diskRadius).r;
    closestDepth *= far_plane; // undo the mapping from [0;1]
    if(currentDepth - bias > closestDepth)
        shadow += 1.0;
}
shadow /= float(samples);

Here is an old screenshot I took when I used this technique (don’t mind the bad-looking colors etc.):

[Image: sm_old_soft]

This exhibits a great number of issues, and they all come from the simple fact that we use cubemaps, as will become apparent soon.

Let me try to visualize how this implementation works and how it can be improved.

For this animation I chose a top-down view and 4 offset vectors, all of which are part of the full array.

[Animation: sm_sampling1]

You can already see a problem here, where the -y vector and the +x vector sample almost the same point.

Another obvious problem arises in this scenario:
[Image: sm_sampling2]

In this case we sample the same texel 3 times without any information gain.

I made a close-up of the actual shadows, and you can see how the problem manifests:

[Image: sm_old_artifacts2]
Note how the spacing between the different shades is very inconsistent.

This can be helped by calculating the normal and binormal of the sampling vector and offsetting along those instead (see the sketch below); however, it won’t give us what we really want, since with an unlucky configuration and offset size we can still skip texels or sample the same texel numerous times.
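
For illustration, such a basis could be built like this (a sketch that continues the assumed names from the earlier snippets):

//Build an orthonormal basis around the sampling vector and offset inside
//the plane perpendicular to it, instead of using the fixed offset array
float3 dir = normalize(lightToPixel);
//pick a helper axis that is not parallel to dir
float3 up = abs(dir.y) < 0.99f ? float3(0, 1, 0) : float3(1, 0, 0);
float3 tangent = normalize(cross(up, dir));
float3 binormal = cross(dir, tangent);

float shadow = 0.0f;
for (int x = -1; x <= 1; x++)
    for (int y = -1; y <= 1; y++)
    {
        float3 offsetDir = dir + (tangent * x + binormal * y) * diskRadius;
        float storedDepth = shadowCube.Sample(shadowSampler, offsetDir).r * lightRadius;
        shadow += (currentDepth - bias > storedDepth) ? 1.0f : 0.0f;
    }
shadow /= 9.0f;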

We really want to read out the neighboring texels instead of just trying to change our vector a bit and hoping we hit a good sampling point. This is also the way to ensure we can use smoothing on our edge taps (more on that later).

This brings us to a major flaw of the basic TextureCube element in HLSL. We can only sample it with a float3 direction, but we do not know the sampled texel’s position, nor do we have easy access to its neighbors (Texture2D, for example, can use Sample() with an integer offset, or simply Load()).
(Noteworthy: we can use a texture array instead – read here.)

One could use the dot product to get the sampling vectors to behave correctly and always snap one texel into a certain direction, but edges would have to be treated separately, since the offset does not map accurately to the texel of the next face. This is probably a much easier solution than what I did, but I didn’t think of it at the time.

However, when I read this thread – https://www.gamedev.net/topic/657968-filtering-cubemaps/ – it seemed to me like cubemaps aren’t really what I want.

Not using Cubemaps

It was suggested to fit all 6 texture maps onto one big texture instead and then manually create a conversion function that returns the accurate texture coordinate for any given 3D vector.

So that’s what I did, and it might be a solution for you, too.

[Image: sm_result1]
You can see the 6 shadow maps on the left in one big texture strip.

An excerpt from my conversion function:

//vec3 doesn't have to be normalized.
//Translates a world space vector to a coordinate inside our 6x-sized shadow map
float2 GetSampleCoordinate(float3 vec3)
{
    float2 coord;
    float slice;
    vec3.z = -vec3.z;

    if (abs(vec3.x) >= abs(vec3.y) && abs(vec3.x) >= abs(vec3.z))
    {
        vec3.y = -vec3.y;
        if (vec3.x > 0) //Positive X
        {
            slice = 0;
            vec3 /= vec3.x;
            coord = vec3.yz;
        }
        else //Negative X
        {
            vec3.z = -vec3.z;
            slice = 1;
            vec3 /= vec3.x;
            coord = vec3.yz;
        }
    }

    // ... the other directions, Y and Z, are handled analogously

    // a possible precision problem?
    const float sixth = 1.0f / 6;

    //now we are in [-1,1]x[-1,1] space, so transform to texCoords
    coord = (coord + float2(1, 1)) * 0.5f;

    //now transform to the slice position
    coord.y = coord.y * sixth + slice * sixth;
    return coord;
}

You can find the full code in the GitHub solution; this is just to give you a rough idea of how it works.

Looks like my code is working!
[Image: sm_result2]

[Image: sm_texels]

It’s still blocky, but we could easily apply the offset-vector code and it would show the same results as with cubemaps.

But since we are dealing with 2D texture coordinates now, we can plug in any code for shadow filtering (for example, PCF).
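
For instance, a straightforward 3×3 PCF loop on top of the conversion function could look like the sketch below (ShadowMap, PointSampler and texelSize are assumed names; edge handling is ignored for now – more on that next):

Texture2D ShadowMap;       //the 1x6 texture strip from above
SamplerState PointSampler;
float2 texelSize;          //float2(1/width, 1/height) of the whole strip

float PCF3x3(float3 lightToPixel, float currentDepth, float bias)
{
    float2 baseCoord = GetSampleCoordinate(lightToPixel);
    float shadow = 0.0f;
    for (int x = -1; x <= 1; x++)
        for (int y = -1; y <= 1; y++)
        {
            float2 coord = baseCoord + float2(x, y) * texelSize;
            float storedDepth = ShadowMap.SampleLevel(PointSampler, coord, 0).r;
            shadow += (currentDepth - bias > storedDepth) ? 1.0f : 0.0f;
        }
    return shadow / 9.0f;
}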

However, there is still a problem: what if we are at the very edge of a texture block and want to sample the right neighbor? We need another function that checks the offsets. If the texture coordinate plus the offset ends up outside the current projection, we have to translate it to some other texture coordinate.

This, of course, can sometimes be troublesome, because going right on our top view might mean going down on our left side view.
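
One possible way to handle this – a sketch of the idea, not necessarily how my engine does it – is to not offset the 2D coordinate at all: instead, nudge the 3D vector by one projected texel inside the plane of its dominant face and run it through GetSampleCoordinate again. Crossing an edge then automatically lands on the correct neighboring slice, because the dominant axis is re-evaluated:

//faceResolution = texels per cube face side (an assumed parameter)
float2 GetOffsetSampleCoordinate(float3 vec, float2 offset, float faceResolution)
{
    float3 a = abs(vec);
    float m = max(a.x, max(a.y, a.z));
    vec /= m; //the dominant component is now +/-1, the face plane spans [-1,1]

    //axes spanning the face plane; their exact orientation per face does not
    //matter for PCF, we only need to cover the texel neighborhood
    float3 u = (a.x == m) ? float3(0, 1, 0) : float3(1, 0, 0);
    float3 v = (a.z == m) ? float3(0, 1, 0) : float3(0, 0, 1);

    //one texel covers 2 / faceResolution units of the [-1,1] face plane
    vec += (u * offset.x + v * offset.y) * (2.0f / faceResolution);
    return GetSampleCoordinate(vec);
}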

Edge Tap Smoothing

Working with texels allows us to have very smooth transitions, since we know “how far into” the texel we are sampling.

For example, let’s assume we have a center sample and one sample with a 1-texel offset.

[Image: sm_edgetap]

If we are on the very left of the center texel, our right sample is also at the very left of its texel and should therefore not have much impact on the final result, since we “cover” only a very small area of the right texel.

If our center sample is at 0.5, our right sample is at 0.5, too; we then cover half of the right texel’s area, so we weight that sample with 0.5.

You can see a simple illustration of this on the left side, but I admit it might be a bit misleading with the 0 and the 1 values.
Plus, it’s not simply adding the colors; you also have to divide by the total weight, so for a red weight of 0.5 the result would be (yellow + 0.5 × red) / 1.5.
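
In code, the weighting works out roughly like this (a sketch; shadowMapSize, texelSize and SampleCompare – a hypothetical helper that performs one depth comparison and returns 0 or 1 – are assumed names):

float2 texelPos = coord * shadowMapSize; //position in texel units
float2 fraction = frac(texelPos);        //how far into the center texel we are

//a center tap plus one tap to the right: the right tap only covers
//"fraction.x" of a texel, so it is weighted accordingly
float center = SampleCompare(coord);
float right  = SampleCompare(coord + float2(texelSize.x, 0));
float shadow = (center + fraction.x * right) / (1.0f + fraction.x);
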
Regardless, here is the actual result on shadow maps (3×3 samples for PCF):

[Image: sm_pcf]

Result

Of course, by increasing the number of samples we can get smoother results, but there really is a limit on how soft our shadows can become with PCF before performance tanks.

[Image: sm_result3]

For soft shadows, VSM or ESM might be a better choice (I implemented them originally for spot lights here: Deferred Engine Progress pt2).

With PCF working correctly, one could also implement Percentage-Closer Soft Shadows (here is a paper from NVIDIA: Percentage-Closer Soft Shadows), and I might explore that in the future, but the implementation is pretty expensive by default.

So yeah, I hope you liked the read. I’m not a pro or anything, so advice and tips are greatly appreciated.

You can find the project here: https://github.com/UncleThomy/DeferredEngine
