Texture (Object) Space Lighting

I have been thinking about different implementations of sub-surface scattering, and I recalled that before it was done in screen space it was done in texture space.

I’ve looked it up and found this gem from ATI in 2004:

https://developer.amd.com/wordpress/media/2012/10/D3DTutorial_Skin_Rendering.pdf

[Image: texturespace1.png]

The idea is not to render to the pixels covered by the model on screen (view-projection space), but instead to the model's UV (texture) space.

Shortly after starting to work on it, I found out that this technique is actually used in some game production right now. 

If you want to hear from the people who actually work with this technique, have a look at this amazing presentation from Oxide Games.

http://www.oxidegames.com/2016/03/19/object-space-lighting-following-film-rendering-2-decades-later-in-real-time/

Furthermore, here are some more thoughts on it:

http://deadvoxels.blogspot.de/2016/03/everything-old-is-new-again.html

Think of it this way: we render a new texture for the model, and for each pixel we calculate where that pixel would be in the world and how it reacts to light, shadows, etc. Then we render the mesh normally to our view-projection space and apply the texture like we would any other texture – voilà – the final lit mesh is on screen!

I thought this was a very interesting way to do things, in part because it is so fundamentally different from our normal way of 3D rendering.

For the sub-surface scattering approximation we would then blur the final lighting; if you want to learn more about that, follow the link provided above.

So I decided to make a fast and dirty™ implementation of the general idea in MonoGame.

Here is the result:

Benefits (naive approach)

  • Potentially the biggest benefit of this method is that the final image is very stable.
  • If we create mip-maps for our lighting texture, we can reap the benefits of some 2000s magic in the form of texture filtering when drawing the model. That means shading aliasing is effectively no longer an issue.
  • Also, since this naive approach shades the model from all sides, we basically don’t have to shade it again if the lighting doesn’t change (it is essentially pre-rendered), and if the lighting only changes a little we can get away with shading at a lower frame rate (not particularly useful without asynchronous compute, which comes with DX12 / Vulkan).
  • We can also reuse this texture for copies of the model, for example when rendering reflections. Since the lighting calculations are already done, our reflections become very cheap. (Disclaimer: specular lighting should in theory be recomputed for the reflected view, but it is plausible enough in games not to do so – after all, screen-space reflections are generally accepted by players as plausible.)
  • We can also enforce a shading budget that is limited by the resolution of the calculated texture.

Downsides (naive approach)

  • We waste a lot of resources on shading stuff that we can’t even see, like backfaces.
  • The rendering quality is only as good as the resolution of our texture: if we zoom way out we render too much detail; if we zoom in, the shading becomes blocky.
  • As the number of meshes increases, so does our memory consumption.
  • We can’t easily apply screen space effects.
  • Our models cannot have tiling textures (every point on the surface needs its own unique spot in UV space).

The road to viability

The downsides are pretty severe, so let’s address them.

First of all, we want to limit our memory consumption, so instead of giving each mesh its own texture, let’s use one big texture for all meshes. For each mesh we now have to store where its lighting tile is placed in the big texture. We can assign a new address each frame to all meshes in the view frustum.

(For example, we tell our first mesh to cover the first 1024×1024 square in our 8192×8192 texture, so it knows it can write to / read from [0, 0] to [0.125, 0.125].)
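In shader terms the address is just an offset and a scale applied to the mesh’s own UVs. A minimal sketch, with made-up parameter names that the CPU would fill in per mesh:

// Hypothetical per-mesh parameters, set by the CPU each frame
float2 AtlasOffset; // e.g. (0, 0) for the first mesh
float2 AtlasScale;  // e.g. (0.125, 0.125) for a 1024×1024 tile in an 8192×8192 texture

// Wherever the mesh's own UVs are used, remap them into the mesh's tile.
// The texture-space pass writes to this address, the final pass reads from it.
float2 ToAtlasUV(float2 meshUV)
{
    return AtlasOffset + meshUV * AtlasScale;
}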

Then we should obviously scale our texture resolution per mesh depending on its distance to the camera – ideally each screen pixel should cover one texel. For this it’s important that the model’s UV distribution is uniform!
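As a rough sketch of what “one pixel per texel” could mean in practice – this would run on the CPU when the tiles are assigned, and all names here are made up:

// Estimate the mesh's projected size in pixels from its bounding-sphere radius,
// then pick the nearest power-of-two tile resolution (clamped to a sane range).
float projectedPixels = (boundingRadius / (distanceToCamera * tan(fieldOfViewY * 0.5f))) * screenHeightInPixels;
float tileResolution  = clamp(exp2(ceil(log2(projectedPixels))), 64.0f, 2048.0f);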

Then, to avoid shading invisible texels, we have to add one pass up-front where we draw all meshes with their virtual texture address as the output color.

(For example, if our mesh’s tile in the virtual texture sits at [0, 0]–[0.125, 0.125] and the pixel we draw has UV coordinates of [0.5, 0.5], we output the color [0.0625, 0.0625].)

We can then check each pixel’s color, which is also its address, and mark the texel in the virtual texture at this very position, so we know it has to be shaded. This step would best be done with a compute shader, which unfortunately is not available in MonoGame.
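A pixel shader for this pre-pass could look roughly like the sketch below (reusing the made-up AtlasOffset / AtlasScale parameters from above, and the vertex output struct from the implementation further down):

// Visibility pre-pass (sketch): draw every mesh to the screen as usual, but
// output the virtual-texture address of each pixel instead of a color.
float4 VisibilityPrePass_PixelShader(DrawBasicMesh_VS input) : SV_TARGET
{
    float2 atlasUV = AtlasOffset + input.TexCoord * AtlasScale;
    return float4(atlasUV, 0, 1);
}
// A compute shader would then read these addresses back and mark the referenced
// texels of the virtual texture as "needed", so only those get shaded.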

Finally, some parts of our model may be closer to the camera than others (think of terrain, for example), or some may not be visible at all when we have huge models, so it would be good to split the model into smaller chunks or use per-region texture LODs, like in more advanced virtual texturing / “megatexture” schemes.

Benefits (improved approach)

  • Fixed memory and shading budget, determined by the master texture size.
  • Consistent quality.
  • Less wasteful shading, better performance.
  • We can scale quality down by undersampling and it still looks okay-ish, sort of like blurry textures.

Downsides (improved approach)

  • It becomes pretty complicated; the time from nothing to first results to a first robust implementation is many times longer than for the usual approaches.
    • Which is why I didn’t do it. Also, MonoGame doesn’t support compute shaders, and I wanted to stick with it.
  • If virtual textures are used, the implementation woes increase by another mile.
  • Since we only shade visible texels, we can’t reuse as much.

 

Implementation Overview

I went for the naive implementation and got it to work in a very short amount of time.

We have to render in two passes.

The first one is the texture (object) space pass. It works basically just like normal forward rendering with a small change to our vertex shader:

Our Output.Position changes from

Output.Position = mul(input.Position, WorldViewProj);

to

Output.Position = float4(input.TexCoord.x * 2.0f - 1.0f, -(input.TexCoord.y * 2.0f - 1.0f), 0, 1);

which may seem familiar if you have ever worked with screen-space effects. It’s basically the mapping of texture coordinates (which live in [0,1]×[0,1]) to normalized device coordinates, which are [-1,1]×[-1,1] with the y axis flipped.

In the video above you can see this output texture in the bottom left corner.
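For completeness, here is a minimal sketch of what the whole texture-space vertex shader might look like. The struct and parameter names are my own assumptions – the point is simply that everything the pixel shader needs for lighting (world position, normal, …) still has to be passed along as usual:

float4x4 World;

struct TextureSpace_VSOut
{
    float4 Position      : SV_POSITION;
    float3 WorldPosition : TEXCOORD0;
    float3 Normal        : NORMAL0;
    float2 TexCoord      : TEXCOORD1;
};

TextureSpace_VSOut TextureSpace_VertexShader(float4 Position : POSITION0,
                                             float3 Normal   : NORMAL0,
                                             float2 TexCoord : TEXCOORD0)
{
    TextureSpace_VSOut Output;

    // Place the vertex at its UV location in the lighting texture instead of on screen
    Output.Position = float4(TexCoord.x * 2.0f - 1.0f, -(TexCoord.y * 2.0f - 1.0f), 0, 1);

    // The pixel shader still lights the surface in world space, so pass along
    // whatever a normal forward-rendering pixel shader would need
    Output.WorldPosition = mul(Position, World).xyz;
    Output.Normal        = mul(Normal, (float3x3)World);
    Output.TexCoord      = TexCoord;

    return Output;
}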

In the final pass we render our mesh with a traditional vertex shader again:

DrawBasicMesh_VS DrawBasicMesh_VertexShader(DrawBasicMesh_VS input)
{
    DrawBasicMesh_VS Output;
    // Back to the usual transform into view-projection space
    Output.Position = mul(input.Position, WorldViewProj);
    Output.TexCoord = input.TexCoord;
    return Output;
}

We just need position and texture UVs as input, since our pixel shader is even simpler and just reads the texture drawn in the previous pass (a one-liner):

return Texture.Sample(TextureSamplerTrilinear, input.TexCoord);

Done.
In the video above you can see a new GUI I’ve used; you can find it here:

https://github.com/RonenNess/GeonBit.UI

The implementation was super easy and quick. Check it out if you are looking for a GUI to use in MonoGame!
