Deferred Engine Progress part2

So a few basic features made it in. I'll go through them in the order of implementation.

deferred_stages

In the .gif above you can see the render targets used for the rendering:

  • Albedo (base texture/color)
  • World Space Normals
  • Depth (contrast enhanced)
  • Diffuse light contribution (contrast enhanced)
  • Specular light contribution (contrast enhanced)
  • skull models for the “hologram” effect (half resolution, grayscale – saved only as Red)
  • composed deferred image + glass dragon rendered on top

Variance Shadow Mapping

So yeah, basically I had only used PCF shadow mapping before, but since this is a nice Sponza test scene I really wanted to have very soft shadows.

A possible solution for my needs is Variance Shadow Maps. Here is a link to an NVIDIA paper about them: http://developer.download.nvidia.com/SDK/10/direct3d/Source/VarianceShadowMapping/Doc/VarianceShadowMapping.pdf
and the original paper/website by Donnelly and Lauritzen:
http://www.punkuser.net/vsm/

You can find a detailed description of the process in these papers. The short idea: store both depth and depth squared in the shadow map. From these two moments we can calculate the variance:

σ² = E(x²) − E(x)²

Chebyshev’s Inequality states that

P(x ≥ t) ≤ pmax(t) = σ² / (σ² + (t − μ)²)

vsm_shadowmap

Which basically means: the probability that x is greater than or equal to t is at most pmax, and that is where our variance comes in. In our case x is the occluder depth distribution stored in the shadow map (described by the two moments) and t is the depth of the pixel we are currently shading.

So the cool part is that by calculating this number we don't get a binary yes/no shadow, but a gradient.

The biggest benefit of VSMs is that they can be blurred at the texture level (because we are no longer dealing with binary responses), which is much cheaper than taking many samples during the lighting stage (as we normally would with PCF).
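Here is a minimal sketch of both sides of the technique in HLSL, roughly how it fits together (names like ShadowMapSampler are placeholders, not the exact code from the engine):

```hlsl
// Writing the shadow map: store depth and depth squared (the two moments).
float4 StoreMomentsPS(float4 position : TEXCOORD0) : COLOR0
{
    float depth = position.z / position.w;      // light-space depth of the caster
    return float4(depth, depth * depth, 0, 1);  // E(x) and E(x^2)
}

// Reading the (optionally pre-blurred) shadow map: Chebyshev upper bound on how lit the pixel is.
sampler ShadowMapSampler : register(s0);

float ChebyshevUpperBound(float2 moments, float t)
{
    // Fully lit if the receiver is in front of everything in this texel's neighbourhood
    if (t <= moments.x)
        return 1.0f;

    // variance = E(x^2) - E(x)^2, clamped to avoid numerical problems
    float variance = max(moments.y - moments.x * moments.x, 0.00002f);

    // pmax = variance / (variance + (t - mean)^2)
    float d = t - moments.x;
    return variance / (variance + d * d);
}

float GetShadowFactor(float2 shadowTexCoord, float receiverDepth)
{
    float2 moments = tex2D(ShadowMapSampler, shadowTexCoord).rg;
    return ChebyshevUpperBound(moments, receiverDepth);    // a gradient, not a binary result
}
```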

vsm_shadowMapping

Another cool little trick is that we can offset the depth a little when drawing transparent/translucent meshes. That way the variance is off a little bit for the whole mesh, and the shadowed pixels never end up fully in shadow.
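In the shadow map pass this is a tiny change when rendering the translucent casters; a hypothetical sketch of one way to do it (TranslucentDepthOffset is a made-up tuning constant):

```hlsl
// One way to do it: shift the depth that goes into the second moment for translucent casters,
// so the variance comes out slightly too large and pmax never reaches zero behind them.
float depth = position.z / position.w;
float offsetDepth = depth + TranslucentDepthOffset;     // made-up tuning value
return float4(depth, offsetDepth * offsetDepth, 0, 1);
```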

VSMs have their own share of problems, namely light leaking and inaccuracies at the edges when a pixel should be shadowed by multiple objects. This problem gets worse with the transparency "just shift some numbers around" idea, but meh, it works, and most of the time it looks good.

 

Environment Mapping (Cubemaps)

So I am pretty happy with the lighting response from the models in the test scene. However, the lighting was still pretty flat. I don't use any hemispherical diffuse this time around, so basically all the colors come from the light sources.

This is OK for some things, but when I added some glass and metal (with the helmets, see below) I knew I needed some sort of environment cubemap.

So I just generate one at the start of the program (or manually at the press of a button whenever I want to update the perspective).

I basically render the scene from the point of view of the camera in 6 directions (top, bottom, left, right, front, back) and save these 6 images to a texture array (TextureCube).

I use 512×512 resolution per texture slice; I think the quality is sufficient.

I can update the cubemap every frame, but that basically means rendering 7 perspectives per frame and updating all the render targets in between (since my main backbuffer has a different resolution than the 512×512 I use for the cubemap), and then I only get around 27 FPS. Keep in mind the whole engine is not very optimized (large, high-precision render targets, expensive pixel shader functions with lots of branching, no lookup tables, etc.).

When creating the cubemap I enable automatic mip-map chain generation (creating smaller, downsampled textures from the original one –> 256, 128, 64, 32, 16, 8, 4, 2, 1), which I will use later. Note: because of Monogame/Slim.dx limitations I cannot create the mip maps manually and have to go with simple bilinear downsampling. If I had manual access I would use some nice Gaussian blurring (which would be even more expensive to do at runtime).

When it comes to the deferred lighting pass I add one fullscreen quad which applies the environment map to all the objects. (Note: engines often apply environment maps just like deferred lights, with sphere models around them.)

Depending on the roughness of the material I select different mip levels of the environment map:

cubemap_roughness
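In the fullscreen pass this boils down to picking the mip level from the material roughness. A rough sketch of what that lookup can look like, assuming the G-buffer provides a world-space normal and a roughness value (all names are placeholders):

```hlsl
samplerCUBE EnvironmentMapSampler : register(s0);   // the 512x512 cubemap with its mip chain
static const float MaxMipLevel = 9;                  // 512 down to 1 gives mip indices 0..9

float3 SampleEnvironment(float3 worldNormal, float3 cameraToPixel, float roughness)
{
    // Reflect the view direction at the surface normal to get the lookup direction
    float3 reflection = reflect(cameraToPixel, worldNormal);

    // Rougher materials read from smaller (blurrier) mip levels
    float mip = roughness * MaxMipLevel;

    return texCUBElod(EnvironmentMapSampler, float4(reflection, mip)).rgb;
}
```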

Note: I noticed a lot of specular aliasing with very smooth surfaces. I implemented a sort of normal variance edge detection shader to tackle the issue, but it wasn't very sophisticated. The idea was to check the neighbouring pixels and compare their normals. If there was little difference, no problem. But if they faced vastly different directions, I sampled at a higher mip level to avoid aliasing.
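The check can be as simple as comparing a pixel's normal with its neighbours and pushing the environment lookup to a higher mip level where they disagree. A rough sketch of that idea, not the exact shader (it assumes the normals are stored in the 0..1 range):

```hlsl
sampler NormalMapSampler : register(s1);    // world space normals from the G-buffer
float2 InverseResolution;                   // 1 / render target size

float GetNormalVarianceMipBias(float2 texCoord, float3 centerNormal)
{
    // Compare the pixel's normal with its right and bottom neighbours
    float3 right = tex2D(NormalMapSampler, texCoord + float2(InverseResolution.x, 0)).xyz * 2 - 1;
    float3 down  = tex2D(NormalMapSampler, texCoord + float2(0, InverseResolution.y)).xyz * 2 - 1;

    // 0 when the normals agree, approaching 1 when they face very different directions
    float variance = 1 - min(dot(centerNormal, right), dot(centerNormal, down));

    // Bias the environment map lookup towards a blurrier mip where normals diverge
    return saturate(variance) * 4;          // the scale factor is a made-up tuning value
}
```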

 

Hologram effect on helmets

I got inspired by this concept art by johnsonting (DeviantArt)

The helmets on these guys have an optical effect that overlays some sort of skull on top of their visors, which makes them look really badass in my opinion.

Here is a closer look

So my basic idea was to render skulls to a separate render target and then, when composing the final deferred image, sample this render target at pixels with a certain material id (the visor/glass).
With some basic trickery I got this pixelated look. Note that I do not need to render the skulls at full resolution, since they will be undersampled anyway in the final image.
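In the compose shader this is just one extra lookup gated by the material id. A simplified sketch of the idea, with placeholder names and a made-up id encoding:

```hlsl
sampler DeferredImageSampler : register(s0);    // the composed deferred image
sampler HologramSampler      : register(s1);    // half resolution skull render, stored in red only
sampler MaterialIdSampler    : register(s2);    // per-pixel material id from the G-buffer

static const float HologramMaterialId = 3.0f / 255.0f;    // made-up id for the visor material

float4 ComposePS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(DeferredImageSampler, texCoord);

    // Only the visor/glass pixels get the skull overlay
    float materialId = tex2D(MaterialIdSampler, texCoord).a;
    if (abs(materialId - HologramMaterialId) < 0.5f / 255.0f)
    {
        // The half resolution target gets undersampled here, which gives the pixelated look
        float skull = tex2D(HologramSampler, texCoord).r;
        color.rgb += skull * float3(0.0f, 0.8f, 1.0f);     // made-up hologram tint
    }

    return color;
}
```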

First attempt: (click on the gif for better resolution)

holoHelmet2

Later, after trying some Gaussian blurring, I found that the soft, non-pixelated look is also interesting.

Here I show them both:

skull_rendering

It was hard to find appropriate free models for the helmets, but eventually I found something usable from the talented Anders Lejczak. Here is the artist's website: http://www.colacola.se/expo_daft.htm

:)

Some other ideas floating around in my head

Hybrid Supersampling

  • Maybe use double (or any 2^n multiplier) resolution for the g-buffer. Should be relatively cheap since there is almost no pixel shader work.
  • Then use the nearest-neighbour base resolution of the g-buffer for the lighting calculations.
  • Upsample the lighting with depth information (bilateral upsampling, see the sketch below) to the doubled resolution.
  • Downsample the whole thing again. Have antialiasing. Success? Need to try that out.
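For the bilateral upsampling step the idea would be to weight the surrounding low resolution lighting samples not just by distance but also by how similar their depth is to the full resolution pixel, so lighting does not bleed across edges. A purely hypothetical sketch (nothing here is implemented yet, and all names and tuning values are made up):

```hlsl
sampler LightingSampler  : register(s0);    // lighting at base resolution
sampler DepthLowSampler  : register(s1);    // depth at base resolution
sampler DepthHighSampler : register(s2);    // depth at the doubled g-buffer resolution

float2 LowInverseResolution;                // 1 / base resolution
float DepthSharpness;                       // made-up tuning value

float4 BilateralUpsamplePS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float highDepth = tex2D(DepthHighSampler, texCoord).r;

    float4 result = 0;
    float totalWeight = 0;

    // Combine the four surrounding low resolution samples
    [unroll]
    for (int y = 0; y < 2; y++)
    {
        [unroll]
        for (int x = 0; x < 2; x++)
        {
            float2 offset = (float2(x, y) - 0.5f) * LowInverseResolution;
            float lowDepth = tex2D(DepthLowSampler, texCoord + offset).r;

            // Samples from a similar depth count more, so edges stay sharp
            float weight = 1.0f / (abs(highDepth - lowDepth) * DepthSharpness + 1.0f);

            result += tex2D(LightingSampler, texCoord + offset) * weight;
            totalWeight += weight;
        }
    }

    return result / totalWeight;
}
```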

Light Refractions/Caustics for Glass Objects

For the glass dragon it would be nice to have light refraction/caustics behind the model on the ground. But we can't use photon mapping or the like.

We know, from the light's perspective, with normals/refraction/depth, where the projected light for a pixel should end up, but we can't do that in a rasterizer; we can only write to the pixel we started out with. BUT we can manipulate the vertices.
Conveniently, the Stanford dragon has almost a million of these.

-> maybe: from the light's perspective, displace the vertices of the model by the correct amount given the refraction and normal information (like you would displace pixels at the corresponding positions). The displacement depends on the depth buffer (shadow map).
-> this distorted model is then saved to another shadow/depth map (plus depth information); it could be stored in the .gb channels of the original shadow map.
-> reconstruct light convergence/caustics during the light pass with this map (see the rough sketch below)
-> possible?
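A very rough, purely hypothetical sketch of what the vertex part of that could look like, seen from the light. Nothing of this is implemented; it assumes the shadow map stores a linear distance from the light and that the vertices arrive in world space:

```hlsl
sampler ShadowMapSampler : register(s0);    // assumed: linear distance from the light

float4x4 LightViewProjection;
float3 LightPosition;
float RefractionIndex;                      // made-up value, roughly 1 / 1.5 for glass

float4 DisplaceVertexVS(float3 worldPosition : POSITION0, float3 worldNormal : NORMAL0) : POSITION0
{
    // Direction in which the light hits this vertex, bent by refraction at the surface
    float3 incident = normalize(worldPosition - LightPosition);
    float3 refracted = refract(incident, worldNormal, RefractionIndex);

    // Look up how far away the receiving surface behind the glass is (from the existing shadow map)
    float4 lightSpace = mul(float4(worldPosition, 1), LightViewProjection);
    float2 shadowTexCoord = lightSpace.xy / lightSpace.w * float2(0.5f, -0.5f) + 0.5f;
    float receiverDistance = tex2Dlod(ShadowMapSampler, float4(shadowTexCoord, 0, 0)).r;

    // Push the vertex along the refracted ray until it roughly reaches the receiver
    float travel = max(receiverDistance - distance(worldPosition, LightPosition), 0);
    float3 displaced = worldPosition + refracted * travel;

    // Rasterize the displaced vertex into the second (distorted) shadow map
    return mul(float4(displaced, 1), LightViewProjection);
}
```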

 
