Designing A Linear HDR Rendering Pipeline

Dear valued reader,

I’ve recently realized that I should rethink and rebuild my deferred renderer’s pipeline a bit, since small artifacts sometimes cropped up when render elements didn’t adhere to the same rules.

Therefore, I’ve created a graph to get a high-level overview and orientation. Luckily I got some help on Twitter, especially from Mikkel Gjoel and Nathan Reed.

I am pretty happy with the result right now, and it might be useful to others starting out, so I thought I’d share.

Of course, feedback and tips are greatly appreciated :)

Overview

deferred pipeline 4
This is what I came up with.

Linearity

A great resource on why we need to convert our color inputs to linear space can be found here: http://www.kinematicsoup.com/news/2016/6/15/gamma-and-linear-space-what-they-are-how-they-differ

In the graph above and in the text below, “gamma space” means gamma 2.2 and “linear space” means gamma 1.0.

pl_linearnonlinear
pl_lightColor

Note: I’ve duplicated the yellowish light in the middle. You can see how the color and intensity values differ between linear and nonlinear rendering. The swatch on the left is the actual color of the light; it clearly gets lost in gamma-space rendering.

Converting colors from gamma to linear space is done with a function like this:

color.rgb = pow(abs(color.rgb), 2.2f);

The other way around uses 1/2.2f as the exponent (or 0.45454545f).
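
Wrapped up as helper functions, both directions could look like this. This is just a sketch; the 2.2 exponent is the usual approximation of the sRGB curve:

float3 GammaToLinear(float3 color)
{
    // Approximate sRGB decode with a plain power curve (the true sRGB transfer function is piecewise).
    return pow(abs(color), 2.2f);
}

float3 LinearToGamma(float3 color)
{
    // The inverse: exponent 1/2.2 (~0.4545).
    return pow(abs(color), 1.0f / 2.2f);
}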

IMPORTANT: This applies only to non-linear textures. If your input already arrives in linear space (texture format = sRGB, so the hardware converts on read), you don’t have to do this. Likewise, if your output render target format is sRGB, you don’t have to convert back.

High Dynamic Range

I’ll refer to the Wikipedia page for high dynamic range rendering; it gives a good overview and provides lots of relevant sources at the bottom.

In the case of my engine it means that my lighting is effectively not limited to the 0…1 range.

Normally, if one light already renders a pixel completely white (rgb = 1,1,1), adding another light on top makes no difference: the pixel stays white. This makes dealing with lots of lights of different intensities rather bothersome.

If we use a high-precision render target format like fp16 (a 16-bit float per color channel), we are no longer limited by these boundaries and can accumulate much more color before we hit technical limits.
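
As a small illustration (made-up values, not engine code), consider two lights hitting the same pixel:

float3 lightA = float3(0.9f, 0.9f, 0.9f);
float3 lightB = float3(0.6f, 0.6f, 0.6f);
float3 clampedSum = saturate(lightA + lightB); // (1.0, 1.0, 1.0) in an 8-bit target: the second light is mostly lost
float3 hdrSum = lightA + lightB;               // (1.5, 1.5, 1.5) survives in an fp16 target for later tonemapping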

This also means that our color range is potentially greater than what our monitors can display, so we have to bring it down again with tonemapping later on.

Breakdown

Deferred Rendering Basics

pl_deferred

This is pretty much a stock deferred renderer. If we didn’t use HDR, we would essentially have a finished image after the compose step.

A g-buffer is created that stores all relevant information per pixel; in my case: albedo, normals, depth, roughness, metalness and material type in three render targets.
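
For illustration, a layout along those lines could look like this in HLSL. The exact channel packing here is my assumption, not necessarily the engine’s:

struct GBufferOutput
{
    float4 Albedo : SV_Target0; // rgb: albedo (stored in gamma space), a: material type
    float4 Normal : SV_Target1; // rgb: encoded normal, a: roughness
    float4 Depth  : SV_Target2; // r: depth, g: metalness, remaining channels free
};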

The lighting accumulation buffer stores the lighting contribution from all our different light sources. (Contrary to what the picture implies, the lighting buffer does not take albedo color into account. Sorry for the mistake.)

Color and lighting get combined in the Compose pass. Note: SSAO is also added here.

For the linear pipeline we need to convert gamma for all color elements first, though. We save the mesh textures “as is” in the g-buffer, because our albedo is stored in only 32-bit color (R8G8B8A8), so converting to linear beforehand would result in information loss. Instead, the albedo is converted to linear when it is read later.

It’s important not to forget the light colors: these need to be converted, too (but we can do that before passing the light color to the shader).
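
Putting it together, the compose step could look roughly like this sketch; the texture and sampler names are placeholders, not the engine’s actual inputs:

Texture2D AlbedoMap;  // hypothetical g-buffer albedo target
Texture2D LightMap;   // hypothetical lighting accumulation buffer
Texture2D SSAOMap;    // hypothetical SSAO buffer
SamplerState PointSampler;

float3 ComposePixel(float2 texCoord)
{
    // The g-buffer stores albedo in gamma space (see above), so convert on read.
    float3 albedo = pow(abs(AlbedoMap.Sample(PointSampler, texCoord).rgb), 2.2f);
    float3 lighting = LightMap.Sample(PointSampler, texCoord).rgb; // already linear HDR
    float ao = SSAOMap.Sample(PointSampler, texCoord).r;           // SSAO is added in this pass
    return albedo * lighting * ao;
}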

Image Based Lighting

pl_environment

To give our rendering a more coherent look, especially in areas that are not directly lit, we can define one or more sample points that capture the lit scene from all directions and store this capture in a cubemap.

We can then use this to add light contribution for all our meshes, which emulates the way that light bounces off of objects to lighten the scene.

This environment sample must be captured in linear HDR space, so we can use it without modification when lighting the scene and it stays consistent with the pipeline.
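
A minimal sketch of sampling such a capture for ambient lighting; the names and the roughness-to-mip mapping here are assumptions for illustration:

TextureCube EnvironmentCube; // the linear HDR capture, pre-filtered per mip level
SamplerState LinearSampler;

float3 SampleEnvironment(float3 normalWS, float roughness, float maxMipLevel)
{
    // Rougher surfaces read blurrier (higher) mip levels of the cubemap.
    return EnvironmentCube.SampleLevel(LinearSampler, normalWS, roughness * maxMipLevel).rgb;
}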

We could also plug in HDR captures from real images instead of making our own.

On top of that we can use “Screen Space Reflections” (SSR), which further add to the scene’s coherent look. Again, we want to sample the HDR image to be consistent.

pl_ibl

We combine both for our final image: where SSR fails, we fill in the gaps with our environment sample.
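
A sketch of that fallback, assuming the SSR result carries a confidence value in its alpha channel (an assumption for illustration, not necessarily the engine’s layout):

float3 CombineReflections(float4 ssrSample, float3 environmentSample)
{
    // Where SSR found a valid hit (alpha = confidence), use it; otherwise fall back to the cubemap.
    return lerp(environmentSample, ssrSample.rgb, ssrSample.a);
}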

It’s imperative in both cases that the source images are not processed further with post-processing or tonemapping! (They can, however, be antialiased with temporal anti-aliasing)


Temporal Anti Aliasing

pl_taa

For specifics about Temporal AA, I can highly recommend these two papers:
Temporal Reprojection AA for INSIDE (by Playdead Games)
and
High-Quality Temporal Supersampling (by Epic Games)

Knowing when to apply TAA is not trivial. On the one hand, you want to apply it before HDR post-processing like bloom, because it can vastly reduce flickering and strobing artifacts.

On the other hand, as seen in the second paper, TAA should be done after tonemapping, so that very bright pixels are weighted much less in the history blend.

I went with the proposed solution from Nathan Reed:

pl_reed
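
In short: map the HDR color into a bounded range, resolve TAA there, then invert the mapping again before HDR bloom. A minimal sketch with a simple luminance-based operator (the exact operator here is my choice for illustration):

float Luma(float3 c)
{
    return dot(c, float3(0.299f, 0.587f, 0.114f));
}

// Map HDR into [0, 1) before blending with the TAA history buffer...
float3 TonemapForTAA(float3 c)
{
    return c / (1.0f + Luma(c));
}

// ...and invert afterwards to get HDR back for bloom (input luma is below 1 by construction).
float3 InverseTonemapForTAA(float3 c)
{
    return c / (1.0f - Luma(c));
}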


Bloom

pl_output
For the final steps I do HDR post-processing, then tonemap down to LDR, convert back to gamma space, write out to a 32-bit render target and finally slap some cheap post-processing on top.

A great resource for tonemapping is, again, MJP’s blog. Highly recommended:
https://mynameismjp.wordpress.com/2010/04/30/a-closer-look-at-tone-mapping/

pl_tonemap

Tonemapping brings our high dynamic range down to a low dynamic range, e.g. a [0,1] range, which our screens can display. Ideally, tonemapping preserves detail in all color ranges, so we don’t experience crushed whites, for example. Depending on the implementation, it can also create a more interesting, less bland image (filmic tonemapping, for instance).

You can see in the image to the right how the detail in the bright spots is totally crushed when not applying tonemapping.

I chose Jim Hejl’s filmic tonemapping for my implementation.

The code looks like this:

float3 ToneMapFilmic_Hejl2015(float3 hdr, float whitePt)
{
    float4 vh = float4(hdr, whitePt);
    float4 va = (1.425f * vh) + 0.05f;
    float4 vf = ((vh * va + 0.004f) / ((vh * (va + 0.55f) + 0.0491f))) - 0.0821f;
    // Dividing by the curve evaluated at the white point maps hdr == whitePt to 1.0.
    return vf.rgb / vf.www;
}
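
Usage is a single call after the HDR post-processing; the white point is scene-dependent (the value below is just a placeholder):

float3 ldrColor = ToneMapFilmic_Hejl2015(hdrColor, 4.0f); // whitePt = 4.0f is an arbitrary example value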

Final remarks

When tackling modern HDR rendering, it is worth sitting down and making a high-level overview of the pipeline first. Making sure all elements fit together and work consistently with the same math is very rewarding at the end of the day. And fun, too.

It is worth noting that I did not include some other in-between steps. For example, forward rendering for transparent objects and screen space ambient occlusion are absent.

Forward rendering passes would be done after composing the deferred image. Particles are a notable case: they could even be inserted after temporal anti-aliasing (right before HDR bloom), since they usually don’t contribute much to aliasing and are hard for TAA algorithms to handle.

Even though this overview was written specifically for a deferred engine, most of it applies to basic forward renderers, too.

I hope you enjoyed.
