Deferred Engine Progress Part 3

It’s been a while since I posted about the deferred engine’s progress, but here it is – a lot has changed, and I have mostly stuck to writing about it in the thread on monogame.net.
As always, you can find the solution on GitHub.

Renderer

I uploaded this video a week ago; only some UX features and the SSR quality have changed since then.

As detailed in my latest blog entry, I am now running a fully linear HDR pipeline. I reordered the pipeline steps to make sure everything is coherent and works hand-in-hand with the other elements. For example, I made sure that my environment cubemaps are stored in gamma 1.0 HDR format, so they can be used directly for specular and diffuse contributions to the light buffer.
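To illustrate why the storage format matters: with a prefiltered cubemap kept in linear (gamma 1.0) HDR, the specular lookup can feed the light buffer directly, with no gamma decode in between. A minimal sketch (all names and the roughness-to-mip mapping are illustrative, not the engine’s actual code):

```hlsl
// Sketch: sampling a linear HDR environment cubemap for ambient specular.
// EnvironmentMap, LinearSampler and NumEnvMips are assumed names.
TextureCube EnvironmentMap;        // stored in linear (gamma 1.0) HDR format
SamplerState LinearSampler;

static const float NumEnvMips = 6; // hypothetical mip count of the cubemap

float3 AmbientSpecular(float3 reflectionWS, float roughness)
{
    // Rougher surfaces read blurrier mips of the prefiltered cubemap.
    float mip = roughness * (NumEnvMips - 1);

    // Because the cubemap is stored linearly, the sample can be added to
    // the (linear HDR) light buffer directly - no pow(x, 2.2) decode.
    return EnvironmentMap.SampleLevel(LinearSampler, reflectionWS, mip).rgb;
}
```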

With correctly working HDR I have also changed my bloom filter to not use any thresholds any more and adjusted it to work well with the default settings. I use a 5-mip blur, as proposed by Jorge Jimenez in “Next Generation Post Processing in Call of Duty: Advanced Warfare”. It’s a modified version of my bloom filter for MonoGame (github), as it uses higher-precision buffers to account for the HDR input.
Since I don’t use thresholds, the image gets a bit brighter in general, but this can be offset by auto-exposure, since bloom is applied before tonemapping (I use manual exposure).
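For reference, the combine step of such a thresholdless bloom might look like the sketch below; the five mips are assumed to be pre-blurred during downsampling, and the texture names and weights are purely illustrative:

```hlsl
// Sketch: combining 5 pre-blurred bloom mips without a threshold.
Texture2D BloomMip0; Texture2D BloomMip1; Texture2D BloomMip2;
Texture2D BloomMip3; Texture2D BloomMip4;
SamplerState LinearSampler;

float4 CombineBloomPS(float4 pos : SV_Position, float2 texCoord : TEXCOORD0) : SV_Target
{
    // No threshold: every mip contributes, so very bright HDR values bloom
    // strongly while dim ones only add a subtle glow.
    float3 bloom =
          0.50 * BloomMip0.Sample(LinearSampler, texCoord).rgb
        + 0.60 * BloomMip1.Sample(LinearSampler, texCoord).rgb
        + 0.60 * BloomMip2.Sample(LinearSampler, texCoord).rgb
        + 0.45 * BloomMip3.Sample(LinearSampler, texCoord).rgb
        + 0.35 * BloomMip4.Sample(LinearSampler, texCoord).rgb;

    return float4(bloom, 1);
}
```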

My screen space reflections (gif above: SSR enabled, SSR disabled) were modified to have a stochastic approximation of different surface roughness values, by changing the reflection vector every frame to simulate the microsurface details. At one point I had it approximate PBR-ish lobes, but right now it’s using simple cones without much accuracy. For the search, I step in the direction of the reflection, and when I hit something (my ray is “inside” an object) I use a few backward steps until I am “out” again. I then approximate the hit point by weighting the depth differences. Should you attempt to come up with your own algorithm, do not forget that marching along a line in an orthonormal space (like view or world space) will not yield a straight line in projection space!
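The core of the search, reduced to a sketch (buffer names, step counts and the sign convention of view-space z growing into the screen are assumptions, not the engine’s exact code):

```hlsl
// Sketch: forward march, then backward refinement, then a hit point
// weighted by the depth differences on either side of the surface.
Texture2D DepthMap;              // linear view-space depth
SamplerState PointSampler;
float4x4 Projection;             // view -> clip

float2 ToTexCoord(float3 positionVS)
{
    // Project each sample individually: a straight line in view space
    // is not a straight line in projection space.
    float4 clipPos = mul(float4(positionVS, 1), Projection);
    float2 ndc = clipPos.xy / clipPos.w;
    return ndc * float2(0.5, -0.5) + 0.5;
}

float3 TraceReflection(float3 originVS, float3 reflectVS)
{
    const int Steps = 32;
    float3 stepVec = reflectVS * 0.1;    // fixed view-space step size
    float3 p = originVS;

    [loop]
    for (int i = 0; i < Steps; i++)
    {
        p += stepVec;
        float sceneDepth = DepthMap.Sample(PointSampler, ToTexCoord(p)).r;

        if (p.z > sceneDepth)            // the ray is "inside" an object
        {
            // A few backward steps until we are "out" again.
            float3 back = stepVec * 0.25;
            float3 pIn = p;
            float dIn = p.z - sceneDepth;

            [loop]
            for (int j = 0; j < 4; j++)
            {
                p -= back;
                sceneDepth = DepthMap.Sample(PointSampler, ToTexCoord(p)).r;
                if (p.z <= sceneDepth) break;
                pIn = p;
                dIn = p.z - sceneDepth;
            }

            // Approximate the hit by weighting the depth differences.
            float dOut = max(sceneDepth - p.z, 0);
            return lerp(pIn, p, dIn / max(dIn + dOut, 1e-5));
        }
    }
    return originVS;                     // no hit found
}
```

The per-frame roughness jitter mentioned above would perturb reflectVS before the trace, e.g. inside a cone whose angle grows with roughness.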

I’ve had simple SSAO before, but I have tried a variety of new algorithms, and right now I am using horizon based ambient occlusion. Horizon based methods are very interesting; I would highly recommend checking out Practical Real-Time Strategies for Accurate Indirect Occlusion by Jorge Jimenez (you can find the ppt and pdf at the link), which proposes a different evaluation based on the results of simple line stepping. One point I would like to explore more deeply in the future is how to extract bent normals and reflection occlusion from horizon based methods, to further enhance the quality of lighting and specular reflections. SSGI and SSDO are both interesting things to do, but I’m not sure how well these passes scale when they depend on light information, since I potentially cast hundreds of lights. Passing a giant array of all lights (position, radius) and checking it for every pixel seems ill-advised. I think with a tiled or clustered renderer this might be viable, though.
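A heavily reduced sketch of the horizon stepping for a single direction (helper names, step counts and the falloff are assumptions; the published methods evaluate the horizon angles more carefully):

```hlsl
// Sketch: track the highest horizon found while stepping along one
// screen-space direction; 1 minus the average over several directions
// gives the occlusion term.
Texture2D DepthMap;              // linear view-space depth
SamplerState PointSampler;
float2 InverseFocal;             // hypothetical: tan(fov/2) * (aspect, 1)

float3 GetViewPosition(float2 texCoord)
{
    // Reconstruct a view-space position from linear depth, assuming
    // +z grows into the screen.
    float depth = DepthMap.Sample(PointSampler, texCoord).r;
    float2 ndc = texCoord * float2(2, -2) + float2(-1, 1);
    return float3(ndc * InverseFocal * depth, depth);
}

float HorizonOcclusion(float2 texCoord, float3 positionVS, float3 normalVS,
                       float2 stepSS) // small screen-space step per sample
{
    const int Steps = 8;
    float maxHorizon = 0;

    [loop]
    for (int i = 1; i <= Steps; i++)
    {
        float3 sampleVS = GetViewPosition(texCoord + stepSS * i);
        float3 h = sampleVS - positionVS;

        // Elevation of the sample above the tangent plane, attenuated
        // with distance so far-away geometry occludes less.
        float sinHorizon = dot(normalize(h), normalVS);
        float falloff = saturate(1 - dot(h, h) * 0.25); // hypothetical radius
        maxHorizon = max(maxHorizon, sinHorizon * falloff);
    }
    return maxHorizon;
}
```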

[Image: point light shadow filtering results]

As detailed in this blog entry, I changed my point light shadows to have better filtering. Right now I support deferred point and directional lights. Point lights can also have “volumetric” properties. These are implemented for fun and have no realism attached. Usually you would want the participating medium, like fog, to be a separate entity that is affected by the light, but in this case I carry the “fog” around with the light. The rendering process for shadowed and unshadowed volumes is completely different, though. For the unshadowed one I calculate the entry and exit point of our camera ray with some geometry math (line/sphere intersection) and combine that info with the current pixel’s depth to calculate the distance traveled by our camera ray.
For the shadowed approach I simply march through, sample the shadow map at discrete points and add up the lit samples. With some static and temporal noise I can get good results with few steps and high accuracy (10 steps by default). Note that volumetrics are usually a good candidate for half-resolution rendertargets, since they are usually low-frequency. Should I separate volumetrics from lights, I would surely use half-res.
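For the unshadowed case, the analytic part boils down to a line/sphere intersection clamped by the scene depth. A sketch, assuming a normalized camera ray and depth stored as distance along it (all names illustrative):

```hlsl
// Sketch: how far does the camera ray travel inside the light's "fog"
// sphere? Solve |o + t*d - c|^2 = r^2 for t, then clamp by the camera
// position and by the opaque geometry at this pixel.
float SphereTravelDistance(float3 rayOriginVS, float3 rayDirVS,
                           float3 lightPositionVS, float lightRadius,
                           float pixelDistance)
{
    float3 oc = rayOriginVS - lightPositionVS;
    float b = dot(oc, rayDirVS);
    float c = dot(oc, oc) - lightRadius * lightRadius;
    float discriminant = b * b - c;

    if (discriminant <= 0) return 0;            // the ray misses the sphere

    float sq = sqrt(discriminant);
    float tEntry = max(-b - sq, 0);             // the camera may be inside
    float tExit  = min(-b + sq, pixelDistance); // geometry blocks the rest

    return max(tExit - tEntry, 0);              // distance traveled in fog
}
```

The shadowed path would then march between tEntry and tExit in (by default) 10 jittered steps, testing each sample against the shadow map and averaging the lit ones.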

[Image: deferred decals]

Deferred decals are a thing that has been added since I created the video at the beginning. Right now they are a low-effort integration which only changes the albedo value of the g-buffer, since I don’t have many use cases for other implementations, but they can easily be extended to write to the other buffers, too. They were implemented mainly because someone asked how to do it here.
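For anyone wondering how such a pass works: the decal is rendered as a box, and each covered pixel projects the reconstructed position into the box’s local space. A sketch (the position reconstruction assumes a hardware depth buffer here for brevity; all names are illustrative):

```hlsl
// Sketch: albedo-only deferred decal. Pixels outside the unit box are
// clipped; the rest sample the decal texture and blend into the albedo
// render target.
Texture2D DepthMap;
Texture2D DecalMap;
SamplerState LinearSampler;
float4x4 InverseViewProjection;
float4x4 InverseDecalWorld;      // world -> decal space, box = [-0.5, 0.5]^3
float2 InverseResolution;

float3 GetWorldPosition(float2 texCoord, float depth)
{
    float4 clipPos = float4(texCoord * float2(2, -2) + float2(-1, 1), depth, 1);
    float4 worldPos = mul(clipPos, InverseViewProjection);
    return worldPos.xyz / worldPos.w;
}

float4 DecalPS(float4 svPosition : SV_Position) : SV_Target
{
    float2 texCoord = svPosition.xy * InverseResolution;
    float depth = DepthMap.Sample(LinearSampler, texCoord).r;
    float3 worldPos = GetWorldPosition(texCoord, depth);

    // Into the decal's local space; clip everything outside the box.
    float3 localPos = mul(float4(worldPos, 1), InverseDecalWorld).xyz;
    clip(0.5 - abs(localPos));

    // Project along the box's local z axis onto its xy face.
    float4 albedo = DecalMap.Sample(LinearSampler, localPos.xy + 0.5);
    clip(albedo.a - 0.01);       // skip fully transparent texels
    return albedo;               // writes only to the albedo target
}
```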

[Image: current g-buffer setup]

Speaking of the g-buffer, this is my current setup. It has changed a bit since the last update: it now works with view space normals and linear view space depth. That change is also the reason why a lot of previously available features no longer work; I had to rewrite everything to work with the new buffer and decided not to update some features I deemed unnecessary.

It should be noted that I still have one and a half channels available, since I could easily compress the view space normals to use only two channels (the third component always points toward the camera) and I don’t need 16-bit precision for roughness.
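The two-channel compression works because any normal that survives to the g-buffer belongs to a visible surface, so the sign of its view-space z is known. A sketch (the sign convention of the camera looking down -z is an assumption):

```hlsl
// Sketch: store only x and y of the normalized view-space normal and
// rebuild z on read.
float2 EncodeNormalVS(float3 normalVS)
{
    return normalVS.xy;          // z is implied
}

float3 DecodeNormalVS(float2 encoded)
{
    // Visible surfaces face the camera, so z has a fixed sign
    // (negative here, assuming the camera looks down -z in view space).
    float z = -sqrt(saturate(1 - dot(encoded, encoded)));
    return float3(encoded, z);
}
```

Strictly speaking, perspective lets grazing normals tilt slightly away from the camera, which is why encodings like spheremap or octahedral mapping are popular alternatives.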

A worthy thing to include might be a velocity buffer. This way I could simulate motion blur and, more importantly, vastly improve my temporal anti-aliasing to handle moving objects gracefully. This would require additional setup for moving objects, but it’s not very complicated (I implemented motion blur like this here).
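A sketch of what writing such a buffer could look like (matrix names are assumptions; moving objects additionally need their previous frame’s world matrix):

```hlsl
// Sketch: per-pixel screen-space velocity from the current and previous
// frame's transforms, usable by both motion blur and TAA reprojection.
float4x4 WorldViewProjection;          // current frame
float4x4 PreviousWorldViewProjection;  // previous frame

struct VSOutput
{
    float4 Position    : SV_Position;
    float4 CurrentPos  : TEXCOORD0;
    float4 PreviousPos : TEXCOORD1;
};

VSOutput VelocityVS(float4 position : POSITION0)
{
    VSOutput output;
    output.Position    = mul(position, WorldViewProjection);
    output.CurrentPos  = output.Position;
    output.PreviousPos = mul(position, PreviousWorldViewProjection);
    return output;
}

float2 VelocityPS(VSOutput input) : SV_Target
{
    // Perspective divide both positions, then the NDC delta is the
    // velocity of this pixel.
    float2 current  = input.CurrentPos.xy / input.CurrentPos.w;
    float2 previous = input.PreviousPos.xy / input.PreviousPos.w;
    return current - previous;
}
```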

User Experience

I have developed an easy-to-use GUI throughout my other projects, which needed a simple way to change shader variables. I’ve made use of this GUI in my deferred engine, too, and it should make it much easier to quickly change shader values and modify the scene.

I have also included an option to show all keyboard controls on screen, as you can see on the left. This should make it even easier for others to find their way around the viewport.

Another thing I’ve added is a basic set of highlighting tools (which you can toggle on and off from the GUI to further enhance readability).

Not shown in the video is a new part of the GUI that shows the current transformation mode, so you do not have to use keyboard commands to change modes. Along with that I have added the ability to scale in all directions and to switch the gizmos to a local coordinate system. This should make it much easier to set up the scene the way you want.

Quick overview of gizmo controls

I think adding a model viewer with drag & drop as well as a material editor would be neat, but at that point I might look into making a new editor from the ground up (with a history / Ctrl-Z and the ability to save/load scene setups).

Code

When I started out with this project I just wanted to create a basic deferred renderer, but the project has outgrown the original scope by a wide margin. While it was fine to originally have all the rendering done in one class (at the time I was also finding out that inlining is great for performance!), I am now working to move every part of the rendering into its own render module for easier maintenance and iteration on my side. This will take a while, but things are looking good.

I have also refactored naming and outsourced the GUI into its own project.

Either way, the current state should be much more readable than it used to be 8 months ago.
