Today I want to talk about screen space emissive materials, a relatively simple technique I implemented that allows for some real-time lighting in a deferred engine (along with some problems).
Real-life examples of emissive materials would be fluorescent tubes, for instance. But they can also be used for lamp/light shapes that are hard to approximate with simple point lights. You can see some examples at the end of this blog entry.
So, I was just coming off of implementing screen space ambient occlusion (SSAO) into my deferred engine, along with trying to make screen space reflections (SSR) work.
I hadn’t worked with ray marching in screen space before, but its power became apparent immediately.
So I went ahead and implemented something I have been thinking about for quite some time – screen space emissive materials.
The idea is pretty simple – just ray march the diffuse and specular contributions for each pixel.
Per Pixel Operations
The first question is: which pixels?
The pixels used are bound by a sphere – similar to normal point lights in a deferred rendering engine (we don’t want to check every pixel on the screen when the model only covers a small fraction of it). I simply take the bounding sphere of the model (see the smaller circle around the dragon) and multiply its radius by some factor, depending on the emissive properties of the material.
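The bounding-sphere scaling can be sketched like this. This is a minimal illustration, not the author’s actual code; the names and the idea of using the emissive strength directly as the scale factor are my assumptions:

```python
from dataclasses import dataclass


@dataclass
class BoundingSphere:
    center: tuple   # world-space center (x, y, z)
    radius: float


def light_volume(mesh_bounds, emissive_factor):
    """Scale the mesh bounding sphere into the volume of pixels the
    emissive material can plausibly light; only pixels inside this
    sphere run the ray marching shader."""
    return BoundingSphere(mesh_bounds.center,
                          mesh_bounds.radius * emissive_factor)
```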
Then I ray march a number of times along random vectors in a hemisphere (oriented by the pixel’s normal) to get the diffuse contribution. If a ray hits the emissive material, I add some diffuse contribution to the pixel.
For the specular contribution I reflect the incidence vector (camera direction) on the normal and ray march to check if I hit something. I actually use more than one reflection vector – depending on the roughness of the material this is really more of a cone.
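The per-pixel steps above can be sketched as follows. This is a hedged CPU-side illustration of the idea, not the actual shader; `hits_emissive` stands in for projecting a world-space point and comparing it against the emissive buffer, and all names are mine:

```python
import math
import random


def hemisphere_sample(normal):
    """Random unit vector in the hemisphere around 'normal'
    (rejection sampling, then flipped to the normal's side)."""
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in v))
        if 0.0 < length <= 1.0:
            break
    v = [c / length for c in v]
    if sum(v[i] * normal[i] for i in range(3)) < 0.0:
        v = [-c for c in v]
    return v


def reflect(incident, normal):
    """Mirror the camera-to-pixel vector on the surface normal –
    the starting direction for the specular rays."""
    d = sum(incident[i] * normal[i] for i in range(3))
    return [incident[i] - 2.0 * d * normal[i] for i in range(3)]


def raymarch_hit(origin, direction, steps, step_size, hits_emissive):
    """March along 'direction'; after each step, 'hits_emissive' decides
    whether the sample point landed on the emissive material."""
    p = list(origin)
    for _ in range(steps):
        p = [p[i] + direction[i] * step_size for i in range(3)]
        if hits_emissive(p):
            return True
    return False
```

For diffuse, one would call `raymarch_hit` with several `hemisphere_sample` directions per pixel; for specular, with one or more directions jittered around `reflect`, widening the jitter with material roughness.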
Here is an early result. I think it looks pretty convincing.
Now, there are 3 major problems with the way I described it above:
- If the emissive material is obstructed there is no lighting happening around it (think of a pillar in front)
- If the emissive material is outside the screen space there is no lighting happening
- The results are very noisy (see above)
In my case I went with the approach of saving world space coordinates for the meshes (translated by the origin of the mesh, so precision stays good). I draw the model to a separate render target, so the scene depth is not considered and cannot occlude it.
One could use a depth map here instead, but I went with this approach this time.
This makes the depth comparison pretty trivial, but it may not be the most efficient solution.
Note: For each light source I clear the emissive depth/world-position map, draw the object, then calculate the lighting and add it to the lighting buffer. This way emissives cannot occlude each other, and I can optimize the lighting steps for each individual mesh.
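That per-emitter loop could look roughly like this. All callables here are hypothetical stand-ins for engine functions; only the ordering reflects the text above:

```python
def render_emissive_passes(emitters, clear_target, draw_world_positions,
                           compute_lighting, add_to_light_buffer):
    """One pass per emissive mesh: the world-position map only ever
    holds a single mesh, so emitters cannot occlude one another, and
    each lighting step can be tuned to that mesh alone."""
    for mesh in emitters:
        clear_target()                  # wipe emissive depth/world-position map
        draw_world_positions(mesh)      # scene depth is ignored here
        light = compute_lighting(mesh)  # ray march inside the light volume
        add_to_light_buffer(light)
```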
Apart from that, all of the techniques that help SSAO and SSR can be applied here as well; bilateral blur is a prime example.
Another commonly used solution is to change the noisy vectors every frame and use some temporal accumulation to smooth out the results.
Simply using more samples per pixel is the most obvious solution, but performance limitations often do not allow for that.
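The temporal accumulation mentioned above boils down to an exponential moving average per pixel. A minimal sketch, where the blend weight `alpha = 0.1` is purely illustrative:

```python
def temporal_accumulate(history, current, alpha=0.1):
    """Blend this frame's noisy result over the history buffer; since
    the sample vectors are jittered every frame, the noise averages
    out over time."""
    return [(1.0 - alpha) * h + alpha * c for h, c in zip(history, current)]
```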
Screen Space Limitations
As soon as the materials aren’t visible any more, the whole thing basically breaks, since we can’t ray march against anything.
Philippe Rollin on twitter (@prollin) suggested to “render a bigger frame but show only a part of it”
This would be a performance problem if we had to render the whole frame at a higher resolution, but since we draw the emissive material to a separate texture, we can use a neat trick here: render that texture with a larger field of view, so meshes slightly outside the visible frame are still captured.
Then, when calculating the lighting, we reproject our screen coordinates with the new view*projection matrix and sample from there. Barely any cost.
Now, the local resolution goes down a bit, but for a factor of 2, for example, it is not noticeable at all.
To limit that loss, one could vary the alternate field of view depending on how far the meshes are out of view, but I found the results good enough with a constant factor of 2.
Note: Simply changing the FOV is pretty naive. It would be better to also change the aspect ratio so that the additional coverage at the top/bottom equals that at the sides: a larger FOV gives proportionally more coverage in the x-direction than in the y-direction when the aspect ratio is > 1, and this should be adjusted for.
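One way to implement that note is to widen the frustum by an equal absolute margin on every side, working in half-frustum tangent units rather than on the FOV angle itself. This is my own sketch of the idea, not the author’s implementation:

```python
import math


def widened_frustum(fov_y, aspect, margin):
    """Add the same absolute margin (in tangent units) to every side of
    the frustum. Unlike naively enlarging the FOV angle, this keeps the
    extra coverage at the top/bottom equal to the sides, which means the
    aspect ratio of the alternate projection changes.
    Returns (new vertical FOV in radians, new aspect ratio)."""
    tan_y = math.tan(fov_y / 2.0)
    tan_x = tan_y * aspect
    new_tan_y = tan_y + margin
    new_tan_x = tan_x + margin
    return 2.0 * math.atan(new_tan_y), new_tan_x / new_tan_y
```

With a wide aspect ratio (> 1), the equal margins pull the alternate projection’s aspect ratio down toward 1, which is exactly the adjustment described in the note.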
Can anything be done about that?
Well, you can always draw several emissive render targets with different orientations/projections and check against each of them (as is suggested for many screen space effects), but this is honestly not viable in terms of performance.
What I would suggest instead is fading to a deferred light of similar strength. Not optimal, but people overlook so many rendering errors and discontinuities that it might work? I don’t know.
So yeah, that’s all thanks for reading, I hope you all enjoyed it. Bye :)
(click on the image above for a large view. Note that SSAO creates black shadows below the dragon, which obviously doesn’t make any sense with an emissive material)
Performance is relatively bad right now. As you can see in the image above, the emissive effect (which at that proximity covers all pixels) costs ~15 ms for a single material at ~1080p (on a Radeon R9 280).
The SSEM is rendered at full resolution with 16 samples and 8 ray march steps for diffuse, plus 4 samples with 4 steps for specular.
There is a lot of room for improvement, as mentioned above. The diffuse part especially does not have to be rendered at full resolution; half or even quarter resolution with a bilateral blur and upscale would most likely have little impact on visual fidelity.
A smaller sample count makes the results noisier, but that too can be helped with some blur, especially for diffuse.
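The core of such a depth-aware (bilateral) upsample is weighting each low-resolution sample by how well its depth matches the full-resolution pixel. A rough sketch under my own assumptions (a production kernel would also fold in bilinear and normal weights):

```python
import math


def bilateral_weight(depth_hi, depth_lo, sigma_d=0.1):
    """Depth-aware weight: low-res samples whose depth differs from the
    full-resolution pixel contribute less, so light does not bleed
    across geometry edges. sigma_d is an illustrative falloff."""
    return math.exp(-((depth_hi - depth_lo) ** 2) / (2.0 * sigma_d ** 2))


def bilateral_upsample(pixel_depth, low_res_samples):
    """low_res_samples: (light_value, depth) pairs from the nearest
    half-resolution texels; returns the depth-weighted average."""
    total_w = total = 0.0
    for value, depth in low_res_samples:
        w = bilateral_weight(pixel_depth, depth) + 1e-6  # avoid div by zero
        total += w * value
        total_w += w
    return total / total_w
```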
In this picture we have many more emissive materials, at 26 ms total at ~1080p. Because the screen coverage / overdraw is relatively low, the performance is not much worse.
Conclusion and further research
I presented a basic solution for rendering emissive materials in real-time applications and proposed possible solutions / work-arounds for typical issues with screen space ray marching based algorithms. It needs no precomputation, and all meshes can be transformed at runtime.
I am not sure whether or not this can actually be viable in high performance applications, but I am confident the rendering cost can be much improved.
I am sorry for not providing any code or pseudo code snippets, maybe I’ll update the article eventually.
A possible extension of this method would be textured emissive materials: read out the color at the sampling position and add it to the lighting contribution. This would greatly extend the usefulness of the implementation but bring a number of new problems with it, for example when some color values are occluded.
Making the rendering more physically based would be another goal; currently there are many issues with accuracy and false positives/negatives caused by wrong ray marching assumptions in my version.