Short video overview:
In real life, and in expensive offline rendering solutions, an image is not a real “snapshot” but rather the continuous integration of samples (or photons, if you will) over a frame’s duration.
In real-time rendering it’s not feasible to render multiple frames and average them into the final one, since we don’t get framerates anywhere near high enough.
So the usual approach is to render a velocity buffer that basically says how far each pixel has moved since the last frame. We can do that by passing both the current and the previous model-view-projection matrix, transforming each vertex with both, and saving the difference of the resulting screen-space coordinates. This can be done with MRTs easily.
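In plain Python (not shader code; the helper names are my own), the per-vertex velocity calculation could look like this:

```python
# Sketch of the per-vertex velocity calculation. Each vertex is transformed
# by the current and the previous model-view-projection matrix; the
# difference of the resulting screen-space positions goes into the
# velocity buffer.

def transform(mvp, pos):
    """Multiply a 4x4 row-major matrix with a homogeneous position."""
    return [sum(mvp[r][c] * pos[c] for c in range(4)) for r in range(4)]

def ndc_xy(clip):
    """Perspective divide: clip space -> normalized device coordinates."""
    return (clip[0] / clip[3], clip[1] / clip[3])

def screen_velocity(mvp_now, mvp_prev, local_pos):
    now = ndc_xy(transform(mvp_now, local_pos))
    prev = ndc_xy(transform(mvp_prev, local_pos))
    return (now[0] - prev[0], now[1] - prev[1])

# Example: an object that moved 0.2 units along x between frames,
# using simple translation matrices.
def translation(tx):
    return [[1, 0, 0, tx],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]

v = screen_velocity(translation(0.2), translation(0.0), [0, 0, 0, 1])
```

In the actual shader this runs per vertex, the velocity gets interpolated, and the pixel shader writes it into one of the MRT targets.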
Then, in a post-processing pass, we can blur each pixel along the direction stored in the velocity buffer.
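A minimal sketch of that blur pass, again in plain Python with made-up helper names (the real version samples a texture in a pixel shader):

```python
# Each pixel is sampled several times along its stored velocity vector and
# the results are averaged, which smears the image along the motion.

def motion_blur_sample(sample_fn, uv, velocity, num_samples=8):
    """sample_fn(u, v) -> color (a single float here for simplicity)."""
    total = 0.0
    for i in range(num_samples):
        t = i / (num_samples - 1) - 0.5      # -0.5 .. 0.5 across the vector
        total += sample_fn(uv[0] + velocity[0] * t,
                           uv[1] + velocity[1] * t)
    return total / num_samples

# Usage: a hard vertical edge blurred by a horizontal velocity.
image = lambda u, v: 1.0 if u >= 0.5 else 0.0
blurred = motion_blur_sample(image, (0.5, 0.5), (0.1, 0.0))
```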
For more information see the tutorial from John Chapman, which I linked below.
A few issues remain:
- At high framerates the difference between consecutive frames is too small for the motion blur to be visible. We can calculate the factor between the target frametime and the actual frametime (for example, with a 40 ms target for 25 Hz: if the actual frametime is 20 ms, our factor is 2) and increase the motion blur by that factor to give it a consistent strength across variable framerates.
- The motion blur is limited to the object itself. This is not what we actually want, since we want to smear the object into the background, so we need to somehow enlarge the blur field.
This can be done with various dilation methods; a good reference is the “Next Generation Post Processing in Call of Duty” presentation linked below.
What I did instead of any per-pixel operations is to render the velocity buffer in a whole different pass and extrude the balls’ vertices in the direction of the velocity vector (and away from it on the backside).
This is sort of “hacky” and it only works well for this sample; in many cases the vertex displacements are not accurate enough for what we want.
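For illustration, the extrusion decision can be sketched like this (my own paraphrase in Python, not the actual vertex shader):

```python
# Vertices whose normal points along the motion keep the current position
# (leading edge); vertices facing away are pushed back to the previous
# frame's position, stretching the mesh across the path of motion.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def stretch_vertex(pos_now, pos_prev, normal, velocity):
    return pos_now if dot(normal, velocity) >= 0 else pos_prev

velocity = (1.0, 0.0, 0.0)
front = stretch_vertex((1.0, 0, 0), (0.8, 0, 0), (1, 0, 0), velocity)
back = stretch_vertex((-1.0, 0, 0), (-1.2, 0, 0), (-1, 0, 0), velocity)
```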
Also keep in mind that this motion blur technique is not super cheap. Your velocity buffer probably needs more than 8-bit precision (I worked with 16 bit), and you need to pass an additional transformation matrix to the vertex shader.
The second point might not sound like much, but if you are using skinned meshes you need to pass not only your current bone transformations, but also the ones from the previous frame. If you are using DirectX 9 you may run out of vertex shader constant registers.
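The framerate compensation from the first point is just a division; as a tiny sketch (hypothetical name):

```python
# Scale the stored velocities by target frametime / actual frametime so the
# blur strength stays constant across framerates.

def blur_scale(target_frametime_ms, actual_frametime_ms):
    return target_frametime_ms / actual_frametime_ms

# A 40 ms blur window while actually running at 20 ms per frame:
factor = blur_scale(40.0, 20.0)  # velocities get doubled
```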
- John Chapman’s blog entry
- Jorge Jimenez: Next Generation Post Processing in Call of Duty AW
Order Independent Transparency
Just something I’ve implemented / copied based on Morgan McGuire’s work.
It’s really nothing to write home about from my side; I originally intended to research this topic in more depth, but haven’t found the time or motivation to do so.
The whole premise of order independent transparency is that you don’t have to worry about the order in which you draw your transparent objects. This is not trivial at all, but Mr. McGuire and his team have found a relatively easy-to-implement solution.
In my tests I haven’t found it to be super robust and the algorithm seemed to struggle with relatively high alpha values, but that may be my fault.
The upside is the ease of use and implementation compared to all other relevant research, which usually revolves around some sort of per-pixel lists and compute shader work. This one works with basic pixel shaders and some blend modes.
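As a rough illustration of why the result is order independent, here is the accumulate/resolve math in plain Python (single-float “colors” for brevity, and the depth-based weight from the paper simplified to a constant — the real weight function depends on depth and alpha):

```python
# One target accumulates w*a*c (rgb) and w*a (alpha) additively; the other
# keeps the product of (1 - a) ("revealage"). Sums and products commute,
# so the draw order of the transparent fragments doesn't matter.

def accumulate(fragments, weight_fn=lambda depth: 1.0):
    accum_rgb, accum_a, revealage = 0.0, 0.0, 1.0
    for color, alpha, depth in fragments:
        w = weight_fn(depth)
        accum_rgb += color * alpha * w
        accum_a += alpha * w
        revealage *= 1.0 - alpha
    return accum_rgb, accum_a, revealage

def resolve(accum_rgb, accum_a, revealage, background):
    avg = accum_rgb / max(accum_a, 1e-5)
    return avg * (1.0 - revealage) + background * revealage

frags = [(1.0, 0.5, 0.3), (0.0, 0.5, 0.7)]   # (color, alpha, depth)
a = resolve(*accumulate(frags), background=0.25)
b = resolve(*accumulate(list(reversed(frags))), background=0.25)
# a == b: same composite in either draw order
```

In the actual technique both running totals are produced by the GPU blend units (additive blending for the accumulation target, multiplicative for revealage), which is exactly why per-target blend modes are needed.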
The one thing I did have to do was implement per-render-target blend modes for MRT in MonoGame. By default MonoGame/XNA uses the same blend mode for all render targets.
This behaviour can easily be changed by making TargetBlendState inside the BlendState class public and manipulating it. TargetBlendState is an array of blend states, one per render target in an MRT setup, so to change the blending of our second RT, for example, we modify the corresponding entry.
- older version: http://casual-effects.blogspot.de/2014/03/weighted-blended-order-independent.html