Roughness mip maps based on normal maps?

 

Welcome to a little thought experiment of mine!

I haven’t really been in touch with rendering technology for the past few months, but today the sunshine brought a new idea into my head:
Instead of using standard downsampling to generate our roughness mip maps, why not improve their quality by using the normal maps?

This whole post is more of an idea – not an implementation guide with specific numbers!

If you just want to read about the idea, scroll down to “Integrating information from normal maps into roughness maps”.

Geometry representation in games

First, a small recap of how we usually represent meshes and their shape in real-time 3D engines.

On the largest scale we use triangles (“polygons”) to actually shape the general geometry of an object.

For example, a house.

On the detail scale we use normal maps.

On the smallest scale we use roughness / smoothness maps.

Basically all of them are about how light interacts with a surface.

For the house, it’s pretty obvious how light will bounce off the large surfaces.

For finer details, like holes or grain structures in the wood, we use a normal map, which tells the light which way to bounce without using more polygons. Every pixel basically represents the direction the surface is facing at that point.
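To make “every pixel is a direction” a bit more concrete, here is a minimal sketch in Python (my choice for illustration here, not actual engine code) of how the common 8-bit tangent-space encoding is decoded back into a unit vector:

```python
import numpy as np

def decode_normal(texel_rgb):
    """Decode an 8-bit tangent-space normal-map texel into a unit direction."""
    # Map each channel from [0, 255] to [-1, 1], then renormalize.
    n = np.asarray(texel_rgb, dtype=np.float32) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

# The classic "flat" normal-map color (128, 128, 255) decodes to roughly
# (0, 0, 1): a surface facing straight up in tangent space.
print(decode_normal([128, 128, 255]))
```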


For even smaller details, which are much, much smaller than the camera (and often our eyes) can capture, it’s basically the same thing, except that our “normal map” is sampled down even further.

Think of it as a normal map so incredibly fine that we only get an average over thousands of tiny sample points, which tells us in how many different directions light bounces on a micro scale.

 

If light bounces off in all kinds of directions, we think of the surface as “rough” (think dry mud); if it bounces very uniformly in one direction, we think of it as “smooth” (think of a mirror).


Mip Mapping

A common problem is that even normal maps have “too much” detail in a typical scene. Quick recap:

We project our 3d scene onto a grid of pixels, for example in 1080p it’s a resolution of 1920 x 1080 pixels.


 

Let’s assume we have an object and it has a texture, as well as a normal map applied to it. Both have some decent resolution, like 512 x 512 pixels.

However, if the object is far away from the camera, all of its geometry might only cover a few pixels on screen. In that case one pixel may cover the whole object, and we really cannot show all the detail stored in the 512 x 512 texture.

So we basically need to take an average of all the texels that get “compressed” into one pixel of our projection.

Example: an object is just far enough away that 2 x 2 texels of its surface cover 1 x 1 pixel in the camera projection.


Because it is too expensive to average all the texture information at runtime, we simply store precomputed, scaled-down versions of the texture, and the graphics card selects the right level at runtime.
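As a rough sketch of that offline step, assuming a square power-of-two texture and the simplest possible 2 x 2 box filter (real toolchains often use better filters), the whole chain could be built like this:

```python
import numpy as np

def build_mip_chain(texture):
    """Build a mip chain by repeatedly averaging 2x2 texel blocks (box filter)."""
    mips = [texture]
    while mips[-1].shape[0] > 1:
        t = mips[-1]
        # Each texel of the next level is the mean of a 2x2 block of this level.
        mips.append(0.25 * (t[0::2, 0::2] + t[1::2, 0::2] +
                            t[0::2, 1::2] + t[1::2, 1::2]))
    return mips

# A 512 x 512 texture yields levels of 256, 128, ... all the way down to 1 x 1.
texture = np.random.rand(512, 512, 3).astype(np.float32)
print([m.shape[0] for m in build_mip_chain(texture)])
```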


This is called mip mapping, and you probably already know about it, but if you don’t there is plenty of information about it on the internet.

Normal mapping is an interesting example because instead of averaging colors we average out vectors!
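A tiny numeric example shows the catch. Averaging unit normals that point in different directions yields a vector that is shorter than unit length, and renormalizing it for the mip texel throws exactly that spread away:

```python
import numpy as np

# Four unit normals fanned out around +Z, like a very bumpy 2x2 texel block.
normals = np.array([[ 0.6,  0.0, 0.8],
                    [-0.6,  0.0, 0.8],
                    [ 0.0,  0.6, 0.8],
                    [ 0.0, -0.6, 0.8]], dtype=np.float32)

avg = normals.mean(axis=0)      # what a plain box filter would compute
print(np.linalg.norm(avg))      # 0.8 -- shorter than unit length!

# Renormalizing gives a valid direction for the mip texel, but the spread
# of the original four normals is silently thrown away.
mip_normal = avg / np.linalg.norm(avg)
print(mip_normal)               # (0, 0, 1)
```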


In the numbers above we can already see the problem.

Where before we had a very rough surface, after averaging down the normals we are suddenly left with no representation of that detail at all.

Integrating information from normal maps into roughness maps

If we think about our progression of shape representation as

Geometry –> Normal maps –> Roughness maps

It would make sense to integrate this lost information into our roughness map.

Our roughness map stores the “roughness” of a surface: a cone (more precisely, a lobe) of directions in which light can reflect at a micro scale. So when our normal information gets lost during downsampling – why not make it part of that micro scale?
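One published recipe in exactly this spirit is Toksvig’s “Mipmapping Normal Maps”: the length of the averaged normal tells us how much the fine normals diverged, and that divergence can be converted into extra roughness. A minimal sketch, assuming a GGX-style roughness value and the common approximation that squared roughnesses add like variances:

```python
import numpy as np

def roughness_with_lost_detail(avg_normal, base_roughness):
    """Fold the spread of downsampled normals into a roughness value.

    avg_normal:     the *unnormalized* average of the finer level's unit normals.
    base_roughness: the material's roughness at full resolution.
    Uses the Toksvig-style variance (1 - |N|) / |N| and adds it to the squared
    roughness -- one published approximation among several.
    """
    length = np.clip(np.linalg.norm(avg_normal), 1e-6, 1.0 - 1e-6)
    variance = (1.0 - length) / length      # 0 when all normals agreed
    return np.sqrt(min(base_roughness ** 2 + variance, 1.0))

# Perfectly aligned normals: the mirror-like base roughness survives.
print(roughness_with_lost_detail(np.array([0.0, 0.0, 1.0]), 0.05))   # ~0.05
# Strongly diverging normals (average length 0.8): roughness jumps to ~0.5.
print(roughness_with_lost_detail(np.array([0.0, 0.0, 0.8]), 0.05))
```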

 

Example: a highly reflective material, for example clean metal, has a normal map with a lot of small bumps.

This could, for example, be a metal with a grate structure. Here is an example render from my engine:

[Image: example render of a reflective metal grate from my engine.]

Now, if we move further and further away, eventually the normal map will be rendered at such a low mip level that we do not see any details any more.

However, the reflectivity of our metal didn’t change. The detail is lost, but the roughness map stays the same, so once we are far enough away we basically have a mirror.


Since the images in my blog are waaaaaaaay too small to make something like this visible, I opted for manually decreasing the LOD distance. [Image: the scene rendered with a forced low mip level, leaving mirror-like reflections.]

Note: this could also happen if LOD levels are reduced because of user settings, so it’s not that far off from some real-world cases.

We basically have a mirror now.

But realistically we should have something like this:

[Image: the expected result, with rough, blurred reflections instead of a mirror.]

So our adjusted roughness would give us an image like this, which is much closer to what we had with the normal map still intact:

[Image: render with the roughness adjusted using the normal-map detail.]
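For completeness, here is a bake-time sketch of the whole idea, under the same assumptions as before (square power-of-two maps, plain box filter, Toksvig-style variance; a real implementation would pick its filters and mapping more carefully):

```python
import numpy as np

def build_roughness_mips(normals, roughness):
    """Bake roughness mips that absorb the normal detail lost at each level.

    normals:   (H, W, 3) unit tangent-space normals, H = W = power of two.
    roughness: (H, W) per-texel roughness at full resolution.
    Returns the list of roughness mips, base level first.
    """
    mips = [roughness]
    while normals.shape[0] > 1:
        # Average 2x2 normal blocks; the average shortens where normals diverge.
        avg = 0.25 * (normals[0::2, 0::2] + normals[1::2, 0::2] +
                      normals[0::2, 1::2] + normals[1::2, 1::2])
        length = np.clip(np.linalg.norm(avg, axis=-1), 1e-6, 1.0 - 1e-6)
        variance = (1.0 - length) / length  # Toksvig-style lost detail
        # Downsample roughness the ordinary way, then add the lost variance.
        r = 0.25 * (roughness[0::2, 0::2] + roughness[1::2, 0::2] +
                    roughness[0::2, 1::2] + roughness[1::2, 1::2])
        roughness = np.sqrt(np.minimum(r * r + variance, 1.0))
        normals = avg / length[..., None]   # renormalize for the next level
        mips.append(roughness)
    return mips
```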

So that’s it from me for today.

I hope you liked the read :)


3 thoughts on “Roughness mip maps based on normal maps?”

1. A method for accomplishing this is described in Valve’s “Advanced VR Rendering”. The method stores normal maps as [X, Y, roughnessX, roughnessY], with each mip level ‘integrating’ the divergence of the normals of the more detailed level. It works quite well for creating anisotropic roughness, as long as the anisotropy is aligned with the two texture axes.
