"You can choose to fight for your ruler or fight not to be ruled."
More pages: 1 2
Shader programming tips #4
Monday, April 27, 2009

The depth buffer is increasingly being used for more than just hidden surface removal. One of the more interesting uses is to find the position of already rendered geometry, for instance in deferred rendering, but also in plain old forward rendering. The easiest way to accomplish this is with something like the following:

// Reconstruct the clip-space position from the screen position and sampled depth
float4 pos = float4(In.Position.xy, sampled_depth, 1.0);
// Transform back through the combined inverse matrix
float4 cpos = mul(pos, inverse_view_proj_scale_bias);
// Perspective divide gives the world-space position
float3 world_pos = cpos.xyz / cpos.w;

The inverse_view_proj_scale_bias matrix is the inverse of the view_proj matrix multiplied by a scale_bias matrix that brings In.Position.xy from [0..w, 0..h] into [-1..1, -1..1] range.
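
For reference, here's a minimal sketch of what such a scale_bias matrix could look like, assuming the row-vector convention used in the mul() above and a render target of size w x h (as floats). The sign of the y row depends on your API's screen-space y direction, so treat this as an illustration rather than a drop-in matrix:

// Hypothetical scale_bias matrix, row-vector convention as in mul(pos, M).
// Maps x from [0..w] to [-1..1] and y from [0..h] to [1..-1]
// (the y flip matches D3D-style screen coordinates).
float4x4 scale_bias = float4x4(
    2.0 / w,  0.0,     0.0, 0.0,
    0.0,     -2.0 / h, 0.0, 0.0,
    0.0,      0.0,     1.0, 0.0,
   -1.0,      1.0,     0.0, 1.0);
// With this convention the combined matrix is
// inverse_view_proj_scale_bias = scale_bias * inverse(view_proj),
// computed once on the application side.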

The same technique can of course also be used to compute the view-space position instead of the world position. In many cases you're only interested in the view Z coordinate though, for instance for fading soft particles, fog distance computations, depth of field, etc. While you could execute the above code and just use the Z coordinate, this is more work than necessary in most cases. Unless you have a non-standard projection matrix, you can do this in just two scalar instructions:

// One mad + one rcp is all it takes to linearize the hyperbolic depth
float view_z = 1.0 / (sampled_depth * ZParams.x + ZParams.y);

ZParams is a float2 constant you pass from the application containing the following values:

ZParams.x = 1.0 / far - 1.0 / near;
ZParams.y = 1.0 / near;
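
If you're curious where these values come from, inverting the standard depth mapping gives them directly. Assuming a typical D3D-style projection (a sketch; other conventions differ only in signs), with n = near and f = far:

$$ d = \frac{f}{f - n}\left(1 - \frac{n}{z}\right) \quad\Rightarrow\quad z = \frac{1}{d\left(\frac{1}{f} - \frac{1}{n}\right) + \frac{1}{n}} $$

The denominator is exactly sampled_depth * ZParams.x + ZParams.y.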

If you're using a reversed projection matrix with Z=0 at far plane and Z=1 at near plane you can just swap near and far in the above computation.
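
As a minimal application-side sketch (HLSL-style syntax; the ComputeZParams name and the reversed flag are just for illustration):

// Hypothetical helper: compute ZParams for depth linearization.
// Pass reversed = true for a projection with Z=1 at the near plane
// and Z=0 at the far plane; near and far simply swap roles.
float2 ComputeZParams(float z_near, float z_far, bool reversed)
{
    if (reversed)
    {
        float tmp = z_near;
        z_near = z_far;
        z_far = tmp;
    }
    return float2(1.0 / z_far - 1.0 / z_near, 1.0 / z_near);
}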

wien
Tuesday, April 28, 2009

Spooky. Just a couple of days ago I was tearing my considerable amount of hair out because I couldn't get this conversion to work properly. I eventually gave up and stored the world position directly in a texture (at huge bandwidth expense). Guess it's time to give it another whirl.

Humus
Tuesday, April 28, 2009

If you want to look at some sample code you can check any of my deferred shading demos.

wien
Tuesday, April 28, 2009

I actually have been sneaking a peek at your code, but still couldn't get it to work.

The problem is that I, for some reason unclear to me at this point, work in a right handed coordinate system. That of course means that I have to reverse all matrix multiplications etc. in most code samples I find, which makes my puny brain hurt, which in turn makes code not work.

Anyway, I got it to work now. Thanks for the tip.

Nadja
Tuesday, April 28, 2009

just wanted to say that I'm graduating on June 12

Jackis
Tuesday, April 28, 2009

I thought it was a rather common approach )) It came to us some years ago when, sitting and thinking about Z-soft particles, I decided to write down the after-projection Z and W.

But we have a different name for your "ZParams" uniform - "NearFarSettings".

BTW, code like this may also be used to calculate the scalar difference in camera units between two hyperbolic depth-map texels (also with only one div).
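
In terms of the ZParams above, that works out like this (a sketch, same convention):

// Difference between the view-space depths of two hyperbolic depth
// samples d1 and d2, using a single division:
// 1/(d1*a + b) - 1/(d2*a + b) = a*(d2 - d1) / ((d1*a + b) * (d2*a + b))
float a = ZParams.x;
float b = ZParams.y;
float z_diff = a * (d2 - d1) / ((d1 * a + b) * (d2 * a + b));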

But this approach only works for perspective projections, not orthographic ones, so if one's renderer works with both of them, special care should be taken.

PS: Good luck, Nadja!

Humus
Tuesday, April 28, 2009

Congratulations, Nadja! We'll see if I'm up in Norrland then. I haven't planned my vacation yet, but it's about time.

Jackis,
I'm sure other people have done the same before. If you're familiar with the math it's not hard to derive, but for less experienced coders it's not straightforward. I've been using this for a while myself; I've had a text file on this site (http://www.humus.name/temp/Linearize%20depth.txt) with basically the same stuff for some time. I just wanted to get it into a blog post for easy reference in the future.

Yeah, it's a special case for normal projection matrices. For orthographic projections you can do the same in just one scalar instruction.
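
For reference, a sketch of the orthographic case, assuming a standard ortho projection mapping [near..far] to [0..1] and the application setting ZParams.x = far - near, ZParams.y = near:

// Orthographic depth is already linear, so a single mad suffices
float view_z = sampled_depth * ZParams.x + ZParams.y;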

Java Cool Dude
Tuesday, April 28, 2009

Another way of obtaining the world position when you're applying a fullscreen effect is to pass the frustum vectors in world space as vertex attributes (four of them in total), and then multiply the interpolated direction by the linear depth that you previously sampled and computed in your fragment program.
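
Something like this in the fragment shader (a sketch; assumes frustum_dir is the interpolated world-space vector from the camera position to the corresponding far-plane corner):

// Linear view-space depth, as in the tip above
float view_z = 1.0 / (sampled_depth * ZParams.x + ZParams.y);
// frustum_dir reaches the far plane at view_z == far, so scale by view_z / far
float3 world_pos = camera_position + In.frustum_dir * (view_z / far);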

Jackis
Tuesday, April 28, 2009

Yeah, parallel projection rules the world

Anyway, big thanks for the tips you post here!
Almost all of the information is very interesting (for example, I was really impressed by how badly fans affect performance).
The main thing is that you worked at one of the chip vendors, so you know many more cool architectural tips than can be found on the net. So I'm always waiting for your next tip!!!
