"We hold these truths to be self evident, that all men are created equal"
- Martin Luther King
Duke Nukem Forever
Thursday, May 7, 2009 | Permalink

Duke Nukem Whenever
Duke Nukem Taking Forever
Duke Nukem If Ever
Duke Nukem Never, and that's official.

[ 3 comments | Last comment by Micke (2009-05-20 01:07:34) ]

The most horrible interface ever
Wednesday, May 6, 2009 | Permalink

Here's one vote for the Win32 API's handling of cursors. Trying to squeeze a custom cursor through Windows' tight intestines and not getting screwed in the process in one way or another is easier said than done.

So you have a custom cursor and you called SetCursor() to use it. What if you want to hide it?

ShowCursor(FALSE)?

*BZZZZTT* Wrong answer!

SetCursor(NULL)?

*BZZZZTT* Wrong answer!

Correct answer:

PostMessage(hwnd, WM_SETCURSOR, (WPARAM) hwnd, HTCLIENT);

and then

case WM_SETCURSOR:
    SetCursor(NULL);
    return TRUE;

No, just calling SetCursor(NULL) directly doesn't work. You really need to do it in response to WM_SETCURSOR. Or at least it doesn't take effect until you move the mouse or click, or until Windows randomly posts a WM_SETCURSOR message to your window for absolutely no reason. What the heck, does that happen just to cover up that things aren't really working under the hood?
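
For completeness, here's a minimal sketch of the whole dance. The g_hideCursor flag and the HideCursor() helper are my own naming, not part of any API:

#include <windows.h>

static HCURSOR g_customCursor = NULL; // created elsewhere, e.g. with LoadCursor()
static bool g_hideCursor = false;

// Toggle the cursor. The PostMessage() kick makes Windows re-evaluate the
// cursor immediately instead of waiting for the next mouse move.
void HideCursor(HWND hwnd, bool hide)
{
    g_hideCursor = hide;
    PostMessage(hwnd, WM_SETCURSOR, (WPARAM) hwnd, HTCLIENT);
}

LRESULT CALLBACK WndProc(HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    switch (message)
    {
        case WM_SETCURSOR:
            // Only override the cursor in the client area.
            if (LOWORD(lParam) == HTCLIENT)
            {
                SetCursor(g_hideCursor ? NULL : g_customCursor);
                return TRUE;
            }
            break;
    }
    return DefWindowProc(hwnd, message, wParam, lParam);
}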

[ 7 comments | Last comment by Humus (2009-05-11 22:21:33) ]

Shader programming tips #5
Friday, May 1, 2009 | Permalink

In the comments to Shader programming tips #4, Java Cool Dude mentioned that for fullscreen passes you can pass an interpolated direction vector and use the linearized depth to compute the world position. Basically, with view_z computed using the math in #4, you compute:

float3 world_pos = cam_pos + In.dir * view_z;

This amounts to only two scalar operations (for view_z) and one float3 mad to carry out.

What about regular non-fullscreen passes? Turns out you can do that as well using a nice DX10 feature. The problem is that we need to interpolate the direction vector in screen space, rather than doing perspective correction. For screen aligned primitives it's the same thing, so it works out in this case, but for "normal" mesh data it's a completely different story. In DX10, and also in GLSL, there's now a noperspective keyword you can add to your interpolator, which changes the interpolation mode to eliminate the perspective correction, thus giving you an interpolation that's linear in screen space instead.

How do we compute the direction vector? Just take the position you're writing out from the vertex shader and push it to the far plane. Depending on whether you're using a reversed projection matrix or not you either want Z=0 or Z=1, which can be done using float4(Out.position.xy, 0, Out.position.w) or Out.position.xyww respectively in homogeneous coordinates. Transform this vector with the inverse view_proj matrix to get the world position of the far plane equivalent of the point. Now subtract cam_pos from this and that's the direction vector. Instead of subtracting cam_pos in the vertex shader you can just bake that into the same matrix and get it for free. The resulting vertex shader snippet is something like this:

float4 dir = mul(view_proj_inv, Out.position.xyww);
Out.dir = dir.xyz / dir.w;

Finally, note that view_z as computed in #4 goes from 0 at the camera to far_plane at the far clipping plane. For this computation we need it to be 1.0 at the far clipping plane, which can be done by simply multiplying ZParams by far_plane. Alternatively you can divide Out.dir by far_plane in the vertex shader.
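
Putting it all together, a sketch of the full path could look like this. The VsOut struct and variable names are my own, I'm assuming a standard (non-reversed) projection and that the cam_pos subtraction has been baked into view_proj_inv as described above, and I divide the direction by far_plane so view_z from #4 can be used unmodified:

struct VsOut
{
    float4 position : SV_Position;
    noperspective float3 dir : TEXCOORD0; // interpolated linearly in screen space
};

// Vertex shader: push the position to the far plane (hence .xyww) and
// transform back with the inverse view_proj matrix.
float4 dir = mul(view_proj_inv, Out.position.xyww);
Out.dir = dir.xyz / (dir.w * far_plane);

// Pixel shader: view_z as in tips #4, then reconstruct the world position.
float view_z = 1.0 / (sampled_depth * ZParams.x + ZParams.y);
float3 world_pos = cam_pos + In.dir * view_z;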

[ 0 comments ]

Happy Anniversary AMD!
Friday, May 1, 2009 | Permalink

Today AMD turns 40. And what's a 40-year-old without a bit of a crisis? Congratulations and good luck in the future!

I worked for AMD for about a year. I came originally from ATI but became part of AMD due to the merger in 2006. I continued with the same kind of job under AMD until the fall of 2007, after which I moved home to Sweden again and joined Avalanche Studios. Greetings to all my old friends who are still at AMD! Keep up the good work!

[ 0 comments ]

Today's whine
Wednesday, April 29, 2009 | Permalink

As I touched on briefly in my post on piracy, we live in a globalized world, and regionalizing digital products makes absolutely no sense in this day and age. This is not something that the average broadcast corporation understands, but some do. Or did. It's with great disappointment I see that Comedy Central has now implemented geofiltering, perhaps the most ironic and moronic kind of technology ever invented, so now my favorite show The Daily Show with Jon Stewart is no longer "available" in my "area". What they are hoping to achieve with this is beyond my comprehension. What the result is going to be is obvious though: their shows will get a smaller audience, and those who care enough will pirate them. And the pirated versions are of course stripped of all the commercials, the bread and butter of broadcasting corporations.

Oh well, in another 10-15 years the average CEO will be of my generation. By then I hope this kind of nonsense finally comes to an end.

[ 8 comments | Last comment by sqrt[-1] (2009-05-08 12:34:37) ]

Hmmm ...
Tuesday, April 28, 2009 | Permalink

[Screenshot from Nvidia's webshop]

Some leftover debug code in Nvidia's webshop?
Or perhaps a warning regarding how truthful the information on the site is?

The screenshot is from last night. It did not happen when I tried today.

[ 1 comments | Last comment by drp (2009-04-30 02:39:41) ]

Shader programming tips #4
Monday, April 27, 2009 | Permalink

The depth buffer is increasingly being used for more than just hidden surface removal. One of the more interesting uses is to find the position of already rendered geometry, for instance in deferred rendering, but also in plain old forward rendering. The easiest way to accomplish this is something like the following:

float4 pos = float4(In.Position.xy, sampled_depth, 1.0);
float4 cpos = mul(pos, inverse_view_proj_scale_bias);
float3 world_pos = cpos.xyz / cpos.w;

The inverse_view_proj_scale_bias matrix is the inverse of the view_proj matrix multiplied with a scale_bias matrix that brings In.Position.xy from [0..w, 0..h] into [-1..1, -1..1] range.
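
For reference, with the mul(pos, matrix) row-vector convention above, the scale_bias matrix could look something like this. This is a sketch assuming D3D-style screen coordinates with the origin at the top-left (so y is flipped), with w and h being the viewport dimensions:

// Maps x from [0..w] to [-1..1] and y from [0..h] to [1..-1],
// leaving z (the sampled depth) and w untouched.
float4x4 scale_bias = float4x4(
    2.0 / w,  0.0,      0.0, 0.0,
    0.0,     -2.0 / h,  0.0, 0.0,
    0.0,      0.0,      1.0, 0.0,
   -1.0,      1.0,      0.0, 1.0);

// Combined once on the CPU:
// inverse_view_proj_scale_bias = scale_bias * inverse(view_proj)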

The same technique can of course also be used to compute the view position instead of the world position. In many cases you're only interested in the view Z coordinate though, for instance for fading soft particles, fog distance computations, depth of field etc. While you could execute the above code and just use the Z coordinate this is more work than necessary in most cases. Unless you have a non-standard projection matrix you can do this in just two scalar instructions:

float view_z = 1.0 / (sampled_depth * ZParams.x + ZParams.y);

ZParams is a float2 constant you pass from the application containing the following values:

ZParams.x = 1.0 / far - 1.0 / near;
ZParams.y = 1.0 / near;

If you're using a reversed projection matrix with Z=0 at far plane and Z=1 at near plane you can just swap near and far in the above computation.
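
On the application side the setup could look something like this. A small C++ sketch, where the helper name and the plain float array are my own; use whatever constant-upload path you have:

// Computes ZParams for a standard projection. For a reversed projection
// (Z=0 at the far plane, Z=1 at the near plane), call this with
// near_plane and far_plane swapped.
void ComputeZParams(float zparams[2], float near_plane, float far_plane)
{
    zparams[0] = 1.0f / far_plane - 1.0f / near_plane;
    zparams[1] = 1.0f / near_plane;
}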

[ 9 comments | Last comment by Greg (2009-05-01 19:33:02) ]

A couple of notes about Z
Tuesday, April 21, 2009 | Permalink

It is often said that Z is non-linear, whereas W is linear. This gives a W-buffer a uniformly distributed resolution across the view frustum, whereas a Z-buffer has better precision close up and poor precision in the distance. Given that objects don't normally get thicker just because they are farther away, a W-buffer generally has fewer artifacts than a Z-buffer at the same number of bits. In the past some hardware supported a W-buffer, but these days it's considered deprecated and hardware doesn't implement it anymore. Why, isn't it better? Not really. Here's why:

While W is linear in view space it's not linear in screen space. Z, which is non-linear in view space, is on the other hand linear in screen space. This fact can be observed by a simple shader in DX10:

// Visualize the screen-space gradients of Z, scaled up to be visible.
float dx = ddx(In.position.z);
float dy = ddy(In.position.z);
return 1000.0 * float4(abs(dx), abs(dy), 0, 0);

Here In.position is SV_Position. The result looks something like this:

[Screenshot: every primitive rendered in a single flat color, since the Z gradient is constant across each primitive]

Note how all surfaces appear single colored. The difference in Z pixel-to-pixel is the same across any given primitive. This matters a lot to hardware. One reason is that interpolating Z is cheaper than interpolating W. Z does not have to be perspective corrected. With cheaper units in hardware you can reject a larger number of pixels per cycle with the same transistor budget. This of course matters a lot for pre-Z passes and shadow maps. With modern hardware linearity in screen space also turned out to be a very useful property for Z optimizations. Given that the gradient is constant across the primitive it's also relatively easy to compute the exact depth range within a tile for Hi-Z culling. It also means techniques such as Z-compression are possible. With a constant Z delta in X and Y you don't need to store a lot of information to be able to fully recover all Z values in a tile, provided that the primitive covered the entire tile.

These days the depth buffer is increasingly being used for other purposes than just hidden surface removal. Being linear in screen space turns out to be a very desirable property for post-processing. Assume for instance that you want to do edge detection on the depth buffer, perhaps for antialiasing by blurring edges. This is easily done by comparing a pixel's depth with its neighbors' depths. With Z values you have constant pixel-to-pixel deltas, except for across edges of course. This is easy to detect by comparing the delta to the left and to the right, and if they don't match (with some epsilon) you crossed an edge. And then of course the same with up-down and diagonally as well. This way you can also reject pixels that don't belong to the same surface if you implement say a blur filter but don't want to blur across edges, for instance for smoothing out artifacts in screen space effects, such as SSAO with relatively sparse sampling.
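
As an illustration, the horizontal part of that edge test could look something like this in DX10 HLSL. The resource and constant names are mine, not from any particular implementation:

// Hypothetical resources and constants.
Texture2D DepthTex;
SamplerState PointClamp;
float2 inv_size; // 1.0 / render target dimensions
float epsilon;   // edge threshold, tweak to taste

// Sample the depth of the pixel and its horizontal neighbors.
float z       = DepthTex.Sample(PointClamp, uv).r;
float z_left  = DepthTex.Sample(PointClamp, uv - float2(inv_size.x, 0)).r;
float z_right = DepthTex.Sample(PointClamp, uv + float2(inv_size.x, 0)).r;

// On a flat surface the pixel-to-pixel delta is constant, so the left and
// right deltas match. A mismatch beyond some epsilon means we crossed an edge.
bool edge_x = abs((z - z_left) - (z_right - z)) > epsilon;

// Repeat for the vertical and diagonal directions as described above.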

What about the precision in view space when doing hidden surface removal then, which is still the main use of a depth buffer? You can regain most of the lost precision compared to W-buffering by switching to a floating point depth buffer. This way you get two types of non-linearities that to a large extent cancel each other out: that from Z and that from a floating point representation. For this to work you have to flip the depth buffer so that the far plane is 0.0 and the near plane 1.0, which is recommended even if you're using a fixed point buffer since it also improves the precision of the math during transformation. You also have to switch the depth test from LESS to GREATER. If you're relying on a library function to compute your projection matrix, for instance D3DXMatrixPerspectiveFovLH(), the easiest way to accomplish this is to just swap the near and far parameters.
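
In D3D9 terms, using the D3DX function mentioned above, the flip could look something like this. A sketch, where device, fov_y, aspect, near_plane and far_plane are assumed to exist in your code:

// Swap near and far so the far plane ends up at Z=0 and the near plane at Z=1.
D3DXMATRIX proj;
D3DXMatrixPerspectiveFovLH(&proj, fov_y, aspect, far_plane, near_plane);

// Flip the depth test accordingly, and clear depth to 0.0 instead of 1.0.
device->SetRenderState(D3DRS_ZFUNC, D3DCMP_GREATER);
device->Clear(0, NULL, D3DCLEAR_ZBUFFER, 0, 0.0f, 0);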

Z ya!

[ 12 comments | Last comment by crazii (2016-06-12 05:38:21) ]
