Another HDR camera
Thursday, February 19, 2009 | Permalink
Whoohaa, wouldn't have expected another company to get into the HDR field so quickly, but Ricoh just announced the CX1, which also sports this feature. They are taking a different approach though. Fujifilm went with a native HDR sensor, whereas Ricoh does the traditional combined-exposures technique, except the camera does it automatically instead of relying on the user to do a lot of manual work at the computer to combine the images. It'll be interesting to see how well this turns out. It could potentially result in better image quality, but it could also be prone to mismatched exposures if you're shooting handheld, depending on the delay between the shots. It would also be nice if the camera did more than just two shots for greater dynamic range. With two vendors launching their HDR cameras so close to each other I suspect we'll see more coming in the near future. In a few years we may be in a position where doing exposure digitally as a postprocess becomes as natural as doing white balance digitally.
Speaking of white balance, another very interesting feature of the camera is "Multi-Pattern Auto White Balance". It's a quite common problem that different areas of a scene are lit by different light sources and require different white balance to look good, for instance when shooting indoors with a window visible. The indoor and outdoor parts of the image will require different white balance settings due to the difference in light temperature. But cameras apply one white balance setting to the whole scene, so either the outdoor part will look very blue or the indoor part very yellow, or neither will look correct. This camera is supposedly capable of doing white balance locally. How well this works will be very interesting to see. From the description it sounds like it's applying white balance on a tile basis, so I suppose areas with pixels affected by both light sources may still look bad.
[ 0 comments ]
Monday, February 16, 2009 | Permalink
HDR has been kind of a buzzword in photography for the last few years. Some of this may be related to the fact that HDR has also been a much talked-about subject in GPU rendering over the same period. There are techniques for taking HDR photos with standard camera equipment using multiple exposures, and we've also seen photographic packages such as Photoshop adding this functionality. Meanwhile there have from time to time been talks about new sensor technology for HDR photography, although little has seen the light of day.
Last September Fujifilm announced
their new Super CCD EXR sensor promising improved dynamic range. Now that's something we've heard before, so I didn't pay much attention to it. Just recently they released the F200EXR
camera based on this sensor. It appears this might just be the first HDR-capable camera on the market. I don't think it'll produce actual HDR images, but it can capture an 800% expanded range, or a 0..8 range if you will, tonemapped to a nice-looking image where other cameras would either have to underexpose or get blown-out highlights. The camera accomplishes this through pixel binning, where different sensor pixels capture different exposure ranges. As a result, you'll only get a 6MP image instead of 12MP when using this technique, a tradeoff I'm more than willing to make. 12MP is already far beyond what's meaningful to put into a camera anyway, particularly a compact.
Since the camera is new there aren't many reviews for it out there, but I've at least found this Czech site
which has some samples. If those are representative of what this camera can do this may very well be my next compact camera.
A word of caution though. Looking in the EXIF tags of the pictures it seems they aren't all straight from the camera. Some have the camera name listed, others have "Adobe Lightroom", suggesting that they may have been processed in some way. The first sample pair lists the camera name though, so I'm going to assume at least those are unprocessed.
In any case, this is a very exciting development. I hope to see similar technology from other vendors as well, and I would love to see this stuff in an SLR.
[ 10 comments | Last comment by lone (2009-03-28 16:24:49) ]
Shader programming tips #3
Monday, February 9, 2009 | Permalink
Multiplying a vector by a matrix is one of the most common tasks you do in graphics programming. A full matrix multiplication is generally four float4 vector instructions. Depending on whether you have a row major or column major matrix, and whether you multiply the vector from the left or right, the result is either a DP4-DP4-DP4-DP4 or MUL-MAD-MAD-MAD sequence.
In the vast majority of cases the w component of the vector is 1.0, and in this case you can optimize the transformation down to three instructions. For this to work, declare your matrix as row_major. If you were previously passing a column-major matrix you'll need to transpose it before passing it to the shader. You need a matrix that works with mul(vertex, matrix) when declared as row_major. Then you can do something like this to accomplish the transformation in three MAD instructions:
pos = view_proj[2] * vertex.z + view_proj[3];
pos += view_proj[0] * vertex.x;
pos += view_proj[1] * vertex.y;
It should be mentioned that vs_3_0 has a read port limitation, and since the first line is using two different constant registers, HLSL will put a MOV instruction in there as well. But the hardware can be more flexible (for instance ATI cards are). In vs_4_0 there's no such limitation and HLSL will generate a three instruction sequence.
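Putting the tip together, here is a minimal vertex shader sketch of the technique. The constant and semantic names are illustrative, and I'm assuming a DX10-style compilation target where the read port limitation mentioned above doesn't apply:

```hlsl
// Sketch: transform a position whose w is known to be 1.0 using
// three MAD-style instructions instead of four DP4s.
row_major float4x4 view_proj;

float4 main(float4 vertex: POSITION): SV_Position
{
    // Expanding mul(vertex, view_proj) with w == 1.0 gives:
    //   x*row0 + y*row1 + z*row2 + row3
    float4 pos = view_proj[2] * vertex.z + view_proj[3]; // MAD
    pos += view_proj[0] * vertex.x;                      // MAD
    pos += view_proj[1] * vertex.y;                      // MAD
    return pos;
}
```

Note that the row3 term needs no multiply at all since it's scaled by the implicit w of 1.0, which is where the saved instruction comes from.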
[ 0 comments ]
Ceiling cat is watching me code
Sunday, February 8, 2009 | Permalink
I can has ceiling cat? Yez I can! Also, I can has stikky fingurz from all teh glouh.
Paper model available here. Print, glue and attach to ceiling.
[ 6 comments | Last comment by phillyx (2009-03-01 23:29:38) ]
Shader programming tips #2
Wednesday, February 4, 2009 | Permalink
Closely related to what I mentioned in tips #1, it's of great importance to use parentheses properly. HLSL and GLSL evaluate expressions left to right, just like C/C++. If you're multiplying vectors and scalars together, the number of operations generated may differ a lot. Consider this code:
float4 result = In.color.rgba * In.intensity * 1.7;
This will result in the color vector being multiplied with the intensity scalar, which is 4 scalar operations. The result is then multiplied with 1.7, which is another 4 scalar operations, for a total of 8. Now try this:
float4 result = In.color.rgba * (In.intensity * 1.7);
Intensity is now multiplied by 1.7, which is a single operation, and then the result is multiplied with color, which is another 4, for a total of five scalar operations. A saving of three instructions by merely placing parentheses in the code.
Shouldn't the compiler be smart enough to figure this out by itself? Not really. HLSL will sometimes merge constants when it considers it safe to do so. However, when dealing with variables whose values have unknown range, the compiler cannot assume that multiplying in another order will give the same result. For instance 1e-20 * 1e30 * 1e10 will result in 1e20 if you multiply left to right, whereas 1e-20 * (1e30 * 1e10) will overflow in the intermediate product and return INF.
In general I recommend that you place parentheses even around compile-time constants to make sure the compiler merges them when appropriate.
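As a sketch of that last recommendation (the constant values here are just illustrative):

```hlsl
// Without parentheses: each multiply is applied across the full
// float4, left to right, so both constants cost 4 scalar ops each.
float4 slow = In.color.rgba * In.intensity * 0.5 * 1.7;

// With parentheses: 0.5 * 1.7 can be folded into one constant at
// compile time, and the scalar product with intensity is formed
// first, leaving a single 4-wide multiply at the end.
float4 fast = In.color.rgba * (In.intensity * (0.5 * 1.7));
```

Both lines compute the same value for any reasonable input range; the parenthesized form just hands the compiler a grouping it is allowed to exploit.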
[ 1 comment | Last comment by ruysch (2010-01-02 16:32:38) ]
Sunday, February 1, 2009 | Permalink
I was sent a link to this blog by a co-worker recently:
I Get Your Fail
High recognition factor for anyone in game development.
And good entertainment even if you're not.
[ 1 comment | Last comment by dvoid (2009-02-01 14:32:28) ]
Shader programming tips #1
Thursday, January 29, 2009 | Permalink
DX9 generation hardware was largely vector based. The DX10 generation hardware on the other hand is generally scalar based. This is true for both ATI and Nvidia cards. The Nvidia chips are fully scalar, and while the ATI chips still have explicit parallelism the 5 scalars within an instruction slot don't need to perform the same operation or operate on the same registers. This is important to remember and should affect how you write shader code. Take for instance this simple diffuse lighting computation:
float3 lightVec = normalize(In.lightVec);
float3 normal = normalize(In.normal);
float diffuse = saturate(dot(lightVec, normal));
A normalize is essentially a DP3-RSQ-MUL sequence. DP3 and MUL are 3-way vector instructions and RSQ is scalar. The shader above will thus be 3 x DP3 + 2 x MUL + 2 x RSQ for a total of 17 scalar operations.
Now instead of multiplying the RSQ values into the vectors, why don't we just multiply those scalars into the final scalar instead? Then we would get this shader:
float lightVecRSQ = rsqrt(dot(In.lightVec, In.lightVec));
float normalRSQ = rsqrt(dot(In.normal, In.normal));
float diffuse = saturate(dot(In.lightVec, In.normal) * lightVecRSQ * normalRSQ);
This replaces two vector multiplications with two scalar multiplications, saving us 4 scalar operations. The math savvy may also recognize that rsqrt(x) * rsqrt(y) = rsqrt(x * y), so we can simplify it to:
float lightVecSQ = dot(In.lightVec, In.lightVec);
float normalSQ = dot(In.normal, In.normal);
float diffuse = saturate(dot(In.lightVec, In.normal) * rsqrt(lightVecSQ * normalSQ));
We are now down to 12 operations instead of 17. Checking things out in GPU Shader Analyzer
showed that the final instruction count is 5 in both cases, but the latter shader leaves more empty scalars which you can fill with other useful work.
It should be mentioned that while this gives the best benefit to modern DX10 cards, it was always good to do these kinds of scalarizations. It often helps older cards too. For instance on the R300-R580 generation it often meant more instructions could fit into the scalar pipe (those chips were vec3+scalar) instead of occupying the vector pipe.
[ 1 comment | Last comment by sqrt[-1] (2009-01-31 14:32:40) ]
Custom alpha to coverage
Sunday, January 25, 2009 | Permalink
In DX10.1 you can write a custom sample mask to an SV_Coverage output. This nice little feature hasn't exactly received a lot of media coverage (haha!). Basically it's a uint where every bit tells which samples in the multisample render target the output will be written to. For instance if you set it to 0x3 the output will be written to samples 0 and 1, leaving the rest of the samples unmodified.
What can you use it for? The most obvious thing is to create a custom alpha-to-coverage. Alpha-to-coverage simply converts the output alpha into a sample mask. If you can provide a better sample mask than the hardware, you'll get better quality. And quite frankly, the hardware implementations of alpha-to-coverage haven't exactly impressed us with their quality. You can often see very obvious and repetitive dither patterns.
So I made a simple test with a pseudo-random value based on screen-space position. The left image is the standard alpha-to-coverage on an HD 3870x2, and on the right my custom alpha-to-coverage.
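A minimal sketch of what such a pixel shader might look like. The hash function and all names here are my own illustration of the general idea, not necessarily what was used in the test above, and it assumes a 4x MSAA target and a DX10.1 (ps_4_1) compilation target:

```hlsl
// Custom alpha-to-coverage sketch: derive the sample mask from
// alpha plus a screen-space pseudo-random offset, instead of the
// hardware's fixed dither pattern.
Texture2D Tex;
SamplerState Filter;

float hash(float2 p)
{
    // Cheap screen-space pseudo-random value in [0, 1).
    return frac(sin(dot(p, float2(12.9898, 78.233))) * 43758.5453);
}

float4 main(float4 pos: SV_Position, float2 texCoord: TEXCOORD0,
            out uint coverage: SV_Coverage): SV_Target
{
    float4 color = Tex.Sample(Filter, texCoord);

    // Number of samples to cover out of 4. The random offset makes
    // the rounding threshold vary per pixel, breaking up the
    // repetitive patterns of the fixed-function dither.
    uint count = (uint) (color.a * 4.0f + hash(pos.xy));

    // Set the low 'count' bits, e.g. 3 covered samples -> 0x7.
    coverage = (1u << min(count, 4u)) - 1;

    return color;
}
```

A real implementation might also rotate which bits get set per pixel so the same samples aren't always covered first, but the bit-count idea above is the core of it.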
[ 4 comments | Last comment by Dr Black Adder (2011-10-14 01:08:17) ]