"CS has destroyed my life. I can't shower anymore because I think my neighbors use wallhack."
More pages: 1 2 3
New Radeon SDK
Tuesday, June 26, 2007 | Permalink

AMD has just released a new SDK with some cool DX10 samples, of which I wrote two. There's also a programming and optimization guide for the HD 2000 series (also written by me) plus some other cool stuff. Check it out!

In other news, I just got back from a month-long trip to Sweden and Denmark. During my time away I didn't have a chance to write any demos or make any other updates to this site. But now I'm back.
Oh, there's one update that I actually posted a while ago but didn't write a news item about: I added Mac support to the framework, so the OpenGL demos should now compile and work on the Mac too. You'll have to create the Xcode projects yourself though, as I no longer have the Mac and thus can't create any project files for it. The reason I didn't mention this here earlier is that the Mac I bought was a gift for my niece and I wanted it to be a surprise, so I didn't want to give her any hints in case she checked my site. So the Mac support has actually been there for a bit over a month now.

Another update is that I finally got an HD 2900 XT card for my own system at home, so now I just need to start coding some DX10 demos.

fellix
Sunday, July 1, 2007

The G80 is not only slow in the GS but also, more or less, in trivial vertex processing. The large batch size is mostly to blame...

Humus, do you have any intentions, even remote, of utilizing the HW Tessellator in some cool demo, probably in OpenGL code?

Humus
Sunday, July 1, 2007

Greg, only Nvidia would know for sure, but from our testing it appears to be inherent to the G80 architecture. I don't think improved drivers will be able to salvage it.

fellix, we'll see. It's not finalized yet, so it's not something I can start working on right now, but eventually I guess I'll do something with it.

dpoon
Monday, July 2, 2007

I can confirm the Nvidia GS problem. I ran the AMD DX10 samples on an HD 2900 XT and got around 20-40 fps, then ran the same samples on an 8800 GTS and got maybe 1.5 fps. Both systems are almost identical in specs except for the graphics card, and all the latest drivers, updates, etc. were used.

eXile
Monday, July 2, 2007

That's strange... normally nVidia doesn't build crap :P

Ningu
Monday, July 2, 2007

Nonsense,
The DX10 samples don't run well on Nvidia simply because they were made by AMD, for AMD.
I'm sure Nvidia could make lovely DX10 demos that run just as well on their 8800.
(In fact, they do.) For example: http://www.nzone.com/object/nzone_cascades_home.html

But anywho, nice to see that you've been productive, Humus!

Humus
Monday, July 2, 2007

Well, of course they were optimized for AMD, but that doesn't mean you could make something similar run very fast on Nvidia if you were to optimize for them. In fact, I experimented with that, and sure, they might be able to run the GI sample at maybe 5 fps if I tuned the rendering to be more "DX9 style" as a workaround for their slow GS, but that's still nowhere close to the AMD performance. Besides, what's the point of DX10 if you have to code in DX9 style to make it run fast? For the HD 2900 the DX10-style rendering runs faster, as you would expect. The truth is that the G80's GS performance is poor except for trivial shaders. If you do any form of geometry amplification (which is the main motivation behind the GS in the first place), the performance drops off dramatically.
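To make "geometry amplification" concrete, here's a minimal HLSL sketch of a DX10 geometry shader that turns each input triangle into four. The struct, names and offsets are purely illustrative and not taken from the SDK samples:

    struct GsVertex
    {
        float4 position : SV_Position;
        float3 normal   : NORMAL;
    };

    // Worst-case output: 4 copies * 3 vertices = 12.
    [maxvertexcount(12)]
    void gsAmplify(triangle GsVertex input[3], inout TriangleStream<GsVertex> outStream)
    {
        for (int copy = 0; copy < 4; copy++)
        {
            for (int i = 0; i < 3; i++)
            {
                GsVertex v = input[i];
                // Illustrative offset so the copies don't all land on top of each other.
                v.position.xyz += v.normal * 0.05f * copy;
                outStream.Append(v);
            }
            // End the strip so each copy becomes an independent triangle.
            outStream.RestartStrip();
        }
    }

Raising the amplification factor (and maxvertexcount with it) is a quick way to reproduce the kind of performance gap dpoon describes above.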

fellix
Monday, July 2, 2007

Given the much deeper market penetration of the G80 products, does this mean that the initial DX10 titles will largely avoid rich and dynamic procedural environments, and thus under-utilize the vast potential of the R600 in this area? And we still haven't even touched on the HW Tessellator here, as it is not a mandatory part of the API at this stage.

Nadja
Friday, July 6, 2007

Aren't you going to post some pictures from your trip soon? XD
