Hardware

I’m going to quote from my previous article to speed things along:
“Let’s get right to the numbers. The Radeon 8500 is a 2X/4X AGP, single-GPU card with 64 MB of 275 MHz DDR RAM. The GPU itself is clocked at 275 MHz, as is the RAM. Using “a combination of brute force and intelligence,” according to our interview with ATI staff, they have managed to enhance every area of this card’s performance over the previous architecture.
The fundamental areas of enhancement over the previous Radeon models are as follows: the 8500 features a faster clock speed with dramatically increased memory bandwidth, and a 2x speed increase overall according to Winbench benchmarks. The 8500 GPU is built on a 0.15-micron process and is more complex than a Pentium 3 and a Pentium 4 chip combined – over 60 million transistors. This monster processor is coupled with Hyper-Z II technology and 64 MB of DDR RAM for a staggering – nay, unbelievable! – 12 GB per second memory bandwidth between the GPU and the DDR RAM.
The result? A claimed fill rate of one billion textured pixels per second. Yes, that is “billion” as in nine zeroes. That translates to 62.5 million triangles processed per second, well over twice what the already-impressive Radeon card could deliver.”
I said it six months ago, but it is still quite true.
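The quoted figures are easy to sanity-check with some back-of-the-envelope arithmetic. The sketch below is my own calculation, not ATI’s: it assumes the 8500’s known 128-bit memory bus and four pixel pipelines, with DDR transferring data on both clock edges.

```python
# Back-of-the-envelope check of the quoted specs (my numbers, not ATI's).
# Bus width and pipeline count are assumptions based on the retail 8500.

MEM_CLOCK_HZ = 275e6       # quoted DDR RAM clock
BUS_WIDTH_BYTES = 128 / 8  # assumed 128-bit bus -> 16 bytes per transfer
DDR_FACTOR = 2             # DDR moves data on both clock edges

raw_bandwidth = MEM_CLOCK_HZ * BUS_WIDTH_BYTES * DDR_FACTOR
print(f"raw bandwidth: {raw_bandwidth / 1e9:.1f} GB/s")  # 8.8 GB/s

CORE_CLOCK_HZ = 275e6      # retail 8500 core clock
PIXEL_PIPELINES = 4        # assumed four pixel pipelines

fill_rate = CORE_CLOCK_HZ * PIXEL_PIPELINES
print(f"fill rate: {fill_rate / 1e9:.1f} Gpixels/s")     # 1.1 Gpixels/s
```

Note that the raw figure comes out to 8.8 GB/s, not 12; the higher number in the quote is presumably ATI’s *effective* bandwidth claim, which folds in the savings from Hyper-Z II’s compression and culling. The fill-rate math, on the other hand, lands right at the claimed “one billion textured pixels per second.”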
The Test Rig

All tests were done with my Quicksilver 733 MHz system, 896 MB of RAM, Mac OS 10.1.2 and Mac OS 9.2.2; I used the latest available version of OpenGL, 1.2.2. Overall I am extremely happy with this system as a gaming rig, and there is no game that I feel is unduly limited by its performance (with the exception of Sacrifice, but that is another story). I run with a minimal extension set, all networking turned off, and VM off (under OS 9.x). I expressly did not use any OS X tweaks or “renice” utilities to enhance performance under that OS, but to minimize the effects of other applications on game performance I ran all OS X tests right after a reboot to clear out the VM cache and task list.
The video cards tested include the Apple-built GeForce 2 MX that shipped with this system, a retail Radeon AGP 32 MB (used in my Radeon AGP review) and a production (not prototype!) version of the ATI Radeon 8500.
SmoothVision and TruForm

Let’s get right to the pretty stuff. The 8500 supports hardware-assisted full-scene anti-aliasing on a per-game basis, meaning that each game must be patched to enable FSAA support. This will not always be the case; I was assured repeatedly by ATI engineers that their goal is to eventually include FSAA settings in the ATI Displays control panel.
A generation past 3dfx’s brute-force approach to FSAA, SmoothVision uses “multisampling” techniques to greatly reduce the overhead associated with AA calculation, thus avoiding the catastrophic drop in frame rate that the 3dfx cards exhibited. While the V5 smoothed the screen by literally drawing it four times and averaging the results, the smarter multisampling methods use formulas to “sample” a random subset of pixels on the screen and their difference in color from surrounding pixels, and then average those results.
Not only does this multisampling make for sharper, less “blurry” results, but in my own preliminary tests both 2x and 4x FSAA have almost no appreciable effect on average frame rate. You read that correctly: there is no loss of speed whatsoever at resolutions below 1024x768. At 1280x1024 and above, there is no point in using FSAA, as the naturally blurry nature of the phosphor on your CRT would hide any AA effect.
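To see why the V5’s approach was so expensive, here is a minimal toy sketch of brute-force 4x supersampling: the “scene” is just a hard vertical edge, and each pixel is evaluated at four sub-pixel positions and averaged. This is an illustration of the general technique only, not 3dfx’s or ATI’s actual algorithm; the scene function, sample offsets, and coordinates are all invented for the example.

```python
# Toy sketch of brute-force 4x supersampling (illustrative only).
# The V5 effectively did this for every pixel, quadrupling the work;
# multisampling avoids most of that cost.

def coverage(x, y):
    """Toy 'scene': pure white left of a hard vertical edge at x = 2.5."""
    return 1.0 if x < 2.5 else 0.0

def supersample_4x(px, py):
    """Evaluate the scene at four sub-pixel offsets and average them."""
    offsets = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
    samples = [coverage(px + dx, py + dy) for dx, dy in offsets]
    return sum(samples) / len(samples)

# A pixel straddling the edge gets an intermediate shade
# instead of a jagged step:
print(supersample_4x(2.0, 2.0))  # 0.5 -> half-covered edge pixel
print(supersample_4x(0.0, 0.0))  # 1.0 -> fully inside the white region
```

The point of the sketch is the cost: four full scene evaluations per pixel, which is exactly why the V5’s frame rate cratered with FSAA enabled, and why a scheme that samples more selectively can deliver the smoothing without the penalty.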