I'm going to suggest the GTX 570 over the 6970 for this one. If you are gaming on a single monitor at 1920x1200 or below, the 1.25GB GTX 570 should be able to play all games on high settings (bar games like Metro, though--just turn down the AA).
EDIT: lol, I was thinking about single-gpu setups. SLI GTX 570s should be able to play everything at max settings. For multi-screen setups, you can get the 2.5GB variant.
Most of us know that 6970 CF outperforms 570 SLI on raw framerates, but what many people don't know is that microstuttering is more evident in multi-gpu AMD setups. So you may get higher numbers with 6970 CF, but you will get a better visual experience with 570 SLI, as the pics below demonstrate:
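(To make "microstuttering" concrete: it shows up as uneven gaps between consecutive frames, even when the average fps looks fine. Here's a rough Python sketch of how you could quantify it from frame timestamps--the timestamps are made up, and the metric is just one simple way to measure it, not what any review site officially uses:)

```python
# Microstutter shows up as variance in frame-to-frame intervals,
# not in the average fps. Hypothetical timestamps in milliseconds.
frame_times_ms = [0.0, 16.1, 33.0, 51.2, 60.1, 83.0, 91.4, 115.9]

# Intervals between consecutive frames.
intervals = [b - a for a, b in zip(frame_times_ms, frame_times_ms[1:])]

avg = sum(intervals) / len(intervals)
avg_fps = 1000.0 / avg

# One simple stutter metric: average absolute deviation from the
# mean interval, as a percentage. Higher = more visible stutter.
stutter_pct = 100.0 * sum(abs(i - avg) for i in intervals) / (len(intervals) * avg)

print(f"avg fps: {avg_fps:.1f}, mean interval: {avg:.1f} ms, stutter: {stutter_pct:.0f}%")
```

Run that and you get roughly 60 fps average with about 28% stutter--a card could "win" on the fps number and still feel choppier.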
Gonna quote one of my previous posts for more clarification:
(Excellent read btw)
So, 580 SLI beats the 6990 both by the numbers and to the eye.
Also, just because you have an SLI/CF setup doesn't mean that microstuttering is inevitable. It depends on the gpu, drivers, the game played (major role), game settings, resolution, etc. I also read somewhere that if you set up a RAMDisk, you will see close to no microstuttering at all on nearly all graphics cards, multi-gpu or not. Like brettjv said, with CF setups you might get the high numbers, but you also get the erratic frame behavior that comes with them. That can hurt the gameplay experience, but, like I said before, there are a lot of variables that determine whether it actually happens.
There's a reason why SLI scaling isn't as good as CF's: Nvidia incorporates something called frame metering to make the gameplay experience smoother:
In fact, in a bit of a shocking revelation, Petersen told us Nvidia has "lots of hardware" in its GPUs aimed at trying to fix multi-GPU stuttering. The basic technology, known as frame metering, dynamically tracks the average interval between frames. Those frames that show up "early" are delayed slightly—in other words, the GPU doesn't flip to a new buffer immediately—in order to ensure a more even pace of frames presented for display. The lengths of those delays are adapted depending on the frame rate at any particular time. Petersen told us this frame-metering capability has been present in Nvidia's GPUs since at least the G80 generation, if not earlier. (He offered to find out exactly when it was added, but we haven't heard back yet.)
(came from the source above) I hope this clarifies some stuff. If I'm wrong about something, please say so, because I don't want to be telling people false information.
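Based purely on that description (not on anything Nvidia has published--the class name, the averaging window, and the logic below are my own guesses), frame metering would look something like this:

```python
import time

class FrameMeter:
    """Hypothetical sketch of frame metering as Petersen describes it:
    frames that finish 'early' relative to the recent average interval
    get delayed before the buffer flip. The window size and smoothing
    here are guesses, not Nvidia's actual algorithm."""

    def __init__(self, window=8):
        self.window = window    # how many recent intervals to average
        self.intervals = []     # recent frame-to-frame intervals (seconds)
        self.last_flip = None   # timestamp of the previous buffer flip

    def present(self, flip_buffer):
        now = time.perf_counter()
        if self.last_flip is not None and self.intervals:
            avg = sum(self.intervals) / len(self.intervals)
            early = avg - (now - self.last_flip)
            if early > 0:
                # Frame showed up early: hold the flip back so the
                # on-screen pacing stays close to the average interval.
                time.sleep(early)
                now = time.perf_counter()
        if self.last_flip is not None:
            self.intervals.append(now - self.last_flip)
            self.intervals = self.intervals[-self.window:]
        self.last_flip = now
        flip_buffer()  # hand the finished frame to the display
```

You'd call meter.present(swap_buffers) once per rendered frame. The point is that a frame arriving ahead of the recent average gets held back a bit, trading a sliver of raw fps for an even on-screen pace--which also lines up with why SLI "scaling" numbers look slightly worse.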
From the same thread quoted above:
AMD's awesome multi-gpu scaling is not a result of superior hardware, it's a result of driver programming that doesn't bother attempting to limit microstutter like the Forceware does.
Nowadays, you can't purchase a graphics card based on fps alone; you have to take into consideration the things many websites don't cover, which in this case is microstuttering. Now, I don't know whether AMD has resolved this problem with the new 7xxx series, but we will see whenever someone takes the time to do a microstutter review on them.