The shader clock had the least impact... you would expect the opposite. I never realized that GDDR3 was such a bottleneck here, because frame rates really took a plunge when the memory clock was underclocked. Interesting...
This article is another installment in the ongoing investigation of bottlenecking in video cards; the first installment was done on an 8800 Ultra.
To quickly recap the basic premise: nVidia hardware allows separate core, shader and memory clocks. Therefore, underclocking one of these values while leaving the rest at stock reveals which part of the GPU is the biggest bottleneck in any given situation.
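The method above can be sketched in a few lines: underclock one clock domain at a time, benchmark, and see which drop costs the most frame rate. This is a minimal illustration only; the clock values are the GTX 260+'s stock speeds from the article, but the FPS numbers are made up for the example.

```python
# Sketch of the bottleneck-hunting method: whichever domain's underclock
# costs the most FPS is the biggest bottleneck for that workload.
# Stock clocks (MHz) are from the article; the FPS figures are invented.

stock_clocks = {"core": 576, "shader": 1242, "memory": 999}

def biggest_bottleneck(fps_stock, fps_underclocked):
    """Return the clock domain whose underclock cost the most FPS."""
    losses = {dom: fps_stock - fps for dom, fps in fps_underclocked.items()}
    return max(losses, key=losses.get)

# Hypothetical benchmark results for one game:
print(biggest_bottleneck(60.0, {"core": 55.0, "shader": 58.5, "memory": 50.0}))
# -> memory
```

With those made-up numbers the memory underclock costs 10 fps versus 5 for core and 1.5 for shader, so memory would be flagged as the bottleneck, which mirrors the article's actual finding.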
There's a limit to how far the core and shader clocks can separate from each other, and in the case of the GTX 260+ it's quite a small drop of 1242 MHz to 1153 MHz, or about 7%. This is much smaller than the 8800 Ultra's possible drop of 19%, but it should still be possible to see some kind of difference.
I'll run the card at stock speed, then underclock each clock by 7% while leaving the rest at stock, to see which has the most impact on performance. In addition to using a different video card this time, I'll also use a different selection of games, with more modern characteristics overall than the last batch.
The stock clocks are as follows: core 576, shader 1242, memory 999.
The underclocked values, with their corresponding colors in the charts, are as follows: core 535, shader 1153, memory 927, each an underclock of approximately 7%.
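As a quick sanity check on the "approximately 7%" figure, the drop for each domain can be computed directly from the stock and underclocked values given above:

```python
# Verify the underclock percentages from the article's clock values (MHz).
stock = {"core": 576, "shader": 1242, "memory": 999}
under = {"core": 535, "shader": 1153, "memory": 927}

for dom in stock:
    drop = 100 * (1 - under[dom] / stock[dom])
    print(f"{dom}: {drop:.1f}% underclock")

# core: 7.1% underclock
# shader: 7.2% underclock
# memory: 7.2% underclock
```

All three land within a tenth of a point of each other, so the comparison between domains is close to apples-to-apples.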
Originally Posted by Ihatethedukes
It's all done at a 'low' resolution. To be expected, I think. More pixels require more shader power in a more direct manner than more AA does, and AA has already been established as a memory-heavy operation. Makes sense to me.
EDIT: Furthermore, it's done on four games that might just be more memory intensive than others. More are needed to get a real picture of what the card is limited by, beyond just a given graphics engine. It's obvious there that it varies between engines.
FURTHERMORE... it's done on a 3.0GHz dual core CPU... a little CPU limited if you ask me, especially with 0xAA. Hell, at 1920x1200 I've been CPU limited in a number of new games at 4xAA.
EDIT EDIT: Lastly, underclocking performance changes aren't necessarily equal to overclocking performance changes. It was a good investigative attempt, though. They REALLY need to address the CPU speed difference. Also, their differences are in the tenths place in some tests... hardly outside statistical significance, I think.
Originally Posted by mothergoose729
Some of these settings were at below-60fps frame rates, which clearly indicates a GPU-bottlenecked setting. Also, with AA enabled, I think shader power would kick it up a notch rather than slack off. Besides, higher resolutions are even more bandwidth inhibited, because the texture sizes are so huge and have to be transferred from memory. When and if the GTX 300 series comes out (assuming they are equipped with GDDR5), I expect to see a pretty significant increase in performance. The difference isn't huge, but significant I think.