Introduction

This article is another installment in the ongoing investigation of bottlenecking in video cards, the first installment having been done on an 8800 Ultra. To quickly recap the basic premise: nVidia hardware allows separate core, shader and memory clocks. Underclocking one of these values while leaving the rest at stock therefore reveals which part of the GPU is the biggest bottleneck in any given situation.

There's a limit to how far the core and shader clocks can separate from each other, and in the case of the GTX260+ it's quite a small drop, from 1242 MHz to 1153 MHz, or about 7%. This is much smaller than the 8800 Ultra's possible drop of 19%, but it should still be enough to reveal some differences. I'll run the card at stock speed, then underclock each clock by roughly 7% in turn while leaving the others at stock, to see which has the most impact on performance. In addition to using a different video card this time, I'll also use a different selection of games, with more modern characteristics overall than the last batch.

The stock clocks are as follows: core 576, shader 1242, memory 999. The underclocked values, with the corresponding colors, are as follows: core 535, shader 1153, memory 927, each an underclock of approximately 7%.
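As a quick sanity check, the underclock percentages can be verified from the MHz figures quoted above. This is just a sketch of the arithmetic, using the stock and underclocked values from the article:

```python
# Stock and underclocked frequencies (MHz) for the GTX260+, as quoted above.
clocks = {
    "core":   (576, 535),
    "shader": (1242, 1153),
    "memory": (999, 927),
}

for name, (stock, under) in clocks.items():
    # Percentage drop relative to the stock frequency.
    drop = (stock - under) / stock * 100
    print(f"{name}: {stock} -> {under} MHz ({drop:.1f}% underclock)")
```

All three drops land within a tenth of a percent of each other (roughly 7.1-7.2%), which is what makes the comparison between the three clocks fair.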