This is still a work in progress.
General
This has been a bit of a debate: how much does the CPU actually affect the scaling of our nice GPUs?
This is not about how SLI scales, but about how the CPU itself affects how the GPU runs.
Some (and I am among them) say that GPUs, especially in multi-GPU setups, are heavily affected by CPU speed.
Meaning that running your CPU at stock speed vs. a high overclock will affect your FPS in a pretty major way.
This is also the reason why some of us (myself included) frown when we see a benchmark test of a GPU where the CPU is barely overclocked, or even at stock.
So for all of us, here is a review and maybe another way to see whether we are right or wrong.
Method
General:
So, what I did was spend the time running several DX9, DX10 and DX11 benchmarks of games that I own, plus "artificial" benchmarks like 3DMark and Heaven.
At first I wanted to test with the 980X and later with a 920, but currently I don't have enough time, and each set of benchmarks takes almost half a day, since every test is run several times to get more accurate results.
So instead, I decided to use my everyday overclock of 4.3GHz as the baseline, and from there just reduce the multiplier by 2 each step.
This gave me CPU "levels" ranging from a decent overclock down to a very low down-clock, plus a no-HT test.
This way, I got 4.3GHz, 4GHz, 3.6GHz, 3GHz and 2.5GHz (the last one was also tested without HT).
Memory and Uncore speeds were left completely unchanged.
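Since each configuration gets several runs, something like this little script (my own sketch, not what was actually used for this review) can average the runs and flag an inconsistent set for a retest:

```python
import statistics

def summarize(runs):
    """Average several FPS results from repeated runs of one test and
    flag the set as inconsistent if the spread is too large."""
    mean = statistics.mean(runs)
    stdev = statistics.stdev(runs)
    # Flag for a retest if the spread exceeds 5% of the mean
    # (5% is my own arbitrary threshold, not the one used here).
    inconsistent = stdev > 0.05 * mean
    return round(mean, 1), inconsistent

print(summarize([61.2, 60.8, 61.5]))  # tight set -> keep the average
print(summarize([61.2, 52.3, 60.9]))  # wide spread -> retest later
```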
Here is the picture of the test bed:
About the GPUs:
I'm going to run all the benchmarks with the GPUs at a decent OC (920MHz core, 4400MHz memory).
The reason I'm doing this (and not going overboard with the OC, or staying at stock) is to make the GPUs strong enough to be starved for CPU time, especially in multi-GPU setups.
That way, if CPU speed is affecting the score, then the lower the CPU clock, the more the GPUs are held back and the lower the score will be.
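The reasoning above can be sketched as a toy frame-time model (entirely my own simplification with made-up numbers, not part of the actual testing): FPS is capped by whichever is slower per frame, the CPU feeding the GPUs or the GPUs rendering.

```python
def fps(cpu_ghz, n_gpus, cpu_ms_at_4ghz=8.0, gpu_ms=20.0):
    """Toy bottleneck model; all per-frame times are invented numbers.

    cpu_ms_at_4ghz: CPU work per frame at a 4GHz reference clock.
    gpu_ms:         render time per frame on a single GPU.
    """
    cpu_time = cpu_ms_at_4ghz * 4.0 / cpu_ghz  # CPU time scales with clock
    gpu_time = gpu_ms / n_gpus                 # assume ideal SLI scaling
    return 1000.0 / max(cpu_time, gpu_time)    # the slower side sets the FPS

# Single GPU: the GPU is the bottleneck, so the CPU clock barely matters.
print(fps(4.3, 1), fps(2.5, 1))
# Quad GPU: the CPU becomes the bottleneck, and the clock matters a lot.
print(fps(4.3, 4), fps(2.5, 4))
```

With these invented numbers, the single-GPU FPS is identical at 4.3GHz and 2.5GHz, while the quad-GPU FPS drops by over a third at the lower clock, which is exactly the pattern these tests are looking for.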
About resolution:
As my sig suggests, I own 3 monitors, but I'm not going to test anything in surround.
I'm going to stay as close as I can to the "common" gamer setup, which is either a 1080p or a 1200p monitor.
So I'm going to run all the benchmarks at 1920x1200, which is slightly above 1080p and will stress the GPUs enough to show the difference (if any).
About benchmarks settings:
I'm going to run textures and quality settings as high as I can without causing 1 FPS slideshows on a single GPU.
For AA, I'm not going to chase the highest setting; I'll most likely settle between 2x and 8x, depending on the game.
The reason is that I plan to run the games with a single GPU as well, and I don't want a slideshow when I benchmark a single card; I also want to keep the tests as similar as possible.
Also, I don't want to hit a spot where GPU VRAM is limiting the benchmark and producing false results.
But I'm also not going to run without AA and hit 200 FPS every time.
The benchmarks
So, what am I going to test?
I wanted several games, ranging from DX9 to DX11.
Sadly, I don't own that many games with built-in benchmarks, nor do I have the time to play each game and test with FRAPS and so on.
So I'm just using games with known benchmarks, plus some "artificial" benchmarks to test capability.
In the future I might add a few games if there is enough request, and if I can get the game for cheap.
But the games I did include cover several types of engines, so no worries there.
DX9 Games
Crysis 2:
- Adrenaline benchmark
- Extreme settings
- 4xAA
- Map: Default
Mafia 2:
- Built-in benchmark
- Highest settings
- AA on
- PhysX off
DX10 Games
Crysis Warhead:
- FB benchmark
- Enthusiast settings
- 4xAA
- Map: Ambush, time of day 10
DX11 Games
Dirt 2:
- Internal benchmark
- Ultra settings
- 8xMSAA
Metro 2033:
- Built-in benchmark
- Very High settings
- 4xMSAA
- PhysX off, Tessellation on, DoF off
Artificial benchmarks
3DMark 11: Performance run, GPU score only
3DMark Vantage: Performance run, GPU score only
Heaven 2.5: Tessellation Extreme, 2xAA, 16xAF
Results
Note again that this is a work-in-progress thread. Benchmarks take time to run, so I will update it as I go; some tests are missing due to inconsistent results, and I want to retest those later.
And now, after all this talk and explaining, here are the test results (*drum rolls*):
Conclusion
Well, as you can see, the results are pretty plain and simple.
The CPU does affect FPS. This is clear.
But...
For a single-card rig, CPU speed has almost no effect in the tests done here.
And of course, this is heavily dependent on the game engine.
In some games, like Metro 2033 or Crysis 2, which have quite new engines, the FPS isn't affected much by CPU speed even with dual-GPU.
Others are affected, but above 3.6GHz the difference is not very big.
As said, it's really game dependent. You can see that Crysis Warhead shows major differences at almost every CPU speed. Same with Mafia 2, though not as much once you pass 3.6GHz.
Tri-SLI and Quad-SLI show the biggest difference.
You can see up to a 20 or 30 FPS difference, which will greatly affect any kind of surround or large-monitor setup.
(Scaling between Tri and Quad is another matter. I plan a review on that later on.)
So overall, the difference is noticeable.
And this is the reason why, when doing a review, especially a multi-GPU one, it's essential to push the CPU as far as possible to make sure the GPUs are in no way starved for CPU time.
Even a 980X can easily starve the GPUs, let alone a 920 or 965 with a mild overclock.
I'm not sure about the current top SB CPUs like the 2600K and 2500K, as both can be OCed a lot higher even on air. But you will have to push them that far to keep them from starving your GPUs in any way.
For those running 2x dual-GPU cards, the CPU scaling effect is massive, both from the PCIe traffic involved and from the extra CPU load.
About HT off
Hyper-Threading presents each physical core as two logical cores, so the OS can run more threads at the same time.
But it has its drawbacks.
A heavy single thread runs better when the core doesn't have to share its resources between two threads.
This is the reason why, at 2.5GHz with HT off, we effectively get 6 strong cores instead of 12 weaker threads.
So the threads handling the drivers are stronger and push data to the GPUs faster than they would as weaker logical threads.
This result is fine.
And what it says is that on weak or stock CPUs, it's better to run with HT off, as games don't really benefit from HT.
I plan to add an HT-off run at 4.3GHz to compare 6 cores vs. 12 threads there as well.
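A toy way to see the trade-off (the 60% per-thread figure below is my own rough assumption, not a measured number):

```python
def effective_ghz(core_ghz, ht_on, core_busy=True):
    """Toy model: with HT on and a second busy thread on the same core,
    each logical thread gets roughly 60% of the core's throughput
    (an assumed figure); with HT off, a thread owns the whole core."""
    if ht_on and core_busy:
        return core_ghz * 0.6
    return core_ghz

single = effective_ghz(2.5, ht_on=False)  # heavy driver thread, HT off
shared = effective_ghz(2.5, ht_on=True)   # same thread sharing its core

print(single, shared)       # the heavy driver thread is faster with HT off
print(2 * shared > single)  # but total core throughput is higher with HT on
```

That is the whole story in miniature: HT raises total throughput for well-threaded workloads, but a game whose critical path is one or two heavy threads can come out ahead with HT off.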
I know I didn't run any tests at 2560x1600 or in surround, but since these tests were about the effect of CPU overclocking rather than GPU scaling, I don't plan to add any anytime soon.
I hope you enjoyed this little review.
If there are any errors, you are welcome to let me know.
If any tests remain, I will complete them in the next few days as I get more time on my hands.
Cheers.