Originally Posted by mimic58
Hmm... I see, so is this CPU quick enough to get the full potential out of the cards on an application that's written to use all 4 cores? Or will it still bottleneck before the cards' limits are hit?
Edit: There must be a way I can squeeze a bit more performance out of this thing, even if it's just a little.
The issue with 3dMark06 is not really so much that it only uses 2 cores, although it would help if it could use all 4, of course. Rather, it's that it's quite old, and the standard settings (1280x1024, no AA, 4xAF, medium graphics settings) were calibrated around the single-GPU setups of its day.
Keep in mind, when that bench came out, hardly anybody was running SLI, and it was tough to break a 4000 score with the high-end setups of the time.
Nowadays, a high-end GPU setup (like yours) can easily run it over 5x as fast.
So the actual issues here are that:
A) at such easy-to-run graphics settings, the FPS on this test is now very high relative to what the test was designed around, and
B) three of the four tests (not so much Canyon Flight, but the other three) are fairly 'CPU-dependent' to begin with. By 'CPU-dependency' I mean the relative amount of work the CPU has to do for the GPU for every frame rendered.
This means that by running 3dMark06 at 'stock' settings, you've created a 'testing scenario' where even the most modern CPUs can't keep up with the demands the GPUs make of them for data, because at these settings the test is just too easy for modern GPUs to run.
However, if you owned the full version of the test, and you went in and ran it at, say, 1920x1080, and cranked up every setting like AA/AF and whatnot, it would challenge your GPUs enough that the FPS would drop significantly (probably to something like 25% of what you see now). You'd then have created a different 'testing scenario', one in which your CPU would no longer have any trouble keeping up with the data requests being made by the GPUs.
Bottom line: when it comes to which part, CPU or GPU, will act as the limiting performance factor ... it's all about the specific test you're running. What is the inherent CPU-dependency of the test/bench/game, and what settings (resolution/AA/etc.) are you running it at?
In general, the *lower* the IQ (image quality) settings, the higher the FPS should be, because the GPU has less work to do per frame rendered, right? However, the higher the FPS, the more likely it becomes that the CPU will limit performance, because more work is being asked of it in the same amount of time.
Now, the point along that FPS continuum at which the CPU starts to limit performance is what describes the CPU-dependency of the test.
For example, on my system, if I test with Crysis, I start to see CPU bottlenecking at around 60fps. That's a pretty CPU-dependent game, because that's a pretty low FPS for CPU limitation to already be kicking in, especially given that my proc is pretty badass. However, if I run a different game, say FEAR, I don't see signs of a CPU bottleneck until my FPS gets up into the 200+ range. So I'd call that a much less CPU-dependent test.
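If it helps, here's a rough way to picture it: a frame can't finish until both the CPU's work and the GPU's work for that frame are done, so whichever side is slower sets your FPS. A toy sketch (the per-frame times below are made-up numbers for illustration, not measurements from my rig):

```python
# Toy bottleneck model: each frame takes as long as the slower of the CPU work
# and the GPU work, so the slower side caps the frame rate.
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# A 'CPU-dependent' title: ~16 ms of CPU work per frame means FPS tops out
# around 60 no matter how light you make the GPU's job.
print(fps(cpu_ms_per_frame=16.0, gpu_ms_per_frame=30.0))  # ~33 fps, GPU-limited
print(fps(cpu_ms_per_frame=16.0, gpu_ms_per_frame=5.0))   # ~62 fps, CPU-limited

# A title that's light on the CPU: only ~4 ms of CPU work per frame, so the
# GPU stays the limiter until the FPS climbs well past 200.
print(fps(cpu_ms_per_frame=4.0, gpu_ms_per_frame=5.0))    # ~200 fps, GPU-limited
print(fps(cpu_ms_per_frame=4.0, gpu_ms_per_frame=2.0))    # ~250 fps, CPU-limited
```

Cranking resolution/AA just pushes the GPU's per-frame number back up, which puts you back into GPU-limited territory; that's exactly the 1920x1080 scenario I described above.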
You follow what I'm saying here?