Originally Posted by gamervivek
It's not exactly fairly obvious.
Yes, it is. As the resolution increases, the 390X's deficit shrinks as more load is placed on the GPU while the same amount remains on the CPU. The "bottleneck" therefore shifts over to the GPU allowing the 390X machine to perform more closely to the 980.
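That shift is easy to see with a toy frame-time model (my own illustration, not taken from any of the benchmarks; all numbers are hypothetical): treat each frame as costing max(CPU time, GPU time), give the 390X system a higher fixed CPU cost per frame (driver overhead), and let only the GPU cost grow with resolution.

```python
def fps(cpu_ms, gpu_ms):
    # Simplified model: the slower of the two stages limits frame rate.
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical per-frame costs (ms). The "390X" pays more CPU time per
# frame (driver overhead); GPU time rises with resolution for both cards.
for res, gpu_980, gpu_390x in [("1080p", 8.0), ("1440p", 13.0), ("4K", 28.0)], None or [
        ("1080p", 8.0, 9.0), ("1440p", 13.0, 15.0), ("4K", 28.0, 32.0)]:
    pass  # placeholder removed below

for res, gpu_980, gpu_390x in [("1080p", 8.0, 9.0),
                               ("1440p", 13.0, 15.0),
                               ("4K", 28.0, 32.0)]:
    lead = fps(cpu_ms=7.0, gpu_ms=gpu_980) / fps(cpu_ms=12.0, gpu_ms=gpu_390x)
    print(res, round(lead, 2))  # the "980" lead shrinks as resolution rises
```

With these made-up numbers the faster card's lead is largest at 1080p (where the slower system is CPU-bound) and smallest at 4K (where both are GPU-bound), which is exactly the pattern in the benchmark data above.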
Actually it's the opposite.
No. If it were the opposite and the 980, not the 390X, were the one suffering from CPU overhead, then the 980 would show lower relative performance versus the 390X at 1920x1080 and 2560x1440 than at 4K, not the other way around. I don't know what you mean.
The 980 they use is 20% faster without gameworks, and 30% with it at 1080p.
Of course? Maxwell is faster at Gameworks/tessellation than GCN and takes less of a hit than GCN with it on... across all three resolutions.
That should lead one to believe that it's 980 that is suffering from CPU overhead with 'lowered' graphics settings.
No, it shouldn't, for all the reasons I stated above. The data doesn't lie: it's the 390X that shows the lower-resolution performance deficit; the change in relative positioning with Gameworks on occurs at all three resolutions (because the 980 takes less of a hit from tessellation than the 390X); and this data falls in line with other games' and benchmarks' data showing similar behavior, with Eurogamer's interpretation of the situation (970 vs. 390), and with PClab's interpretation of the situation (translated):
"As the results already showed, performance in the Boston test depends quite heavily on the processor, but only owners of relatively old and slow systems should worry. By far the worst off are AMD Radeon card users: significantly higher CPU load, probably generated by the graphics card driver, results in much lower efficiency. The performance of the graphics cards paired with a Core i7-4770K @ 4.5 GHz is as follows:
-GeForce GTX 980 Ti: 57.9 frames / sec.
-GeForce GTX 970: 49.8 frames / sec.
-Radeon R9 Fury X: 34.9 frames / sec.
-Radeon R9 390: 34.5 frames / sec."
http://pclab.pl/art66856-16.html