Originally Posted by gofasterstripes
Could it be that that is, perchance, a little out of date?
Studios have had a lot of time to get the hang of multicore scaling now :-)
"05/03/2015 at 21:53 Jeroen D Stout says:
As a developer working on a CPU-heavy game, I have to say that 4 cores are only arguably enough for a consumer anno 2015 because most developers are not trying to use more than 4, since that is what consumers anno 2015 are expected to have.
Yes, it is right to say that rendering is often a huge bottleneck. However, when it comes to high-quality AI, high-quality physics with super-high step rates (some significant resonances need hundreds of steps per second), volumetric effects, large areas, things like enormous numbers of agents with non-flock behaviour (like citizens as opposed to pedestrian concepts), arbitrary crunch calculations or even just heavy-duty ray-tracing… then 4 cores really are a bottleneck for future games.
Just to underline – yes, in a world where game development is mostly elaborately scripted things which need to run on outdated hardware with more GPU load than CPU, upgrading your PC to > 4 cores is probably a waste of money. I agree with this purely in the limited and depressing space of what the market is now. However, in my personal world in which games strive for interesting scenes of high computational complexity, this mindset is holding back the sort of games in which mass parallel computation could change the very boundaries of what we can do."
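The kind of workload Stout describes – thousands of independent agents stepped many times per second – is exactly what spreads across cores. A minimal sketch of the pattern (all names here are hypothetical, and Python threads won't give real CPU speedup because of the GIL – a real engine would use a native job system – but the chunk-per-worker structure is the same):

```python
from concurrent.futures import ThreadPoolExecutor

def update_agent(agent, dt):
    # Hypothetical per-agent step: integrate position from velocity.
    x, v = agent
    return (x + v * dt, v)

def update_agents_parallel(agents, dt, workers=4):
    # Split the agent list into one chunk per worker and update each
    # chunk concurrently; with more cores you simply raise `workers`.
    chunk = max(1, len(agents) // workers)
    chunks = [agents[i:i + chunk] for i in range(0, len(agents), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda c: [update_agent(a, dt) for a in c], chunks)
    # Flatten chunk results back into one agent list, order preserved.
    return [a for c in results for a in c]
```

The point of the sketch: per-agent updates don't touch shared state, so the work divides cleanly – which is why "enormous numbers of agents" is the poster child for >4 cores.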
- [An article, rather lacking in actual substance, that]
Here's some terrible coding [Top work, AMD :/ ]
"I think we found a root core of the performance issues, they seem to be related to the number of CPU cores. It is mind boggling to see that with the Radeon the 1 core setup remains to be the fastest. Here you will also notice that with a 2 core processor setup we have micro-stuttering going on whereas the 1, 4 and 8 core setup are fine. After seeing this it is without doubt that AMD will need to do a thing or two with driver optimizations. "
But better from NVIDIA:
Leading to a conclusion:
"Another metric we can look at is actual CPU usage as reported by the OS, as shown above. In this case CPU usage more or less perfectly matches our earlier expectations: with DirectX 11 both the GTX 980 and R9 290X show very uneven usage with 1-2 cores doing the bulk of the work, whereas with DirectX 12 CPU usage is spread out evenly over all 4 CPU cores.
At the risk of speaking to the point of redundancy, what we're seeing here is exactly why Mantle, DirectX 12, OpenGL Next, and other low-level APIs have been created. With single-threaded performance struggling to increase while GPUs continue to improve by leaps and bounds with each generation, something must be done to allow games to better spread out their rendering & submission workloads over multiple cores. The solution to that problem is to eliminate the abstraction and let the developers do it themselves through APIs like DirectX 12."
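The even 4-core usage in that quote comes from multithreaded command recording: under DX12/Vulkan each thread records its own command list, and only the final submission is serialized (in D3D12 terms, `ExecuteCommandLists` on a queue). A hypothetical sketch of just that structure, with strings standing in for recorded GPU commands:

```python
from concurrent.futures import ThreadPoolExecutor

def record_command_list(draws):
    # Each worker records its own command list independently --
    # no shared driver lock, which is the DX12-style win over DX11.
    return [f"draw:{d}" for d in draws]

def submit_frame(all_draws, workers=4):
    # Split the frame's draws into one batch per worker thread.
    chunk = max(1, len(all_draws) // workers)
    batches = [all_draws[i:i + chunk] for i in range(0, len(all_draws), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        lists = list(pool.map(record_command_list, batches))
    # Single, cheap submission point: concatenate the per-thread
    # lists in order, analogous to ExecuteCommandLists.
    return [cmd for cl in lists for cmd in cl]
```

Under DX11 the equivalent of `record_command_list` mostly funnels through one driver thread, which is why 1-2 cores did "the bulk of the work" in the DX11 results above.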
And this is why I am hopeful