Originally Posted by Ch13f121
That's not a fair comparison.
The reason is that the PS4 uses slightly modified 'off-the-shelf' hardware and technology from AMD, and AMD has had years to perfect the tech it uses in the PS4. AMD has been making GPU/CPU hybrids for a few years now, leading up to the tech in Jaguar and the FM2-socket chips, and beyond.
The PS3 used a brand new system architecture, co-developed by IBM and Sony. The PS3 was literally the first system to use Cell practically, outside of testing.
Kinda reminds me of how I programmed a 512-bit virtual processor, with a coding format that allowed easy conversion into transistors by an automatic board-design program. (Aka no unnecessary indirection, no silly loops, and long code that described the hardware properly.) Any HW designer with a brain must be able to correctly write stuff that has never been made before and that he has no experience with.
The problem with Cell was that they thought games are only about graphics, and part of Cell was basically a highly programmable GFX card. The trouble is that games use a lot of CPU power, and Cell simply didn't deliver. Basically Cell looks like ARM: energy efficient, but it has its quirks. Both Intel and AMD were much better.
Originally Posted by Bit_reaper
Not exactly. It's about both the GPU and CPU having direct access to the same memory, and the ability to run compute on the GPU. As the memory is shared and the CPU/GPU have a fast interconnect, it's possible to run all sorts of extra functions on the GPU. What this allows is for results to flow back to the CPU cheaply.
Well, one of the advantages of current GFX cards is that they are asynchronous, which means a GPU write will not stop the CPU from accessing its RAM. When both CPU and GPU access the same RAM as a shared object, there are race conditions.
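To make the race-condition point concrete, here is a minimal sketch in plain Python (threads standing in for the CPU and GPU, a list standing in for the shared RAM; all names are made up for illustration). A writer updates a two-field record while a reader checks that both fields agree; without the lock, the reader can observe a half-written ("torn") update, which is exactly the kind of bug shared CPU/GPU memory invites.

```python
import threading

# Toy stand-in for CPU/GPU sharing one buffer: a writer updates a
# two-field record, a reader checks that both fields match. The lock
# makes each update atomic with respect to the reader; remove it and
# torn (inconsistent) reads become possible.

state = [0, 0]           # shared "RAM": two fields that must stay equal
lock = threading.Lock()  # the synchronization a shared-memory design needs

def writer(n):
    for i in range(n):
        with lock:       # without this, the reader can see state mid-update
            state[0] = i
            state[1] = i

def reader(n, bad):
    for _ in range(n):
        with lock:
            a, b = state[0], state[1]
        if a != b:
            bad.append((a, b))

bad = []
t1 = threading.Thread(target=writer, args=(100_000,))
t2 = threading.Thread(target=reader, args=(100_000, bad))
t1.start(); t2.start(); t1.join(); t2.join()
print(len(bad))  # 0: with the lock held, no torn updates are observed
```

The cost of that lock is the point: every access pays for synchronization, which an asynchronous GPU with its own memory never has to.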
Well, let's say you wanted realistically simulated shrapnel damage in a game. Calculating the physics for hundreds of shrapnel pieces would require lots of CPU power, and since getting results back from the GPU takes too much time, you can't reliably offload it to the GPU. Conclusion: realistically simulated shrapnel damage is not possible on current gaming PCs.
But due to the way the PS4 is made, it might be possible to make it work. So we might actually see a big push in GPU-accelerated physics with the next-gen titles.
You can use the CPU for shrapnel simulation. In fact, the game 7.62 (sequel to Brigade E5) did exactly that. The problem is when you want numerically accurate data, aka data where a cube would still be a cube even after iteration no. 30: you need doubles, and doubles perform poorly on GFX cards.
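The single-vs-double precision drift is easy to demonstrate. The sketch below (my own toy example, not from any engine) spins a point around the origin many times, once in full double precision and once while rounding every intermediate result to IEEE-754 float32, as a single-precision GPU pipeline would; the float32 version drifts away from its starting point far faster.

```python
import math
import struct

def to_f32(x):
    # Round a Python float (double) to IEEE-754 single precision
    # by packing and unpacking it as a 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

theta = 2 * math.pi / 1000          # one full circle in 1000 steps
c, s = math.cos(theta), math.sin(theta)

def spin(x, y, steps, f32):
    # Rotate (x, y) around the origin 'steps' times, optionally
    # rounding every intermediate to float32.
    for _ in range(steps):
        x, y = c * x - s * y, s * x + c * y
        if f32:
            x, y = to_f32(x), to_f32(y)
    return x, y

# 100 full revolutions: the point should come back to (1, 0)
x64, y64 = spin(1.0, 0.0, 100_000, f32=False)
x32, y32 = spin(1.0, 0.0, 100_000, f32=True)
err64 = math.hypot(x64 - 1.0, y64)
err32 = math.hypot(x32 - 1.0, y32)
print(err32 > 10 * err64)  # True: single precision drifts much more
```

That accumulated drift is why a "cube" iterated in float32 slowly stops being a cube, while doubles keep the error negligible for far longer.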
The modelview space and simplified object spaces are often kept separate, because modelview needs only local data in high detail, while the rest of the game mostly needs the whole world, which means there must be some highly efficient map that does not require several GB of RAM. (Problems begin when a correctly made data map needs 3+ GB of RAM.) Basically the GPU can stay in its own space, and the possible delay simply doesn't happen because of caching ahead, while the CPU solves its own stuff as well.
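A quick back-of-envelope shows why the "whole world in RAM" map blows up and why the split helps. All the numbers below are my own illustrative assumptions (world size, cell size, bytes per cell), not from any real engine.

```python
# Naive uniform world map: 8 km x 8 km sampled at 1 m, with 48 bytes
# of simulation state per cell (material, height, nav flags...).
side_m = 8_000          # world edge length in metres (assumption)
cell_m = 1              # uniform high-detail resolution (assumption)
bytes_per_cell = 48     # per-cell state size (assumption)

cells = (side_m // cell_m) ** 2
naive_gb = cells * bytes_per_cell / 2**30
print(round(naive_gb, 2))  # 2.86 -- nearly 3 GB for the flat map alone

# Split design: a coarse 8 m global map plus full 1 m detail only in
# a 512 m x 512 m window around the player (the "modelview" space).
coarse_gb = (side_m // 8) ** 2 * bytes_per_cell / 2**30
local_gb = (512 // cell_m) ** 2 * bytes_per_cell / 2**30
print(round(coarse_gb + local_gb, 3))  # 0.056 -- roughly 50x smaller
```

The same total information, but the detailed representation only ever covers what the simulation is actually looking at.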
Not that the majority of game developers would be able to write something that complicated, because the new people coming out of universities can at most write in Lua, instead of designing and writing their own algorithms. A lot of developers in the industry who would otherwise be able to invent their own stuff lost their skills because of excessive use of third-party libraries and scripting. Basically the industry matured, and developer abilities went AWOL.
They have it wrong. That 20 GB/s is for bypassing the caches; it's not about the CPU and GPU talking to each other directly. The data still moves through RAM; the bus just guarantees it gets into RAM with low latency and without creating different versions of the data in the CPU and GPU caches, which should reduce the number of synchronization problems. It also introduces random-looking, hard-to-debug, rare synchronization problems that would happen for different reasons.
But hey, everyone likes PS4 freezing after a few hours of use.