Originally Posted by Particle
Originally Posted by KyadCK
The PS4 also uses an APU...
Again, the latency of talking over PCI-e is bad. I don't see what's hard to understand about this.
Also, the CPU has its own RAM, and the GPU has its own RAM. HUMA stands for "Heterogeneous Unified Memory Architecture". What part of a CPU using a GPU's RAM sounds like it's unified?
The "unified" part refers to the system's address space, which would still be a bonus for programming GPGPU apps with discrete cards. You appear to mean the "homogeneous" part, which is a valid complaint, but memory managers could certainly be made smart enough to allocate space in a way that makes sense (i.e., when a CPU thread allocates a chunk of memory, the allocator places it in system memory if space is available and only bleeds over into GPU memory if not). We already see this now with other UMA-themed memory technologies like NUMA as used in multi-socket AMD systems.
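The "prefer local, bleed over if full" policy described above can be sketched in a few lines. This is a toy model, not any real allocator API; the pool names, capacities, and allocation sizes are made up for illustration:

```python
# Hypothetical sketch of a "prefer local, spill over" allocation policy,
# similar in spirit to NUMA-aware first-touch placement. All names and
# sizes here are invented for illustration.

class MemoryPool:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity = capacity_gb
        self.used = 0

    def try_alloc(self, size_gb):
        # Succeed only if this pool still has room for the request.
        if self.used + size_gb <= self.capacity:
            self.used += size_gb
            return True
        return False

def alloc_preferring(preferred, fallback, size_gb):
    # Place the allocation in the preferred pool (e.g. system RAM for a
    # CPU thread) and bleed over into the fallback pool (e.g. VRAM)
    # only when the preferred pool is exhausted.
    if preferred.try_alloc(size_gb):
        return preferred.name
    if fallback.try_alloc(size_gb):
        return fallback.name
    raise MemoryError("both pools exhausted")

system_ram = MemoryPool("system", 8)  # toy capacities in GB
vram = MemoryPool("vram", 4)

placements = [alloc_preferring(system_ram, vram, 3) for _ in range(3)]
print(placements)  # -> ['system', 'system', 'vram']
```

The first two 3 GB requests land in the 8 GB system pool; the third no longer fits there, so it spills into VRAM rather than failing outright.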
Right, so how do you plan to solve the PCI-e latency issue that you completely ignored?
Oh, and PCI-e can just barely provide enough bandwidth to even tie with RAM, let alone be faster. GDDR5's bandwidth does not apply over the 8- or 16-lane PCI-e link that is the GPU's connection to the rest of the system.
Not to mention the overhead that has to be accounted for, the extra hops it takes to get there (VRAM -> GPU -> NB -> HT -> CPU/NB -> Cache -> CPU vs. RAM -> IMC -> Cache -> CPU), the speed of HyperTransport, which cuts the bandwidth down even more, the fact that going over HT means waiting in line with everything else that needs to be talked to, and so on.
There is less VRAM than there is system RAM, even today, and GDDR5 already has worse latency even without having to jump through hoops to get there. GDDR5 VRAM's speed is completely negated by the protocols needed to get it where it needs to go. If you need more speed, VRAM hurts.
Oh, and I was wrong, it's "heterogeneous Uniform Memory Access". One memory source for all things, not just a unified address space. We've had that for years now. Let's take a look: SOURCE
HUMA: Combined memory for the CPU and GPU.
HSA: Programming to make using the GPU half easier.
So, who still thinks trying to get the CPU to use a GPU's VRAM is a good idea, and can actually back that up with facts that wouldn't make it a worse alternative to just using system RAM?