Originally Posted by lacrossewacker
What I don't understand is how unified memory is going to change much of anything. If you have a 680 but want more frames, you purchase a Titan.
If you have an i7 and want more speed, you OC it.
Why would hUMA change this? What good would combining the memory of the Titan and CPU do? The CPUs and GPUs themselves have been the issue, not the latency between memory pools. Right?
The idea is to use the GPU as a co-processor to the CPU.
Tasks that are conventionally handled on the CPU can sometimes be accelerated far better by a massively parallel architecture like the ones found in current GPUs, and unified memory simply makes that easier. When both processors can access the same physical memory space, this CPU/GPU collaboration becomes practical, and the latency of that memory access drops because nothing has to be copied between separate pools. Whichever processor is better suited to a task (some work spreads well across parallel GPU cores, some runs best on large x86 cores) is the one that handles the calculation.

The GPU in the current top-end APU (the 8670D in the 6800K) offers a maximum theoretical computing throughput of up to 650 GFLOPS (as claimed by AMD). That is a rather massive amount of computing power that is currently only tapped into by 3D applications.
What AMD is trying to do with HSA and hUMA is make this GPU/CPU utilization effortless for developers. Ultimately, rather than having to code natively in OpenCL, programmers can just write high-level code and let the HSA-aware compilers handle the rest. This is where some applications could see XXX% speedups over the same code compiled and run conventionally on CPU cores alone.

Edited by Slappa - 8/27/13 at 12:03pm