Originally Posted by EniGma1987
Originally Posted by sdlvx
Sort of, that is what I am getting at regarding Infiniband. PCIe 3.0 at under 16GB/s is not nearly enough bandwidth for HSA over dGPU.
Infiniband can offer a lot more bandwidth, enough to make something like that possible.
Hang on now, I think you might have gotten ahead of yourself. Infiniband is a system interconnect and is rated in Gb, or gigabits; PCI-E is an internal interconnect and is rated in GB, or gigabytes. Even if you used a massive 12x Infiniband connection on the highest-end system possible, you would only get a theoretical max throughput of 37.5 GB/s before overhead is accounted for. And that type of interconnect would cost a small fortune for one link, let alone several, and the controllers for that many Infiniband channels would take up a lot of board space. More often you will see a midrange solution that is actually affordable, and it will run at about 40-50 gigabit, which is only about 6.25 GB/s at theoretical best.
Which is why I said it would be a ways off. I know there are 8 bits to a byte, so your calculations are right. But regardless, even current Infiniband bandwidth is much faster than PCIe 3.0. AMD would still need to devise a better controller, though, and they would still need more bandwidth.
But just an FYI, AMD is not the only one looking to do things like this. Intel is leaving PCIe 3.0 behind for QPI direct to Knights Landing to share memory: http://www.realworldtech.com/knights-landing-details/2/
2133MHz DDR3 is about 17 GB/s per channel. Dual channel alone would already bury PCIe 3.0. Current Infiniband would at least be able to cope with it.
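To sanity-check those figures, here is a quick back-of-the-envelope sketch. The numbers are assumptions based on the published theoretical peaks (PCIe 3.0 at 8 GT/s per lane with 128b/130b encoding, DDR3-2133 with a 64-bit channel, Infiniband EDR at ~25 Gb/s per lane), not measurements on any real system:

```python
# Theoretical peak bandwidths, no protocol overhead accounted for.
# All figures are spec-sheet assumptions, not benchmarks.

def gbit_to_gbyte(gbit):
    """Convert gigabits/s to gigabytes/s (8 bits per byte)."""
    return gbit / 8

# PCIe 3.0 x16: 8 GT/s per lane, 128b/130b encoding
pcie3_x16 = 16 * 8 * (128 / 130) / 8        # ~15.75 GB/s

# DDR3-2133: 2133 MT/s * 8 bytes (64-bit channel)
ddr3_single = 2133e6 * 8 / 1e9              # ~17.06 GB/s
ddr3_dual = 2 * ddr3_single                 # ~34.1 GB/s

# Infiniband EDR 12x: 12 lanes * ~25 Gb/s = 300 Gb/s
ib_edr_12x = gbit_to_gbyte(12 * 25)         # 37.5 GB/s

# A midrange ~50 Gb/s Infiniband link
ib_mid = gbit_to_gbyte(50)                  # 6.25 GB/s

print(f"PCIe 3.0 x16:        {pcie3_x16:.2f} GB/s")
print(f"DDR3-2133 dual ch.:  {ddr3_dual:.2f} GB/s")
print(f"Infiniband EDR 12x:  {ib_edr_12x:.2f} GB/s")
print(f"Infiniband ~50 Gb:   {ib_mid:.2f} GB/s")
```

Which lines up with the point above: dual-channel DDR3-2133 roughly doubles what a PCIe 3.0 x16 slot can move, while a high-end Infiniband link could keep pace with it.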
Which of course is why I was mentioning Hypertransport over Infiniband, or Infiniband over Hypertransport, to Seronx earlier: AMD would not be using Infiniband directly, but some sort of alternate version of it.
But the company that comes up with a working implementation where the main processing unit and additional add-in boards all share memory and work together well will win several HPC contracts and will more than likely establish itself as the workstation platform.
The goals of HSA and Mantle (contrary to what everyone has been saying, Mantle is a complete godsend for people who do 3D work, from running GI calculations to improving 3D viewport performance) seem to align very well with what that type of platform would be useful for.
To me it seems almost assured that something like this will eventually come out of AMD. There is no way they could use sub-$170 APUs as their high-end product for the rest of the company's life. There is a ton of money to be made in the professional and HPC market, and more often than not vendors like Nvidia and AMD find themselves adding an extra zero to the cost of a product just because it's a FirePro/Opteron/Quadro/Tesla/etc.
I simply don't think AMD would be pushing HSA and Mantle just so gamers get higher frame rates and your budget APU, at less than half the speed of a high-end GPU, can go twice as fast when decoding JPEGs.
That just seems like a lot of wasted potential and a lot of wasted profits, doesn't it? If I were the CEO, I'd be telling those engineers they had better come up with something to get HSA and Mantle into the lucrative professional market as soon as possible. One large university or for-profit organization that needs massive compute performance ordering 10,000+ AMD products at professional rates would be quite massive compared to AMD trying to get OEMs to use their products in laptops and phones.