Originally Posted by skupples
Denver + stacked = biggest breakthrough since the DGPU was invented.
If workloads are designed to be bandwidth-bound rather than latency-bound, the chips could simply be underclocked slightly to keep them cool.
Remember the days of the single-core Pentium 4 and its 3.8 GHz CPUs. Most multi-core CPUs being sold today don't even clock that high.
So in that sense, stacking can speed some things up (bandwidth) while the clocks are dialed back a little to keep things cool.
That may add slight latency if you slow the clocks a bit. Latency is extremely critical in some workloads (e.g. shader processing, GPGPU processing), and raising latencies has a cost. On the other hand, workloads that need simple raw throughput (e.g. high-res textures on 4K displays!) will hugely benefit from this!
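A quick back-of-envelope model makes the trade-off concrete. All the numbers below (bandwidth, latency, and the stacking gains) are illustrative assumptions picked for the sketch, not measurements of any real part:

```python
# Back-of-envelope model of the bandwidth-vs-latency trade-off.
# All numbers are illustrative assumptions, not measurements.

def bandwidth_bound_time(bytes_moved, bandwidth_gbps):
    """Streaming work (e.g. texture fetches): time scales with 1/bandwidth."""
    return bytes_moved / (bandwidth_gbps * 1e9)  # seconds

def latency_bound_time(dependent_accesses, latency_ns):
    """Serially dependent work (pointer chasing): time scales with latency."""
    return dependent_accesses * latency_ns * 1e-9  # seconds

# Baseline: conventional DRAM (assumed 300 GB/s, 100 ns)
base_bw, base_lat = 300.0, 100.0

# Stacked DRAM: assume 3x the bandwidth, but a slightly slower
# clock raises effective latency ~10%
stacked_bw, stacked_lat = 900.0, 110.0

stream = bandwidth_bound_time(8e9, base_bw) / bandwidth_bound_time(8e9, stacked_bw)
chase = latency_bound_time(1e6, base_lat) / latency_bound_time(1e6, stacked_lat)

print(f"bandwidth-bound speedup: {stream:.2f}x")  # 3.00x
print(f"latency-bound speedup:   {chase:.2f}x")   # 0.91x (a slowdown)
```

Under these assumed numbers, streaming work gets the full 3x from the extra bandwidth while serially dependent work slows by roughly the same fraction the clock dropped, which is the asymmetry the post is arguing for.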
I would bet that the latency issues can gradually be optimized over time, in the same sense that lower-MHz CPUs today execute instructions, even in a single thread, using far less power and more efficiently per clock than the 3.8 GHz Pentium 4s of ten years ago. Although optimizations in the memory arena follow a much slower "Moore's Law" equivalent than those for CPUs/GPUs.

Edited by mdrejhon - 3/26/14 at 9:13am