Originally Posted by ILoveHighDPI
Actually, Tensor cores take up about 2/5ths of each Turing SM: https://www.anandtech.com/show/13282...re-deep-dive/4
You can see their oversimplified graphic of die space allocation at the bottom of this article: https://www.pcworld.com/article/3305...x-2080-ti.html
So they're not consuming exactly 50% of the space that could have gone to CUDA cores, just 40%.
So that's 40% "wasted" die space instead of 50%.
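A back-of-the-envelope sketch of the arithmetic above, assuming (as the post does) that the RT/Tensor hardware uniformly takes 2/5ths of every SM and that reclaimed area could be converted 1:1 into CUDA core area; real die layouts are far less tidy than this:

```python
# Sketch of the die-space claim: RT + Tensor hardware assumed to
# occupy 2/5ths of each Turing SM (per the AnandTech die analysis
# cited above). Purely illustrative, not a real area measurement.

sm_fraction_rtx = 2 / 5           # share of each SM on RT + Tensor cores
cuda_fraction = 1 - sm_fraction_rtx  # share left for CUDA cores

# Hypothetically, if that area were reclaimed for CUDA cores,
# shader area per SM would grow by 1 / (1 - 2/5) ≈ 1.67x.
potential_uplift = 1 / cuda_fraction

print(f"Area on RT/Tensor per SM: {sm_fraction_rtx:.0%}")
print(f"Hypothetical CUDA area gain if reclaimed: {potential_uplift:.2f}x")
```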
Originally Posted by tpi2007
Here's the thing: it might very well be that on 12nm you couldn't fit all those CUDA cores into a reasonable TDP to begin with. Nvidia most probably gets away with a 250W TDP for the 2080 Ti (260W for the FE) because when you're not using RTX features a big portion of the card is idle, and when you are, the CUDA cores are bottlenecked by the RTX hardware (RT + Tensor cores), so it was a smart way for them to manage things.
Interesting take, so Tensor cores are a way to reduce both TDP and performance.
Last edited by ryan92084; 02-20-2019 at 04:54 AM.