Originally Posted by guttheslayer
This is what I kinda anticipated at the start once we realised RT cores are just occupying too much die space. If we want RT and rasterisation to both progress at a meaningful pace, MCM is the only way to go.
Getting ready for MCM GPUs is probably why Nvidia just added checkerboard frame rendering for multi-GPU and got it working on all DirectX versions. They most likely plan on using such a rendering technique to split the frame up, similar to how they do now, and distribute the pieces across all the GPU die resources of an MCM design.
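The basic idea of checkerboard splitting can be sketched roughly like this (a toy illustration only; the function name, tile size, and assignment rule are my assumptions, not Nvidia's actual implementation):

```python
# Hypothetical sketch of checkerboard tile assignment across multiple GPU dies.
# All names and numbers here are made up for illustration.

def assign_tiles(width, height, tile=64, num_gpus=2):
    """Map each screen tile to a GPU die in a checkerboard pattern."""
    assignment = {}
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            # Alternate ownership so neighbouring tiles land on different dies,
            # spreading the rendering load evenly across the package.
            gpu = ((tx // tile) + (ty // tile)) % num_gpus
            assignment[(tx, ty)] = gpu
    return assignment

tiles = assign_tiles(256, 128, tile=64, num_gpus=2)
```

Each die then renders only its own tiles, and the results get composited into one frame, which is why this maps nicely onto an MCM layout.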
Originally Posted by skupples
If I remember correctly, NV added a company dedicated to interconnects to their portfolio not that long ago, so it's coming sooner or later.
this deal - https://nvidianews.nvidia.com/news/n...or-6-9-billion
That stuff is for external interconnects and has nothing to do with MCM. It is a necessary part of supercomputers, and Nvidia had to buy the company because 1) they use Mellanox InfiniBand for connecting Nvidia GPUs in supercomputers, and 2) if Intel had managed to buy them instead, Intel was planning on discontinuing all InfiniBand products, leaving no competitor to their own Omni-Path.
Originally Posted by Raghar
You can burn off damaged parts with a laser and design the chip to retain full functionality after the laser removes the more damaged pathways. With proper automation and volume, the additional step is fairly cheap.
When a chip is designed to allow slicing into multiple smaller functional chips, even one massive defect in the upper-right corner still allows for one mid-range chip and one low-end chip. Or three low-end chips.
There are multiple ways to reduce the yield problems of large chips. Some of them are simple. For example: assume the 2080 Ti has a yield of 5/100, while sales figures show that only <3/100 of sold graphics cards use a 2080 or 2080 Ti. Cutting the 95/100 dies that didn't make it into smaller dies would allow both getting enough 2080 and 2080 Ti cards AND meeting demand for cheaper cards.
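The arithmetic in that example works out like this (the 5/100 and <3/100 figures come from the post above; the number of smaller dies salvaged per defective big die is purely my assumption):

```python
# Illustrative yield arithmetic only; the salvage ratio is a made-up assumption.

WAFER_DIES = 100
big_die_yield = 5     # fully working 2080 Ti-class dies per 100 (from the post)
demand_big = 3        # <3/100 of sold cards are 2080/2080 Ti class

salvageable = WAFER_DIES - big_die_yield   # 95 defective big dies
small_per_big = 2     # assume each defective die cuts into ~2 usable smaller dies
small_dies = salvageable * small_per_big

print(big_die_yield >= demand_big)   # enough flagship dies to meet demand
print(small_dies)                    # cheaper dies recovered from the same wafer
```

Under those assumptions the same wafer covers flagship demand and yields 190 smaller dies on top, which is the whole appeal of the cut-down approach.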
Yields are typically decent enough for Nvidia to sell $1,200 cards without needing any such shenanigans. (Then again, Jensen's leather coats are expensive, and Nvidia likes its profit.)
Can't do that when the big chip doesn't share a die with any smaller chips and there are no GDDR memory controllers on it.
Also, on your 2080 Ti example, you can't do that as efficiently as you think; it would really just be a huge wasted die, since you cannot cut the front or back ends in half and get two working chips. You can only disable portions of the cores, cache, memory controllers, or a group of render pipelines to make smaller chips. There is no way to cut things off a bigger GPU chip and make two functional chips out of it.