[WCCF] NVidia next generation GPU codename HOPPER. - Page 6 - Overclock.net - An Overclocking Community
post #51 of 54 (permalink) Old 11-25-2019, 07:53 AM
skupples (Join Date: Apr 2012; Location: Fort Lauderdale; Posts: 21,831; Rep: 630)
^^ is why the only thing I collect is old PC games. Some day the apocalypse will come, the net will turn off, and we'll be left with the GOATs like Half-Life and the Black & White series.
Now to just build a Win 98 gaming computer.

R.I.P. Zawarudo, may you OC angels' wings in heaven.
If something appears too good to be true, it probably is.
post #52 of 54 (permalink) Old 11-25-2019, 11:34 AM
EniGma1987 (Join Date: Sep 2011; Posts: 6,400; Rep: 342)
Quote: Originally Posted by guttheslayer View Post
https://wccftech.com/nvidia-hopper-gpu-mcm-leaked/
This is what I kind of anticipated from the start, once we realised RT cores just occupy too much die space. If we want RT and rasterisation to both progress at a meaningful pace, MCM is the only way to go.
Getting ready for MCM GPUs is probably why Nvidia just added checkerboard frame rendering for multi-GPU and got it working on all DirectX versions. They most likely plan on using that rendering technique to split the frame up, similar to how they do now, and distribute the pieces across all the GPU die resources, which can take advantage of an MCM design.
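As a rough sketch of the idea (the tile size, die count, and round-robin tile assignment below are hypothetical, just to illustrate how a checkerboard split spreads work evenly across dies):

```python
from collections import Counter

# Sketch: checkerboard (split-frame) work distribution across GPU dies.
# Tile size and die count are made-up illustrative values.

def checkerboard_assign(width, height, tile, num_dies):
    """Assign each screen tile to a die in a checkerboard pattern."""
    assignment = {}
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            # Alternate dies along both axes, like squares on a chessboard
            die = ((tx // tile) + (ty // tile)) % num_dies
            assignment[(tx, ty)] = die
    return assignment

tiles = checkerboard_assign(3840, 2160, tile=256, num_dies=4)
counts = Counter(tiles.values())
print(counts)  # each die ends up with nearly the same number of tiles
```

The point of the checkerboard over big contiguous splits is exactly this balance: neighbouring tiles go to different dies, so a visually busy region of the frame never lands entirely on one chip.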
Quote: Originally Posted by skupples View Post
If I remember correctly, NV added a company dedicated to interconnects to their portfolio not that long ago, so it's coming sooner or later.
This deal - https://nvidianews.nvidia.com/news/n...or-6-9-billion
That stuff is for external interconnects and has nothing to do with MCM. It is a necessary part of supercomputers, and Nvidia had to buy the company because 1) they use Mellanox InfiniBand for connecting Nvidia GPUs in supercomputers, and 2) if Intel had managed to buy them instead, Intel was planning to discontinue all InfiniBand products so there would be no competitor to its own Omni-Path.
Quote: Originally Posted by Raghar View Post
You can burn off damaged parts with a laser, and design the chip to retain full functionality after the laser removes the damaged pathway. With proper automation and volume, the additional step is fairly cheap.

When a chip is designed to allow slicing into multiple smaller functional chips, even one massive defect in the upper-right corner still allows for one mid-range chip and one low-end chip, or three low-end chips.

There are multiple ways to reduce yield problems on large chips. Some of them are simple. Let's assume the 2080 Ti has a yield of 5/100, but sales figures show that fewer than 3/100 of graphics cards sold are a 2080 or 2080 Ti. Cutting the 95/100 that didn't make it into smaller dies would allow both getting enough 2080 and 2080 Ti cards AND meeting demand for cheaper cards.

Yields are typically decent enough for Nvidia to sell $1200 cards without needing such shenanigans. (Then again, Jensen's leather jackets are expensive, and Nvidia likes its profit.)
Can't do that when the big chip doesn't share a die with any smaller chips and there are no GDDR memory controllers on it.
Also, on your 2080 Ti example, you can't do that as efficiently as you think; it would really just be a huge wasted die, since you cannot cut the front or back ends in half and have two working chips. You can only cut out portions of the cores, cache, memory controllers, or a group of render pipelines to make a smaller chip. There is no way to cut things off a bigger GPU chip and make two functional chips out of it.
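The yield argument can be put in rough numbers with the classic Poisson zero-defect model. The defect density below is a made-up illustrative figure, not an actual fab number; 754 mm² is roughly the size of the TU102 die used in the 2080 Ti.

```python
import math

def die_yield(area_mm2, defects_per_mm2):
    """Fraction of dies with zero defects, assuming Poisson-distributed defects."""
    return math.exp(-area_mm2 * defects_per_mm2)

d = 0.002                      # defects per mm^2 -- assumed, for illustration only
big = die_yield(754, d)        # one large monolithic die (~TU102 size)
small = die_yield(754 / 4, d)  # a quarter-size chiplet

print(f"monolithic: {big:.1%}, quarter-size chiplet: {small:.1%}")
```

Four small dies beat one big die on the fraction of defect-free silicon, which is the whole pull toward MCM; the catch raised above is that you have to design for chiplets up front rather than carve up a finished monolithic die.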


Last edited by EniGma1987; 11-25-2019 at 11:44 AM.
post #53 of 54 (permalink) Old 11-26-2019, 07:24 PM - Thread Starter
guttheslayer (Join Date: Apr 2015; Posts: 3,806; Rep: 111)
Quote: Originally Posted by EniGma1987 View Post
Getting ready for MCM GPUs is probably why Nvidia just added checkerboard frame rendering for multi-GPU and got it working on all DirectX versions. They most likely plan on using that rendering technique to split the frame up, similar to how they do now, and distribute the pieces across all the GPU die resources, which can take advantage of an MCM design.
I once speculated that chessboard-style rendering (a form of split-frame rendering) is the best way to split the workload as evenly as possible across multiple GPUs (and to combine VRAM utilisation). Didn't know I'd turn out to be right.

post #54 of 54 (permalink) Old 11-26-2019, 07:36 PM
skupples (Join Date: Apr 2012; Location: Fort Lauderdale; Posts: 21,831; Rep: 630)
AMD used to use it for CrossFire, no?
