Overclock.net - An Overclocking Community
Thread: [WCCF] NVidia next generation GPU codename HOPPER
  Topic Review (Newest First)
11-26-2019 07:36 PM
skupples AMD used to use it for CrossFire, no?
11-26-2019 07:24 PM
guttheslayer
Quote: Originally Posted by EniGma1987 View Post
Getting ready for MCM GPUs is probably why Nvidia just added checkerboard frame rendering for multi-GPU and got it working on all DirectX versions. They most likely plan on using such a rendering technique to split the frame up similar to how they do now and distribute the pieces across all GPU die resources which can take advantage of an MCM design.
I once speculated that chessboard-style rendering (a form of split-frame rendering) is the best way to split the workload as evenly as possible across multiple GPUs (and to combine VRAM utilisation). Didn't know I'd turn out to be right.
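For anyone curious what the chessboard split actually looks like, here's a toy Python sketch (my own illustration of the idea, not how NVidia's driver actually implements it). The frame is cut into a grid of tiles and adjacent tiles go to different GPUs, so any localized hotspot on screen is shared roughly evenly:

```python
# Toy checkerboard (chessboard) tile assignment for multi-GPU rendering.
# Adjacent tiles alternate GPUs, so workload spreads evenly even when
# one region of the frame is much more expensive to render.

def assign_tiles(frame_w, frame_h, tile, num_gpus):
    """Map each tile's top-left corner to a GPU index in a checkerboard pattern."""
    assignment = {}
    for ty in range(0, frame_h, tile):
        for tx in range(0, frame_w, tile):
            gpu = ((tx // tile) + (ty // tile)) % num_gpus
            assignment[(tx, ty)] = gpu
    return assignment

tiles = assign_tiles(1920, 1080, 120, 2)  # 16x9 grid of 120px tiles
counts = [list(tiles.values()).count(g) for g in range(2)]
print(counts)  # each GPU gets exactly half the 144 tiles
```

The same pattern generalises to chiplets in an MCM part: each die gets its own interleaved set of tiles instead of a contiguous half of the screen.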
11-25-2019 11:34 AM
EniGma1987
Quote: Originally Posted by guttheslayer View Post
https://wccftech.com/nvidia-hopper-gpu-mcm-leaked/
This is what I kinda anticipated from the start, once we realised RT cores occupy too much die space. If we want ray tracing and rasterisation to both progress at a meaningful pace, MCM is the only way to go.
Getting ready for MCM GPUs is probably why Nvidia just added checkerboard frame rendering for multi-GPU and got it working on all DirectX versions. They most likely plan on using such a rendering technique to split the frame up similar to how they do now and distribute the pieces across all GPU die resources which can take advantage of an MCM design.

Quote: Originally Posted by skupples View Post
if I remember correctly, NV added a company dedicated to interconnects to their portfolio not that long ago, so it's coming sooner or later
this deal - https://nvidianews.nvidia.com/news/n...or-6-9-billion
That stuff is for external interconnects and has nothing to do with MCM. It is a necessary part of supercomputers, and Nvidia had to buy the company because 1) they use Mellanox InfiniBand for connecting Nvidia GPUs in supercomputers, and 2) if Intel had managed to buy them instead, Intel was planning on discontinuing all InfiniBand products so there would be no competitor to its own Omni-Path.

Quote: Originally Posted by Raghar View Post
You can burn away damaged parts with a laser, and design the chip so it retains full functionality after the laser removes the more damaged pathway. With proper automation and volume, the additional step is fairly cheap.

When a chip is designed to allow slicing into multiple smaller functional chips, even massive damage in the upper-right corner still allows for one mid-range chip and one low-end chip. Or three low-end chips.

There are multiple ways to reduce the yield problems of large chips. Some of them are simple. Let's assume the 2080 Ti has a yield of 5/100, while sales figures show that fewer than 3/100 of graphics cards sold are a 2080 or 2080 Ti. Cutting the 95/100 that didn't make it into smaller dies would allow both getting enough 2080 and 2080 Ti cards AND meeting demand for cheaper cards.

Yields are typically decent enough for NVidia to sell $1,200 cards without needing such shenanigans. (Then again, Jensen's leather jackets are expensive, and NVidia likes its profit.)
Can't do that when the big chip doesn't share a die with any smaller chips and there are no GDDR memory controllers on it.
Also, on your 2080 Ti example, you can't do that as efficiently as you think. It would really just be a huge wasted die: you cannot cut the front or back ends in half and get two working chips. You can only cut out portions of the cores, cache, memory controllers, or a group of render pipelines to make smaller chips. There is no way to cut pieces off a bigger GPU chip and make two functional chips out of it.
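To put some toy numbers behind the yield argument, here's a quick sketch using the simple Poisson defect model, Y = exp(-area x defect density). The defect density and die areas below are made-up illustrative figures, not actual TSMC or NVidia numbers:

```python
import math

# Poisson yield model: fraction of dies with zero defects.
# All numbers are hypothetical, for illustration only.

def yield_rate(die_area_cm2, defects_per_cm2):
    """Probability a die of the given area has no defects."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

D0 = 0.5          # hypothetical defects per cm^2
big_die = 7.5     # ~750 mm^2, a TU102-class monolithic die
chiplet = 1.9     # ~190 mm^2, a quarter-size chiplet

print(f"monolithic die yield: {yield_rate(big_die, D0):.1%}")
print(f"chiplet yield:        {yield_rate(chiplet, D0):.1%}")
```

Under this model the huge monolithic die yields only a few percent while the quarter-size chiplet yields well over a third, which is exactly why MCM gets attractive as dies approach the reticle limit.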
11-25-2019 07:53 AM
skupples ^^ is why the only things I collect are old PC games. Some day the apocalypse will come, the net will turn off, and we'll be left with the GOATs like Half-Life and the Black & White series.
Now to just build a Win 98 gaming computer.
11-25-2019 07:49 AM
Defoler
Quote: Originally Posted by DNMock View Post
As long as I have the option to install the game and run it traditionally still, I have zero issues with streaming game services.

My biggest concern would be the death of modding. There's just something magical about seeing a model or texture you personally designed in a game. I don't think that's the real end-game plan though, considering Epic, who seem to be all about pushing streaming, are giving away free access to UE4 and the Megascans library. That's a ton of money to invest in a community you intend to make go away in the near future.
My concern would be ownership.
You use a service; if you get banned, stop using that service, or the service has issues, you are locked out of what you bought on that service.
And with games no longer being single-ownership, if you move from one service to another you have to buy everything anew, so it will be much harder on the wallet to own a game.

I mean, I still own the original FF7 discs from back then. I can always pop them into my PC and play. If I had it via Steam and for any reason Steam didn't work or my account got suspended, it would be gone.
11-24-2019 11:25 PM
guttheslayer
Quote: Originally Posted by blodflekk View Post
WCCF? Meh. I'll wait for something real from someone reliable. Besides I sunk way too much money into my 2080Ti after my 1080 died, no way will I be upgrading for a while.
There won't be any impact if it turns out to be fake. It is just a name for a new successor to Ampere, plus a rumour of adopting an existing technology that has been used for a while.

Nothing spectacular if you ask me, except maybe it's the first adoption of 2.5D stacking for a GPU? Even if it doesn't adopt MCM, the new gen will be faster than Ampere anyway.
11-24-2019 09:03 PM
rluker5
Quote: Originally Posted by skupples View Post
wait, doing what made Andromeda enjoyable?
Went to Nexus Mods and got that one guy a haircut, and then he was a polite kind of British guy. Also got rid of that black smear on that chick so she was flaky-crazy and not angry-crazy, and switched the hair to change the other one from pseudo-butch to cranky mom. Just cosmetic stuff made most of the 'tude go away. I didn't change how it plays or anything. And used SLI on that one.

But Nexus Mods looks like it is down right now, so that would make it difficult.
11-24-2019 08:17 PM
m4fox90
Quote: Originally Posted by skupples View Post
wait, doing what made Andromeda enjoyable?
Closing the program and playing something else.
11-24-2019 07:40 PM
skupples
Quote: Originally Posted by rluker5 View Post
Even those of us that only use mods others have made would be missing out. Mass Effect Andromeda was a great example of this. You had a bunch of squadmates that were made to look like they were changing their appearance to express some nonsensical chips on their shoulders. A couple texture mods and all of a sudden they were pleasant and polite. Made the game much more enjoyable for me.
wait, doing what made Andromeda enjoyable?
11-24-2019 07:01 PM
blodflekk WCCF? Meh. I'll wait for something real from someone reliable. Besides I sunk way too much money into my 2080Ti after my 1080 died, no way will I be upgrading for a while.