

·
Registered
Joined
·
619 Posts
Discussion Starter #1
Nvidia made a bunch of announcements at GTC 2018. I really want the Quadro GV100, but I'm sure it is at least $10K. Oh yeah, I also want one of those $399K DGX-2 systems lol.

Nvidia CEO Jensen Huang made his keynote presentation here at GTC 2018 in San Jose. Huang announced the company's new DGX-2 system, which he calls the "World's Largest GPU." He also unveiled a new Quadro GV100 GPU, a 32GB V100 GPU, and Nvidia's new NVSwitch technology, which allows up to 16 GPUs to work in tandem. Huang also outlined advances in several key areas, such as deep learning and autonomous driving.
http://www.tomshardware.com/news/nvidia-gtc-2018-v100-nvswitch,36748.html
 

·
Premium Member
Joined
·
6,675 Posts
The wording is a little off. He announced the Quadro GV100, which consists of two V100 GPUs with a combined 32GB of memory. They communicate over the new NVLink2, which combines them so that the host computer sees a single large GPU rather than two smaller GPUs in a sort of SLI mode. The DGX-2 system is also not what he called the world's largest GPU; that is simply the new system itself. The world's largest GPU is the new Quadro GV100, which consists of the two V100 dies.
 

·
Registered
Joined
·
619 Posts
Discussion Starter #6
I think you might be mistaken. The article has a picture that calls the DGX-2 the world's largest GPU, not the GV100.
 

·
Performance is the bible
Joined
·
7,134 Posts
$400,000, and it still can't play Crysis.
But it most likely could render the whole Crysis world in real time at 60fps at max settings.


Also note that, as someone else pointed out elsewhere, a rack full of DGX-2s would place around 10th on the supercomputer list.... Just imagine a room full of them.
 

·
Premium Member
Joined
·
6,675 Posts
I think you might be mistaken. The article has a picture that calls the DGX-2 the world's largest GPU, not the GV100.

This:

[picture of the DGX-2 system]

is not a GPU; that is a whole system, with a lot of GPUs inside it. If the article says that is a GPU, then the article is wrong, as I said before. Having watched the keynote and knowing what was said, the wording of the article is quite off.






This:

[picture of the Quadro GV100 card]

is the actual GPU, and it clearly says it is called the Quadro GV100, not a DGX-2.
 

·
Still here...
Joined
·
3,296 Posts
...but will Watercool make a water block for it, that's the question :)
 

·
Registered
Joined
·
619 Posts
Discussion Starter #11
You are looking at the wrong graphic in the article, apparently. It is the picture with the words "THE WORLD'S LARGEST GPU" with Jensen standing in front of it. Hard to miss.
 

·
Premium Member
Joined
·
6,675 Posts
You are looking at the wrong graphic in the article, apparently. It is the picture with the words "THE WORLD'S LARGEST GPU" with Jensen standing in front of it. Hard to miss.
I suppose we are both right, and it just depends which way you look at it.

The system features a total of 512GB of HBM2 memory that delivers up to 14.4TB/s of throughput. It has a total of 81,920 CUDA cores.
The GV100 is the world's largest single GPU: it consists of two V100 dies for 10k+ CUDA cores in a single GPU. The DGX-2 is the world's largest GPU system, with a total of 16 of those GV100 cards aggregated together. And when Jensen was on stage announcing the Quadro GV100, he picked it up and said, "The world's largest GPU, we painted it gold because it is special." But since the DGX-2 aggregates them all together just like the single GPUs do, it too could *technically* be considered one gigantic GPU.
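For reference, here is a rough back-of-the-envelope check of how those DGX-2 figures fall out of the per-GPU V100 32GB numbers (a sketch; the ~900GB/s per-GPU memory bandwidth is the published V100 spec, everything else is just multiplication):

Code:
# Back-of-the-envelope check of the DGX-2 aggregate figures
# from the per-GPU Tesla V100 32GB specs.
gpus            = 16
hbm2_per_gpu_gb = 32      # GB of HBM2 per V100
hbm2_bw_tb_s    = 0.9     # ~900 GB/s memory bandwidth per V100
cores_per_gpu   = 5120    # CUDA cores per V100

print(gpus * hbm2_per_gpu_gb, "GB HBM2")        # 512 GB
print(round(gpus * hbm2_bw_tb_s, 1), "TB/s")    # 14.4 TB/s
print(gpus * cores_per_gpu, "CUDA cores")       # 81920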
 

·
Smug, Jaded, Enervated.
Joined
·
1,278 Posts
What I want to know is when we will start seeing mainstream products making use of NVLink2. I, and I'm sure many others, want to run an mGPU setup that registers as only one GPU in the system, and I have been quietly hyped about the concept for a while.

AMD needs to hurry up with Navi so we can see some Infinity Fabric based mGPU setups, because I imagine widespread adoption of NVLink2 on enthusiast motherboards won't be happening anytime soon.
 

·
Performance is the bible
Joined
·
7,134 Posts
Or you can just get a Titan V....
https://www.amazon.com/NVIDIA-TITAN-VOLTA-12GB-VIDEO/dp/B078G1VHYN

And save a few grand on a cut-down version....
There are some drawbacks to using a Titan: software support, features only available in the Tesla or Quadro drivers, Nvidia enterprise support, etc.
Also, the DGX-2 is a whole system, fully NVLink2 integrated, which the Titan V doesn't have.


What I want to know is when we will start seeing mainstream products making use of NVLink2. I, and I'm sure many others, want to run an mGPU setup that registers as only one GPU in the system, and I have been quietly hyped about the concept for a while.

AMD needs to hurry up with Navi so we can see some Infinity Fabric based mGPU setups, because I imagine widespread adoption of NVLink2 on enthusiast motherboards won't be happening anytime soon.
I doubt we will ever see enthusiast motherboards with NVLink2.
It requires both CPU and GPU support, and it is not really compatible with PCIe, so it is not as if you can swap in AMD hardware in its place. And I think both Nvidia and Intel will want to keep it enterprise-only for now, as it will be a selling point over AMD, for example.
 

·
Registered
Joined
·
312 Posts
NVLink2 treating two GPUs like one large single GPU is by far the biggest news for consumers.

$600 for universally applicable SLI with nearly 100% perfect scaling is something I and many enthusiasts would gladly pay!
This. We may see an Nvidia product with multiple chips on a single PCB acting as one before Navi arrives. The next x80 Ti gaming card might be two x80 chips (Gx104s) connected with NVLink to present itself as a single big GPU, instead of an actual big GPU, which would remain reserved for Teslas, Quadros and maybe Titans.
 

·
Premium Member
Joined
·
6,675 Posts
What I want to know is when we will start seeing mainstream products making use of NVLink2. I, and I'm sure many others, want to run an mGPU setup that registers as only one GPU in the system, and I have been quietly hyped about the concept for a while.

AMD needs to hurry up with Navi so we can see some Infinity Fabric based mGPU setups, because I imagine widespread adoption of NVLink2 on enthusiast motherboards won't be happening anytime soon.

That is true, we will not be seeing NVLink spreading to consumer platforms as a connectivity solution. It would have to replace the PCIe bus to the card and be integrated into the CPU. However, it is possible we will see internally connected NVLink2, the same as the GV100 uses, in a consumer card. This is the same concept AMD has been touting with Navi for a year or two now, only with Nvidia's resources they got it done faster.


Nvidia has been getting a lot of experience with distributed-computing supercomputers through their deep learning and AI work, using things like InfiniBand to transfer GPU memory data between nodes. The DGX-2 even comes with eight 100Gb InfiniBand ports for exactly this type of purpose. Given what InfiniBand is and is meant for, sharing resources remotely, doing direct memory transfers between compute-node hardware, keeping things coherent between nodes, and allowing the nodes' aggregate power to be used by a single host application, I suspect the new NVLink2 capability of combining multiple GPU dies and treating them as one larger aggregate die is somewhat based upon the InfiniBand protocols they already use within their ecosystem. It would make sense, as the groundwork is already there; they just needed to figure out a way to run it over their proprietary NVLink interface and protocol.
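For a sense of scale, here is a rough conversion of those InfiniBand ports into GB/s (a sketch at nominal line rate, ignoring encoding and protocol overhead):

Code:
# Rough aggregate throughput of the DGX-2's eight 100Gb InfiniBand ports,
# nominal line rate, ignoring encoding/protocol overhead.
ports         = 8
gbit_per_port = 100                    # 100 Gb/s per port

total_gbit  = ports * gbit_per_port    # 800 Gb/s
total_gbyte = total_gbit / 8           # ~100 GB/s aggregate
print(total_gbit, "Gb/s total, ~", total_gbyte, "GB/s")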

I also suspect that Nvidia's solution will perform better than AMD's. The reason is that Infinity Fabric is overall slower than NVLink2. IF *can* be fast, but it requires a huge number of bus lanes to get there. Using a massive 128 lanes of Infinity Fabric, as Epyc does, gives you 43GB/s of bandwidth between dies. That sounds like a lot, but it really isn't here: the memory bandwidth of a GPU is way higher than that, so I believe a 43GB/s link will bottleneck memory transfers and coherency between the cores on both dies. NVLink2, in comparison, is 50GB/s, still much slower than the memory bandwidth within a GPU, but that extra 7GB/s will help when these bus links are the main bottleneck. NVLink also looks to take up 2-3x less die space than a 128-lane Infinity Fabric does, leaving more room for performance on Nvidia's side.

EDIT: sorry, just realized NVLink2 is 50GB/s per link and the GV100 card uses 6 links, so the total is 300GB/s. That figure is both directions combined, though, so only 150GB/s each way, which is still really fast and way faster than Infinity Fabric :drool::drool: The IF speed was listed per direction, so the comparison is 43GB/s for AMD versus 150GB/s for Nvidia.
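Putting those corrected numbers side by side (a sketch using the per-direction figures above; the 43GB/s Epyc die-to-die value is the one quoted earlier in this post):

Code:
# Per-direction interconnect bandwidth, using the figures quoted above.
nvlink2_per_link_gbs = 50          # GB/s per link, both directions combined
links_on_gv100       = 6

nvlink2_each_way         = nvlink2_per_link_gbs * links_on_gv100 / 2  # 150 GB/s
infinity_fabric_each_way = 43                                         # GB/s die-to-die on Epyc

print("NVLink2:", nvlink2_each_way, "GB/s each way")
print("Infinity Fabric:", infinity_fabric_each_way, "GB/s each way")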
 