

Registered · 477 Posts · Discussion Starter · #1
All of the cards have 8-10 GB, except the 3090, which has a ridiculous 24 GB, and the 3060, which is the weakest card yet has the second-most VRAM at 12 GB?

Did someone at Nvidia just draw numbers randomly out of a hat to determine the amount of VRAM each card gets?
 

Iconoclast · 31,018 Posts
GDDR6X currently only comes in one size and you can have one or two ICs per channel. Number of populated channels also determines the memory bus width.

If you want a 320-bit bus, you need to populate ten channels. So you can have either 10 or 20 GiB of memory. If you want the full 384-bit bus available on GA102, you can have either 12 or 24GiB of memory, nothing in between.
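A quick sketch of that arithmetic, assuming the 32-bit channel width and the 8Gb (1 GiB) IC density mentioned in this thread; everything else is just multiplication:

Code:
# Possible VRAM capacities for a given bus width, assuming 32-bit
# channels and one or two 8Gb (1 GiB) GDDR6X ICs per channel.
CHANNEL_BITS = 32   # width of one memory channel
IC_GIB = 1          # current GDDR6X ICs: 8Gb = 1 GiB each

def capacity_options(bus_bits):
    channels = bus_bits // CHANNEL_BITS
    return channels * IC_GIB, channels * 2 * IC_GIB  # one or two ICs/channel

for bus in (320, 384):
    low, high = capacity_options(bus)
    print(f"{bus}-bit bus: {low} GiB or {high} GiB, nothing in between")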

The options are dictated by the realities of the hardware available; the rest is down to market segmentation. During product development NVIDIA evidently didn't think a 12GiB halo product (the 3090) would have been very marketable in light of 16GiB mainstream ones, but could not justify the cost of unnecessarily large amounts of expensive GDDR6X on an ostensibly $700 card (the 3080).

When it comes to the 3080 Ti, 12GiB is the most sensible option for them. 12GiB is enough not to be capacity-constrained in any practical sense, and gives the part an advantage over the 3080 in bandwidth. They could have done 11GiB with a 352-bit bus, but that would have seemed pretty stingy relative to the 3080, while 20 or 22GiB would significantly inflate the BoM.
 

Performance is the bible · 7,232 Posts
GPU memory chips come in 1GB or 2GB capacities (they used to come in 0.5GB, but that is long dead).

Nvidia use 32-bit memory controllers, each one controlling a single memory chip.
The 3060, for example, has 6 memory controllers, for a 192-bit memory bus. That gives them two options: use 1GB memory chips (for 6GB of memory) or 2GB memory chips (for 12GB of memory). 6GB would be too low, so they went with 12GB.
The 3080 has 10 memory controllers, so it would be either 10GB or 20GB. 20GB would be too much, so they went with 10GB.
The 3090 has 12 memory controllers, so it would be either 12GB or 24GB. 12GB would be too little, so they went with 24GB.
The reason the 3080 has 10 and not 11 like the 1080 Ti, I expect, comes down to price (one less chip per card saves a lot) and manufacturing (how many dies come out of each wafer flawless with 12/11/10 working controllers).

AMD are in a "similar" situation.
They use 64-bit memory controllers, and each one controls two memory chips (instead of one).
The 6800 XT/6900 XT both have 4 memory controllers, which can control 8 memory chips, so they can go either 8GB or 16GB. If they wanted to match Nvidia for less, they would have to use 3 memory controllers, which would give them the option of 6GB or 12GB, but they would also have a lot less memory bandwidth, which would hinder the cards.

This is also why, for example, the 1080 Ti has 11GB: it has 11 memory controllers, so it would be either 11GB or 22GB of memory, etc.

So this is also where architecture comes in. You can't have too few memory controllers and just use 2GB memory chips: with too many cores you will starve them and limit your bandwidth. You also can't have too many, as they might not be utilized to their fullest, and you might end up using too many big memory chips, so the card will just cost way too much, especially as memory chip prices keep going up lately.

There is a lot of fine-tuning between bandwidth, cost, and performance with each architecture, which determines the number of memory controllers, their width, the size of the memory chips, and what is right and wrong for each card.
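Putting that walkthrough in one place, a minimal sketch (controller counts and widths as given above; the 1GB/2GB chip densities are the only other inputs):

Code:
# Each card's bus width and its two capacity options, from the
# controller counts above. Chips come in 1GB or 2GB densities.
CARDS = {
    # name: (controllers, bits per controller, chips per controller)
    "RTX 3060":    (6,  32, 1),
    "RTX 3080":    (10, 32, 1),
    "RTX 3090":    (12, 32, 1),
    "GTX 1080 Ti": (11, 32, 1),
    "RX 6900 XT":  (4,  64, 2),   # AMD: 64-bit controllers, 2 chips each
}

for name, (n, bits, chips) in CARDS.items():
    total_chips = n * chips
    print(f"{name}: {n * bits}-bit bus, {total_chips}GB or {total_chips * 2}GB")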
 

More Cores! · 1,299 Posts
As others have said, the TL;DR is that VRAM pool size is limited by VRAM module capacity and memory bus width. Some things look better on paper (the 12GB 3060), until you realize that trying to fill that RAM is a lot like filling a syringe through a 34-gauge needle. Ultimately, although I am solidly in the camp of "more VRAM is more better," whether or not you "need" the VRAM depends on your use case.
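To put rough numbers on the syringe analogy: a lower bound on the time to write the whole pool once is capacity divided by peak bandwidth. A sketch using the cards' published per-pin rates (15 Gbps GDDR6 on the 3060, 19 Gbps GDDR6X on the 3080); the "fill time" framing is mine:

Code:
# Lower bound on time to write every byte of VRAM once:
# capacity / peak bandwidth. Peak GB/s = bus_bits * Gbps_per_pin / 8.
def peak_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

cards = {
    "RTX 3060, 12 GB": (12, peak_gbs(192, 15)),   # ~360 GB/s
    "RTX 3080, 10 GB": (10, peak_gbs(320, 19)),   # ~760 GB/s
}
for name, (gb, bw) in cards.items():
    print(f"{name}: {bw:.0f} GB/s, ~{1000 * gb / bw:.0f} ms to fill")
# The 3060 needs ~33 ms to the 3080's ~13 ms: bigger syringe, thinner needle.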
 

Registered · 4,592 Posts
How much does having 24 GiB of VRAM over 12 GiB contribute to increased TDP of the PCB and GPU itself?
 

Top kek · 3,603 Posts
GPU memory chips come in 1GB or 2GB capacities (they used to come in 0.5GB, but that is long dead).
Nah, 0.5GB per chip is not long dead for sure.

The Polaris cards use a 256-bit interface with 4/8GB of VRAM.
That makes it 0.5/1GB per chip.
The Nvidia 3GB models are the same.
 

Iconoclast · 31,018 Posts
How much does having 24 GiB of VRAM over 12 GiB contribute to increased TDP of the PCB and GPU itself?
The memory on my 3080 will consume about 50-60 W in memory-intensive tasks, and I've seen 3090s pull more than 100 W on just their GDDR6X while mining (which is as much as the GPU itself on a fully tuned setup).

So, that extra 12GiB costs a lot of power, potentially.
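For a rough sense of scale, dividing those observations out (these are the figures from this post, not datasheet numbers):

Code:
# Per-GiB memory power implied by the observations above.
observed = {
    "RTX 3080 (memory-heavy load)": (10, 55.0),   # GiB, ~50-60 W
    "RTX 3090 (mining)":            (24, 100.0),  # GiB, 100+ W
}
for name, (gib, watts) in observed.items():
    print(f"{name}: ~{watts / gib:.1f} W per GiB")
# Both land around 4-6 W/GiB, so more capacity at the same data
# rate means proportionally more board power to feed and cool.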
 

Registered · 2,552 Posts
In the beginning, Leather Jacket Man created 2 demigods to rule the RTX kingdom, 3090 and 3080. With their equal 17 GB GDDR6X, they were supposed to share lordship over the kingdom. But 3080, being the firstborn and the flagship, received the greater share of love and attention from Leather Jacket Man and the RTX nation. 3090 grew jealous of 3080 and his jealousy soon turned to hatred. A great battle ensued when 3090 attacked 3080 and stole 7GB of his GDDR6X. For their quarrel, Leather Jacket Man punished 3080 and 3090 by turning them into consumer graphics cards. Yet even though he created 3080 and 3090, Leather Jacket Man could not give 3080 back his VRAM, for even for the greatest there are some deeds we may accomplish but once only. The 19.5 Gb/s modules were hot and heavy on the back of the 3090, where he was forced to carry them. Yet he would not remove the modules, even though they were a great burden. And thus it came to pass that all 3080 GPUs would have 10 GB VRAM and all 3090s would have 24 GB VRAM.
 

Robotic Chemist · 3,604 Posts
12GB of GDDR6X would likely have been better for everything I will ever do with my 3090; I am more power/heat limited than memory-capacity limited. The 3080 Ti is looking like a well-balanced option, and should make the 3090 obsolete (not better than the 3090, but few should get the 3090 instead of it).

Too bad 24GB > 12GB in everyone's mind (including mine). :oops:

Nah, 0.5GB per chip is not long dead for sure.

The Polaris cards use a 256-bit interface with 4/8GB of VRAM.
That makes it 0.5/1GB per chip.
The Nvidia 3GB models are the same.
Wait, are you referencing long dead GPUs to dispute the claim that 0.5GB per chip is long dead? Aren't you proving the point rather than refuting it? ;)

Long dead, at least in the world of GPUs. :p
 

Performance is the bible · 7,232 Posts
How much does having 24 GiB of VRAM over 12 GiB contribute to increased TDP of the PCB and GPU itself?
According to Micron, GDDR6X runs at 1.35 V whether you use 1GB or 2GB modules.
So TDP will not be affected by a direct swap. What does affect it is the data rate: at a higher data rate the modules consume more power, and at a lower data rate they consume less (think overclock/underclock).

But, the cost of modules is different.
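As a first-order model of that data-rate dependence: with voltage pinned at 1.35 V, dynamic power scales roughly linearly with clock (the usual P ≈ C·V²·f approximation; the 2.5 W baseline below is a made-up illustrative figure):

Code:
# First-order CMOS model: P ~ C * V^2 * f. Voltage is fixed at 1.35 V
# for both 1GB and 2GB GDDR6X modules, so power tracks the data rate.
def scaled_power(base_watts, base_gbps, new_gbps):
    return base_watts * (new_gbps / base_gbps)  # same C and V assumed

# Hypothetical module drawing 2.5 W at 19 Gbps, underclocked to 16 Gbps:
print(f"{scaled_power(2.5, 19.0, 16.0):.2f} W")  # ~2.11 W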
 

Iconoclast · 31,018 Posts
According to Micron, GDDR6X runs at 1.35 V whether you use 1GB or 2GB modules.
So TDP will not be affected by a direct swap.
A denser module pulls more current, other things being equal. So there will be a notable difference in power consumption/dissipation between the two parts, even at the same voltage...there are nearly twice as many transistors in a 16Gb IC as there are in an 8Gb one, after all.

NVIDIA is still using the 8Gb/1GiB parts because production hasn't ramped up on the newer, denser, ICs yet. They aren't even listed in Micron's product catalog: GDDR6X Part Catalog
 

Performance is the bible · 7,232 Posts
A denser module pulls more current, other things being equal. So there will be a notable difference in power consumption/dissipation between the two parts, even at the same voltage...there are nearly twice as many transistors in a 16Gb IC as there are in an 8Gb one, after all.

NVIDIA is still using the 8Gb/1GiB parts because production hasn't ramped up on the newer, denser, ICs yet. They aren't even listed in Micron's product catalog: GDDR6X Part Catalog
They do list 16Gb (2GB) parts for GDDR6. So either they've run out of them for GDDR6X / don't produce them right now because of GDDR6 demand, or they only manufacture them for Nvidia right now, so those aren't available in the catalog. AMD use GDDR6 for their 2GB modules on their top cards. I think right now only Nvidia use 2GB GDDR6X modules, on the 3090. Micron are the only ones who make GDDR6X.
 

Iconoclast · 31,018 Posts
They do list 16Gb (2GB) parts for GDDR6. So either they've run out of them for GDDR6X / don't produce them right now because of GDDR6 demand, or they only manufacture them for Nvidia right now, so those aren't available in the catalog. AMD use GDDR6 for their 2GB modules on their top cards. I think right now only Nvidia use 2GB GDDR6X modules, on the 3090. Micron are the only ones who make GDDR6X.
The 3090 has twenty-four 8Gb (1GiB) ICs...half of them are on the back of the card. I don't think Micron is selling any 16Gb/2GiB parts to anyone right now...probably still too early.
 

Registered · 4,592 Posts
How difficult would it be for AMD to transition from GDDR6 to GDDR6X parts for their newest GPUs? Is all GDDR6X production going to Nvidia?
 

Iconoclast · 31,018 Posts
How difficult would it be for AMD to transition from GDDR6 to GDDR6X parts for their newest GPUs? Is all GDDR6X production going to Nvidia?
GDDR6X was essentially developed for NVIDIA and Micron is the only supplier. AMD would need to update their memory controllers/PHYs (requiring a redesign) to take advantage of the PAM4 signaling GDDR6X uses. It would also require new board designs to move the memory ICs closer to the GPU and possibly more PCB layers to maintain signal integrity. On top of that it would require securing a supply of GDDR6X from Micron, who are already tapped out supplying NVIDIA, who have first dibs. All this to go from 16Gbps per pin to 21Gbps max.

It would probably be cheaper to widen the bus or use HBM2(E) if they desperately needed more memory bandwidth, which they don't...at least not on RDNA2. It would help, especially the 6900 XT and XTX, but there is no way the cost could be justified, even if it were just a drop-in replacement, which it's not.
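For scale, a quick comparison of the options, using the per-pin rates from this post (the 384-bit configuration is hypothetical):

Code:
# Peak bandwidth of the alternatives discussed above.
def gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8

print(f"256-bit GDDR6  @ 16 Gbps: {gbs(256, 16):.0f} GB/s")  # RDNA2 today
print(f"256-bit GDDR6X @ 21 Gbps: {gbs(256, 21):.0f} GB/s")  # needs redesign
print(f"384-bit GDDR6  @ 16 Gbps: {gbs(384, 16):.0f} GB/s")  # wider bus instead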

I expect to see a new memory standard and quite possibly a wider bus for RDNA3, but I highly doubt we'll see any RDNA2 product released with more than 256-bit 16Gbps GDDR6.
 