

Registered · 854 Posts · Discussion Starter #1
 
Rep+ Reactions: darkage

Not a linux lobbyist · 2,303 Posts
There are more aspects to node size than transistor density. If the reduction in power from going to 7nm didn't at least keep pace with the reduction in area, then power density would increase by choosing TSMC over Samsung.
I've heard Nvidia's 8nm runs cool and AMD's 7nm runs hot. If there are enough transistors, I would personally prefer cooler silicon over a cooler room.
 
Rep+ Reactions: Intoxicus

Registered · 854 Posts · Discussion Starter #3
There are more aspects to node size than transistor density. If the reduction in power from going to 7nm didn't at least keep pace with the reduction in area, then power density would increase by choosing TSMC over Samsung.
I've heard Nvidia's 8nm runs cool and AMD's 7nm runs hot. If there are enough transistors, I would personally prefer cooler silicon over a cooler room.
Did you actually view AdoredTV's analysis or are you just firing from the hip?

Cliff notes version (posted as reply over at EVGA forum)

kevinc313 said:
vulcan1978 said:
I'll watch it later but do you have cliff notes please?
Basically, NV went with Samsung 8nm EUV over TSMC 7nm because of wafer cost (~$3000 per AdoredTV's estimation vs. $8000). But when you break down the yields with a 425mm2 chip (the 7nm equivalent of GA-102-200), the savings work out to roughly $75 vs. $47, or around $26 per good die. Having made this decision, NV has had to run the 8nm chips at much higher voltage and wattage (given the increased die size), spend $155 on the FE cooler to mitigate the heat (per Igor's Lab analysis), and forgo a Titan card this time around (on the larger node there is no room for a bigger, more powerful die than the 3090), so they lose the money they would have made selling Titan cards, and they will struggle to cool the mobile variants of this node in laptops next year. They also pass the cost on to the consumer in the form of much higher electricity costs.

Jim also points out that the 3090 is essentially a rebranded $1500 80 Ti (it's clearly not a Titan card, irrespective of their allusion to the Titan in their marketing), so yes, they are basically shafting everyone again with ridiculous prices.

Ultimately they didn't save anything and this was a really stupid decision.

The video is absolutely worth a watch; you can turn on closed captioning if you struggle with Jim's thick Irish accent.

This is 100% spot-on journalism; there is no error-prone speculating, it's very factual.

I tried to post this over at r/Nvidia and the mods took it down within 5 minutes.
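As an aside on the per-die numbers in the cliff notes above: here is a minimal sketch of how that kind of cost-per-good-die estimate is usually put together (gross dies per wafer from die area, then a simple Poisson yield model). The wafer prices are the ones quoted; the die sizes and defect density are my own placeholder assumptions, so the output is illustrative and won't reproduce the exact $75/$47 figures.

Code:

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Gross dies per wafer, using the common approximation that subtracts
    # an edge-loss term proportional to the die's linear dimension.
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    # Simple Poisson yield model: Y = exp(-D0 * A)
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

def cost_per_good_die(wafer_price_usd, die_area_mm2, defects_per_cm2):
    good_dies = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, defects_per_cm2)
    return wafer_price_usd / good_dies

# Placeholder inputs: a ~425 mm^2 "7nm GA102 equivalent" on an $8000 wafer vs.
# a full-size ~628 mm^2 GA102 on a $3000 wafer, both at a guessed 0.1 defects/cm^2.
print(f"TSMC 7nm estimate:    ${cost_per_good_die(8000, 425, 0.1):.0f} per good die")
print(f"Samsung 8nm estimate: ${cost_per_good_die(3000, 628, 0.1):.0f} per good die")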
 

Not a linux lobbyist · 2,303 Posts
My point still stands. And yes, I watched it earlier this morning.
Also, I prefer an actual launch to a paper launch, and we will soon see whether choosing Samsung helped in this regard.
 
Rep+ Reactions: Intoxicus

Never Finished · 2,298 Posts
Well, the wafer costs are still up in the air, as Jim is just estimating what they are, and he did say that he could be off by a couple thousand dollars. If Jim's numbers are accurate, I don't think wafer cost was the primary driving force behind the move to 8nm. But we'll probably get an update later on if some insiders tip him off after watching this video.
 

Newb to Overclock.net · 4,128 Posts
First to market is an economics thing, is it not?
 

Registered · 1,522 Posts
Did you actually view AdoredTV's analysis or are you just firing from the hip?

Cliff notes version (posted as reply over at EVGA forum)



Basically, NV went with Samsung 8nm EUV over TSMC 7nm because of wafer cost (~$3000 per AdoredTV's estimation vs. $8000). But when you break down the yields with a 425mm2 chip (the 7nm equivalent of GA-102-200), the savings work out to roughly $75 vs. $47, or around $26 per good die. Having made this decision, NV has had to run the 8nm chips at much higher voltage and wattage (given the increased die size), spend $155 on the FE cooler to mitigate the heat (per Igor's Lab analysis), and forgo a Titan card this time around (on the larger node there is no room for a bigger, more powerful die than the 3090), so they lose the money they would have made selling Titan cards, and they will struggle to cool the mobile variants of this node in laptops next year. They also pass the cost on to the consumer in the form of much higher electricity costs.

Jim also points out that the 3090 is essentially a rebranded $1500 80 Ti (it's clearly not a Titan card, irrespective of their allusion to the Titan in their marketing), so yes, they are basically shafting everyone again with ridiculous prices.

Ultimately they didn't save anything and this was a really stupid decision.

The video is absolutely worth a watch; you can turn on closed captioning if you struggle with Jim's thick Irish accent.

This is 100% spot-on journalism; there is no error-prone speculating, it's very factual.

I tried to post this over at r/Nvidia and the mods took it down within 5 minutes.
An alternative take:
TSMC lacks the capacity to produce the amount of GA102 dies Nvidia would need, since AMD's Zen 3, RDNA2, and console orders are significantly larger. So even if Nvidia's margin per card is lower, they still make more money because they are able to sell significantly more units. I also find it unlikely that Nvidia went to all that engineering effort as a "band-aid" for an excessively power-hungry GPU; it's more likely they made the engineering effort and found they could now use more power in their GPUs without additional noise.

As for voltage/power scaling, that's pure speculation. When we get our hands on the cards in a few days, we will be able to see whether these cards scale down in power. Judging by how Pascal and Turing scaled down, I doubt the RTX 3080 will struggle at a 200W power target. For reference, I could run my 2080 Ti Gaming OC at a 150W power target while losing roughly 20% performance compared to the stock 300W. Nvidia's big GPUs are just as power hungry as AMD's; the difference is that Nvidia has stricter power limits and a better boosting algorithm.
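To put a number on that power-scaling anecdote, here's a quick perf-per-watt check using the figures from this post as rough inputs (my own framing, not measured data):

Code:

# ~80% of stock performance at half the stock board power implies a large perf/W gain.
stock_perf, stock_power_w = 1.00, 300      # stock: 100% performance at ~300 W
limited_perf, limited_power_w = 0.80, 150  # 150 W power target: roughly 20% performance loss

ratio = (limited_perf / limited_power_w) / (stock_perf / stock_power_w)
print(f"perf/W at 150 W is about {ratio:.1f}x stock")  # -> about 1.6x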

The whole point of making bigger dies with more execution units is to clock the card lower and achieve better performance/watt than previously, there's no "minimum clock speed" needed to effectively utilize an amount of ALUs.

The video is abslutely not worth a watch, as it's a typical AdoredTV/MLID speculation video. Your time is better spent learning about or testing components at a more fundamental level.
 
Rep+ Reactions: Intoxicus

WaterCooler · 3,460 Posts
I tried to post this over at r/Nvidia and the mods took it down within 5 minutes.
Sounds about right.

I find the analysis refreshing considering a lot of the hype and praise for something we haven't actually seen yet, though I will concur that the numbers around wafer costs are speculative estimates at best. We don't know, so that does go both ways. We will need to actually get the 8nm 3080 and 7nm Big Navi tested and compared to see.

That said, last week I said I had Fermi vibes with Ampere all over again regarding the power draw/heat.

EDIT:

An alternative take:
TSMC lacks the capacity to produce the amount of GA102 dies Nvidia would need since AMD's Zen 3, RDNA2, and console orders are significantly larger.
I concur that this was likely a big factor, though it does tie into Jim's point about when Nvidia knew they would need to use Samsung 8nm while still saying they were using TSMC 7nm.

The video is abslutely not worth a watch, as it's a typical AdoredTV/MLID speculation video. Your time is better spent learning about or testing components at a more fundamental level.
Agree with your conclusion once we have the cards, but not sure I'd place AdoredTV and MLID on the same level. Jim at least seems to have some thoughtful analysis and at this point in the game before the cards actually come out, it is interesting. MLID just seems to pull crud out of his behind.
 
Rep+ Reactions: Mooncheese

Registered · 1,044 Posts
Eh, I watched it this afternoon already, and while I enjoy Jim's educated speculation, he is also wrong a lot (i.e. the 5GHz Ryzen chips and 5nm Samsung for RTX 3000).

He is also guessing on wafer cost, but nonetheless the major fact is that Samsung was a lot cheaper and had the capacity for Nvidia; there's no arguing that. Ignoring capacity is also foolish: why would it make sense for Nvidia to go TSMC 7nm when TSMC had no capacity?

I already pointed this out in the RTX 3000 thread, but these things are going to be hot and power hungry. Also, clocks are going to be a bit disappointing unless you want to throw power consumption and heat totally out the window. Cool for the 1% who do high-end water builds; for the rest of the consumers, not so great.

Also, I disagree with his mobile point; there's still potential for Nvidia to make a 3070-performance binned chip with reduced clocks to fit that need. The 3080 and 3090 are just being singled out because they're the "let it run wild" desktop variants where the performance target has to be met and power and heat are afterthoughts.
 

WaterCooler · 3,460 Posts
Eh, I watched it this afternoon already, and while I enjoy Jim's educated speculation, he is also wrong a lot (i.e. the 5GHz Ryzen chips and 5nm Samsung for RTX 3000).
Agreed. Though regarding the Samsung 5nm video, he had logical reasons to believe it, but like you said, at the end of the day, it was wrong.
 
Rep+ Reactions: Mooncheese

Registered · 363 Posts
Ampere 2.0 with a better process is the card to buy. These are just Fermi all over again.

My 1080 ti shall live on.
 

Registered · 1,522 Posts
Agree with your conclusion once we have the cards, but not sure I'd place AdoredTV and MLID on the same level. Jim at least seems to have some thoughtful analysis and at this point in the game before the cards actually come out, it is interesting. MLID just seems to pull crud out of his behind.
They are both earning their living by speculating. When you throw **** at the wall, some of it is going to stick. I still remember how AdoredTV "got something right about the Zen 2 clock speeds" because some of the boost clocks on the actual products matched his predictions. AdoredTV and MLID tell the story their viewers want to hear: Nvidia/Intel bad, AMD good.
 
Rep+ Reactions: Intoxicus

Registered · 1,345 Posts
They are both earning their living by speculating. When you throw **** at the wall, some of it is going to stick. I still remember how AdoredTV "got something right about the Zen 2 clock speeds" because some of the boost clocks on the actual products matched his predictions. AdoredTV and MLID tell the story their viewers want to hear: Nvidia/Intel bad, AMD good.
I've found that the like/dislike for Nvidia/Intel changes heavily based on the demographics of the community it's being discussed in. In lower-income countries around the world, you can't have a discussion about the performance value of Nvidia/Intel because their prices are higher.

People psychologically tend to dislike things they can't have because it makes it easier to handle the fact that they can't have it, by masking it with a perceived lack of desire to have it. So if their userbase is a bunch of younger people with no money, or people from 3rd world countries, they're obviously going to play to their base in order to get more likes/shares.

The question that remains at the end of the day, though, is with all the money they make from their channel, do they really end up using sub-par AMD GPUs in their own gaming builds? Or is it all talk?




EDIT: Watched a bit of the video. This guy isn't too bright. His argument about 8nm Samsung being bad is based on this logic: Nvidia, over 4 generations of cards, only had an increase of 60% in core count, and on Ampere they increased it by 2.5x, and they did so because they knew Samsung 8nm was trash and couldn't clock high. But that "logic" only works if you don't understand why Nvidia marketing is pushing 10,496 instead of 5,248, which would really be a ~20.5% core-count increase over Turing, with the "core doubling" being the result of an architectural change in int/fp32 core allocation.

So basically, at that point you should know to stop the video, because anyone who doesn't even understand the basics of Ampere's core doubling is in no position to explain the inefficiency of Samsung 8nm. I don't even need to get into the 28.3 billion vs. 18.6 billion transistors between the generations, the doubling of the RAM (for the 3090), the actually higher clock speeds compared to Turing (1635MHz FE 2080 Ti vs. 1700MHz FE 3090), the near doubling of RT/tensor core performance (which increases power usage when used but doesn't show up in core count or rasterization TFLOPS), or the comparative power usage when Ampere is matching frame rates with Turing cards.
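For what it's worth, the ~20.5% figure above checks out if you plug in the commonly cited counts (my assumptions here: 4,352 FP32 cores on the 2080 Ti, 10,496 marketed cores on the 3090):

Code:

turing_2080ti_cores = 4352                 # TU102 as shipped on the 2080 Ti
ampere_3090_marketed = 10496               # GA102 as marketed on the 3090
ampere_base = ampere_3090_marketed // 2    # before the fp32/int32 dual-path "doubling"

increase = (ampere_base / turing_2080ti_cores - 1) * 100
print(f"~{increase:.1f}% more cores than the 2080 Ti")  # ~20.6%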
 
Rep+ Reactions: Intoxicus

PC Evangelist · 46,711 Posts
He is very wrong about a lot of things. A Samsung N8 wafer is only about 30% cheaper than TSMC N7. Also, the density of Samsung N8 is higher than that of the Radeon VII, 5700 XT, and Xbox Series X. This ASIC cost estimation is completely off.
 
Rep+ Reactions: Intoxicus

Premium Member · 664 Posts
Watched the video. Not sure what to make of it.
 

Registered · 1,345 Posts
Wasn't there a falling out between Nvidia and TSMC, where TSMC doesn't like Nvidia as a customer anymore? So where is Nvidia supposed to go?
No, it was just bad negotiations. Nvidia thought it could play hardball and threaten to take their business elsewhere if TSMC didn't lower their prices more. TSMC said they already had enough demand, so Nvidia signed with Samsung. Nvidia later realized TSMC was superior to Samsung, but by that time it had already signed a deal with Samsung, and TSMC was already booked solid and couldn't supply what Nvidia needed even if Nvidia wanted to pay for it. So Nvidia paid a premium to TSMC for the smaller quantity needed for their A100 card, and everything else, including Quadro cards, goes to Samsung.

It was just a bad gamble by Nvidia in thinking the Samsung and TSMC nodes were similar in performance. But obviously there was no falling out, since the A100 is still made on TSMC 7nm.
 

Premium Member · 10,766 Posts
Wasn't there a falling out between Nvidia and TSMC, where TSMC doesn't like Nvidia as a customer anymore? So where is Nvidia supposed to go?
TSMC sets its price. Nvidia tried to play them against Samsung, TSMC didn't budge, and Nvidia got stuck out in the cold.

TSMC doesn't do business based on who they like; they do it with whoever pays their fees, and they have their capacity sold. So there is ZERO leverage for Nvidia to pull, and they are stuck with a substandard process.

And it looks like not only are they using a substandard process, they are not even using EUV but the older DUV (for the current 8nm), and that's why their cards are sucking power like a wild banshee.

@HyperMatrix U ninja'd me! :D
 

Registered · 854 Posts · Discussion Starter #20
An alternative take:
TSMC lacks the capacity to produce the amount of GA102 dies Nvidia would need, since AMD's Zen 3, RDNA2, and console orders are significantly larger. So even if Nvidia's margin per card is lower, they still make more money because they are able to sell significantly more units. I also find it unlikely that Nvidia went to all that engineering effort as a "band-aid" for an excessively power-hungry GPU; it's more likely they made the engineering effort and found they could now use more power in their GPUs without additional noise.

Good point about TSMC's queue being full.

As for voltage/power scaling, that's pure speculation. When we get our hands on the cards in a few days, we will be able to see whether these cards scale down in power. Judging by how Pascal and Turing scaled down, I doubt the RTX 3080 will struggle at a 200W power target. For reference, I could run my 2080 Ti Gaming OC at a 150W power target while losing roughly 20% performance compared to the stock 300W. Nvidia's big GPUs are just as power hungry as AMD's; the difference is that Nvidia has stricter power limits and a better boosting algorithm.

Technically, if the leaked 3DMark benches are anything to go by (not to be confused with OpenCL, which shows how good Ampere will be at mining, i.e. GA-102-300 being 40% faster on average, whereas it's only ~27% faster in rasterization and 45% faster in RT going by 3DMark), GA-102-300 at 60% less power draw will be 60% slower in both rasterization and RT, or about 35% slower in rasterization and 15% slower in RT, compared to TU-102 @ 260w.

I have to call bullshit on you running a 2080 Ti at only 150w while staying within 20% of a stock TU-102 @ 260w, unless you have a golden-sample binned chip that can do 2100 @ 0.91v (my XC2 requires 1.025v for 2040 MHz core at 41C load) and is similarly under water.

We are talking about an air-cooled 2080 Ti FE @ 75C load @ 280w vs. a 3080 @ 320w and ~71C load, not what TU-102 can do under LN2 at sub-zero.

That brings me to another point: it seems that Ampere's power delivery doesn't allow for more than 375w (FE and "reference"), with AIBs not offering much more power (the Asus Strix has a listed 400w limit as of this writing). Maybe some cards, i.e. the Kingpin and HOF, will be able to do upwards of 500W, but the vast majority of 3080s and 3090s will be limited to 375w unless shunt modded. So you have what, 350-375w = a 7% increase in overclocking headroom on air? Even if you bring the component temp down to ~40C, GA-102-200 and -300 will struggle to attain more than 2000 MHz, whereas a good TU-102 can do 2200 MHz at 40C (i.e. the Kingpin, Gigabyte Aorus Extreme WB). Overclocked TU-102 at the same power level (450w) shrinks the rasterization performance disparity from 27% to 17%, watt for watt, and that's because GA-102 is already at ~93% of its power limit at 320w and 350w (out of 370w and 375w; "reference" GA-102-300 is actually limited to 350w), whereas AIB and shunt-modded TU-102 is starting from only 260w against a much higher limit.

260w to 475w ≈ 83%
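Spelled out, using the wattages as given in this post:

Code:

# The two headroom percentages referenced above.
print(f"{(375 / 350 - 1) * 100:.0f}%")   # Ampere on air: 350 W -> 375 W, about 7%
print(f"{(475 / 260 - 1) * 100:.0f}%")   # AIB/shunt-modded Turing: 260 W -> 475 W, about 83%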

TU-102's frequency scales roughly linearly with TDP up to 2100 MHz or so (I can do 2100 MHz @ only 1.055v @ 42C, and my sample runs about 100 MHz lower than golden-sample binned TU-102s in the Kingpin at the same voltage), at which point increasing the voltage and power draw yields precipitously diminishing returns. (I gain 60 MHz going from 1.025v to 1.055v and my power draw goes up about 30w on average, meaning 2040-2055 MHz at 1.025v is the sweet spot for my card; beyond that, the frequency gained scales logarithmically with the wattage and voltage required.)

GA-102-200: 320w to 475w (presuming the non-reference AIBs will be allowed to go this high; from what we are seeing with the "reference" 3080 being limited to 350w, there is not a lot of hope for this, given that the AIBs still have not published or disclosed the technical details of their cards, outside of Asus, who stated the 3090 Strix will have a 400w limit).

TU-102 was insane in terms of power limits, or rather the lack thereof. It may be too early to tell, but if the 3090 Strix is limited to 400w, that isn't a good sign.

Whether or not Nvidia will continue to allow AIBs to have near-unlimited TDP:

Strix OC = 1kw
Lightning Z = 1kw
Kingpin = 2kw
Galax HOF = 2kw

is an open question. There's also the question of whether the "reference" BIOS will be compatible with the FE, as the board and power delivery are completely different. For example, the Kingpin TU-102 BIOS is not compatible with the reference PCB, and the reference 2080 Ti wasn't that dissimilar from the AIB cards. (I'm running the FTW3 BIOS on my reference PCB; the XC2 is a reference PCB with the iCX thermistors from the FTW3. Love this card, honestly.) Will you be able to flash the FTW3 BIOS to the FE this time around? Probably not, but that's just my guess. There is also the question of whether NV will make it difficult, if not impossible, to unlock BIOS control on the FE, or, who knows, on all the cards.


The whole point of making bigger dies with more execution units is to clock the card lower and achieve better performance/watt than previously, there's no "minimum clock speed" needed to effectively utilize an amount of ALUs.

Correct, but imagine how many more execution units would be possible on TSMC 7nm. If a 425mm2 TSMC 7nm die is the compute equivalent of a 692mm2 (?) Samsung 8nm die, imagine a 692mm2 TSMC 7nm die.
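As a rough illustration of that point, using the die areas from this paragraph (one of which is question-marked above) and assuming density simply holds at larger sizes:

Code:

samsung_8nm_die_mm2 = 692   # figure used above (marked uncertain in the post)
tsmc_7nm_equiv_mm2 = 425

extra_area = (samsung_8nm_die_mm2 / tsmc_7nm_equiv_mm2 - 1) * 100
print(f"~{extra_area:.0f}% more area to spend on execution units")  # ~63%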

The merit of your argument comes down to whether or not NV went with Samsung 8nm because, in order to cut the line at TSMC, who are at capacity with Zen 3 and RDNA2 orders (technically the next-gen consoles are RDNA2, same node and efficiency), they would have had to pay TSMC $8000 per wafer (or more), when Samsung will charge $3000 per wafer.

There is also the question of whether or not NV went with Samsung to get their products to market ahead of AMD, which should say something about what NV thinks of RDNA2. I hear the 6900 XT will sit in between a 3080 and 3090 rasterization-wise, also with 24GB of video memory, but at $1000. The question is whether or not AMD can compete with NV in terms of technologies. It's possible that they can (both next-gen consoles will be capable of RT, and it's the same architecture, possibly even more efficient than RT cores).

I think Jim's argument hinges on whether or not TSMC was at capacity. I've read that they are not actually at full capacity.


The video is abslutely not worth a watch, as it's a typical AdoredTV/MLID speculation video. Your time is better spent learning about or testing components at a more fundamental level.

Everyone is entitled to their own personal viewpoint but we should be more objective with our assessment of the situation and not allow personal bias to cloud our reasoning.

Also, you mispelled absolutely.
 