
21 - 40 of 96 Posts

Vermin Supreme 2020 · 25,770 Posts
My first thought when I saw it was on Samsung 8 was "welp, maybe I'll wait for the TSMC 7 refresh then."

But lil ol' me doesn't actually know how much difference can be baked into a 1nm shift and a slightly different process.

NV will completely re-release Ampere as 2.0 with TSMC 7nm++++++++ once TSMC is able to get up to speed. I mean hell, aren't they building a new fab in the USA? And I recall Jensen being stoked about the news / chipping in.

In the end though, I'll likely still grab a 3090. I don't mind Fermi 3.0 as long as it blows the 2080 Ti out of the water... especially since AMD appears to have zero heat.
 

Overclocker · 11,339 Posts
@HyperMatrix @Heuristic, that's what I thought. NV trying and trying but failing, because they were doomed to from the start. You have no leverage over a manufacturer that's constantly sold out when you have almost nowhere else to go to get your goods made. Would have been funnier if both TSMC and Samsung were like, nah, nope, go away :D

Also, the names these days (8nm, 7nm, 10nm) aren't standardized; every foundry defines them however it wants. They're marketing names, and the actual physical characteristics can be similar even when the numbers in the names are not. Intel vs Samsung vs TSMC vs GF... the names don't really matter that much anymore. What is more interesting is whether it's EUV or not.
 

Taking it Easy · 4,063 Posts
This is pretty on par with most of what people have been saying about Ampere. The yields definitely aren't going to be as good on Samsung as on TSMC, and there could very well be a real shortage of cards at release, pumping up the cost of the AIB cards until supply evens out. The power consumption is also insane; all of this information is already out there. He might not be accurate about a lot of these numbers, but these things will be crazy power hungry, and good luck getting your hands on a Founders Edition.
 

Vandelay Industries · 1,924 Posts
People psychologically tend to dislike things they can't have because it makes it easier to handle the fact that they can't have it, by masking it with a perceived lack of desire to have it. So if their userbase is a bunch of younger people with no money, or people from 3rd world countries, they're obviously going to play to their base in order to get more likes/shares.
I'm fine with everything else you said except this, because it's an over-generalization, specifically when applied to computer hardware brands.
 

Premium Member · 10,765 Posts
@HyperMatrix @Heuristic, that's what I thought. NV trying and trying but failing, because they were doomed to from the start. You have no leverage over a manufacturer that's constantly sold out when you have almost nowhere else to go to get your goods made. Would have been funnier if both TSMC and Samsung were like, nah, nope, go away :D

Also, the names these days (8nm, 7nm, 10nm) aren't standardized; every foundry defines them however it wants. They're marketing names, and the actual physical characteristics can be similar even when the numbers in the names are not. Intel vs Samsung vs TSMC vs GF... the names don't really matter that much anymore. What is more interesting is whether it's EUV or not.
You should watch this when you get a chance, it's damn good.

 

Registered · 1,332 Posts
I think people are missing 2 key points here. I'll start off by saying that I'm a big fan of TSMC and hate everything about Samsung except for their superior TVs and awesome SSDs; I too wish these chips were made by TSMC. That having been said...

1) People bring up transistor density and how many more cores could have been added if using TSMC's 7nm EUV process. There would likely have been 0 additional cores if using TSMC. The 3080 die is 628mm2. That is very large for a node after a shrink. Want to see Nvidia's history of die size for the xx80 class of cards? Here:

680 = 320mm2
780 (same die as 780ti and titan) = 561mm2
980 = 398mm2
1080 = 314mm2
2080 = 545mm2
3080 = 628mm2

Nvidia has never been shy about using smaller dies when there was no competitive reason to go bigger. As rumors/leaks have stated so far, AMD's Big Navi is likely similar to or slightly weaker than a 3080. That's all Nvidia needed to match. They would have no need to put out anything more powerful if they're already beating their competition easily.

2) Other people bring up power usage and how, even if the GPUs didn't have more cores, they would have been on a smaller die and used less power. This is true. However, the power savings are smaller than people think: power usage doesn't drop linearly as nodes shrink. Also compare power usage against transistor count. If you account for the increased clock on the 3090 vs. the 2080 Ti, and the increase in transistor count, you're looking at roughly a 58.5% increase in power consumption if we were still on the same 12nm node as Turing. So a 295W 2080 Ti with a 58.5% increase would put you up around 467W. Now account for the node shrink from 12nm to 8nm. That shrink is never going to scale linearly; you won't get a 33% reduction in power usage. In a perfect world, if that were the case, we'd see a ~311W TDP for the 3090. But we don't. We see about a 25% drop going from TSMC 12nm to Samsung 8nm. That's still respectable.
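For anyone who wants to sanity-check that math, here's the same estimate as a quick sketch (the 58.5% same-node scaling factor and the 33%/25% reductions are my assumptions from above, not measured data):

```python
# Back-of-the-envelope version of the power-scaling argument above.
# All inputs are the assumed figures from this post, not measurements.

turing_2080ti_tdp = 295        # W, 2080 Ti reference TDP
same_node_scaling = 1.585      # assumed +58.5% from extra transistors/clocks on 12nm

hypothetical_12nm_3090 = turing_2080ti_tdp * same_node_scaling
print(f"Hypothetical 3090 on 12nm: {hypothetical_12nm_3090:.0f} W")    # ~467 W

# A naive, perfectly linear 12nm -> 8nm shrink would cut power by ~33%:
ideal_linear_shrink = hypothetical_12nm_3090 * (1 - 0.33)
print(f"Ideal linear-shrink TDP:   {ideal_linear_shrink:.0f} W")        # ~311-313 W

# What we actually got is a 350 W part, i.e. roughly a 25% reduction:
actual_3090_tdp = 350
reduction = 1 - actual_3090_tdp / hypothetical_12nm_3090
print(f"Actual reduction vs 12nm:  {reduction:.0%}")                    # ~25%
```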

3) People mention EUV. We're not sure it would even have been viable. Even the A100, made on TSMC's 7nm, doesn't use EUV, despite cost being a relatively minor issue for a $10,000 card. That card uses only 250W on a bigger die, but it's also clocked much lower, at just 1.4GHz. So we have no idea what the trade-off between power efficiency and clock speed is on TSMC 7nm vs. Samsung 8nm, let alone whether EUV would have been a viable option in terms of performance. And even if all of that were the case, you'd be stuck choosing between less powerful cards at the same price with lower power usage, or more expensive cards with the same performance and lower power usage.

So to recap:

Is TSMC 7nm better than Samsung 8nm? Yes. Absolutely.
Would it have made more than a 10-20% difference in power usage? No
Would it have made the cards more expensive? Yes
Would it have resulted in higher performance? Most likely not. We may know more once RDNA2 releases

At the end of the day... please relax. Samsung is a con rather than a pro, but it's not as huge a deal as some are making it out to be.
 

Registered · 386 Posts
I've been saying for well over a year that I didn't believe NVidia would get any 7nm fab capacity from TSMC for their next-gen mainstream consumer GPUs.

NVidia cannot compete with AMD's raw purchasing power for fab capacity at TSMC, and logically TSMC will prioritise its largest customers; in this case AMD > NVidia.

AMD 7nm:
PS5
Xbox
Ryzen Mobile
Ryzen desktop
Radeon GPUs
Threadripper
Epyc

AMD effectively has a stranglehold on 7nm capacity at TSMC. NVidia would be ordering nowhere near the volume that AMD is, nor would they be willing to make the same long-term contractual commitments on volume.

The only reason NVidia went with Samsung and their clearly inferior, subpar 8nm node is that they couldn't get capacity at TSMC.
 

Registered · 387 Posts
Other people have covered it thoroughly, but I find this hard to take seriously when the video never really addresses TSMC's limited, heavily booked 7nm fab capacity. I can only imagine the 7nm EUV costs too; I'd say there's a reason AMD, for example, is sticking with 7nm DUV. There are so many factors being overlooked, though I know how difficult it is to consider them all. From everything so far, especially regarding Samsung's older "8nm" node, the capacity is there, and that may matter a lot to Nvidia: more chips, and thus more cards to sell, rather than the limited allocation they would ever have gotten from TSMC, which has been booked up for a long time by the many vendors wanting to produce their silicon there.

Just my own quiet, personal view on how arbitrary it seems to call this Nvidia's "biggest mistake" without access to all the information Nvidia, as a partner, had before signing any deal. It's hard to imagine Nvidia sees it as a mistake in their eyes if they manage to sell these cards at an increased TDP (or rather TBP) over AMD, provided the performance is there. People will buy them regardless of the power consumption and the heat dissipated into their room if the performance is there, in my opinion.
 

Registered · 829 Posts · Discussion Starter #29
I've found that the like/dislike of Nvidia/Intel changes heavily based on the demographics of the community it's being discussed in. In lower-income countries around the world, you can't have a discussion about the performance value of Nvidia/Intel products because their prices are higher.

People psychologically tend to dislike things they can't have because it makes it easier to handle the fact that they can't have it, by masking it with a perceived lack of desire to have it. So if their userbase is a bunch of younger people with no money, or people from 3rd world countries, they're obviously going to play to their base in order to get more likes/shares.

The question that remains at the end of the day, though, is with all the money they make from their channel, do they really end up using sub-par AMD GPUs in their own gaming builds? Or is it all talk?

EDIT: Watched a bit of the video. This guy isn't too bright. His argument that Samsung 8nm is bad rests on this logic: Nvidia, over four generations of cards, only increased core count by about 60%, and on Ampere they increased it by 2.5x, and they only did so because they knew Samsung 8nm was trash and couldn't clock high. But that "logic" only works if you don't understand why Nvidia marketing is pushing 10,496 cores instead of 5,248, which would really be a 20.5% core-count increase over Turing, with the "core doubling" being the result of an architectural change in INT/FP32 core allocation.


This is partly false. Although NV didn't physically double the core count, Ampere is capable of concurrent FP32 processing per SM, and 74% of an average rendering stream is FP32. Another way to look at it is that technically 1 Ampere core = 0.7 Turing cores.
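A quick sketch of where that 0.7 figure comes from, taking the 74% FP32 / 26% INT32 mix above as a given (the SM lane counts are the usual public Turing/Ampere figures; the workload mix is an assumption):

```python
# Sketch of the "1 Ampere core ≈ 0.7 Turing cores" arithmetic.
# Turing SM: 64 FP32 + 64 INT32 lanes. Ampere SM: 64 FP32 + 64 FP32/INT32 lanes.

int_share = 0.26                          # INT32 fraction of the stream (assumption above)
total_issue = 128                         # lanes issuing per clock in one Ampere SM
int_ops = total_issue * int_share         # ~33 INT32 ops/clk, fits in the 64 shared lanes
fp_ops_ampere = total_issue - int_ops     # ~95 FP32 ops/clk per Ampere SM
fp_ops_turing = 64                        # dedicated FP32 lanes per Turing SM

print(f"FP32 throughput per SM vs Turing: {fp_ops_ampere / fp_ops_turing:.2f}x")  # ~1.48x
print(f"Per *marketed* Ampere core: {fp_ops_ampere / 128:.2f} Turing cores")      # ~0.74
```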

So basically at that point, you should know to stop the video, because anyone who doesn't even understand the basics of Ampere's core doubling is in no position to explain the inefficiency of Samsung 8nm. I don't even need to get into the 28.3 billion vs. 18.6 billion transistors between the generations, the doubling of the RAM (for the 3090), the actually higher clock speeds compared to Turing (1635MHz FE 2080 Ti vs. 1700MHz FE 3090), the near doubling of RT/Tensor core performance (which increases power usage when used but doesn't show up in core count or rasterization TFLOPS), or the comparative power usage when Ampere is matching frame rates with Turing cards.

Technically NV isn't incorrect in stating that they doubled the cores, considering that 74% of a rendering stream is FP32 and the other 26% is INT32 (which has not doubled). That Jim doesn't elaborate on the methodology behind that core count could be a personal choice to emphasize the gain. Being on the larger, less efficient node means their only option for performance, without a repeat of Turing's lackluster generational increase, is to raise the core count, and that means large dies with lower yields. He posits that a 425mm2 TSMC 7nm die would be equivalent to Samsung's 628mm2 8nm die and would be capable of higher clocks thanks to lower power draw and voltage, with a massive increase in frequency (2.3-2.5 GHz). See Pascal vs. Maxwell: same architecture, but the node drop from 28nm to 16nm took clocks from ~1500 MHz to 2000-2100 MHz (my MSI 980 Ti 6G Gaming did right around 1536 MHz @ 1.255v and my 2080 Ti does 2100 MHz @ 1.055v). Even without overclocking, the 980 Ti did 1250 MHz whereas the 1080 Ti did 1850 MHz at factory clocks.

As AdoredTV tried to point out in the video, TSMC 7nm isn't simply 1nm smaller than 8nm; it's a much more efficient node.

And you're comparing boost clocks that are meaningless, because the 2080 Ti FE doesn't do 1635 MHz under testing; it does 1850 MHz for like 5 minutes and then settles down to 1750 MHz @ 75C. We haven't seen what the 3080 boosts to, but if it only gained 65 MHz of stated boost despite the node shrink, I'd call that a colossal failure.


28nm 980 Ti = 1250 MHz @ 260w
16nm 1080 Ti = 1750 MHz @ 260w
12nm 2080 Ti = 1750 MHz @ 260w
8nm 3080 = 1825 MHz (?) @ 320w

Samsung 8nm isn't efficient. It should be clocking WAY higher.

It should be doing 2.2 GHz with no overclock and 2.5 GHz+ overclocked.
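Just to put the clock numbers above side by side (the 8nm 3080 figure is my estimate, hence the question mark):

```python
# Generational clock gains at stock, from the figures listed above.
# The 8nm 3080 number is an estimate, not a reviewed value.
clocks = {
    "28nm 980 Ti":  1250,
    "16nm 1080 Ti": 1750,
    "12nm 2080 Ti": 1750,
    "8nm 3080":     1825,
}
names = list(clocks)
for prev, cur in zip(names, names[1:]):
    gain = clocks[cur] / clocks[prev] - 1
    print(f"{prev} -> {cur}: {gain:+.1%}")
# 28nm -> 16nm: +40.0%, 16nm -> 12nm: +0.0%, 12nm -> 8nm: +4.3%
```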

I stated this in another comment, but it's also unlikely that GA-102, particularly the 3090, will be good for anything more than a 20% overclock, given that they seem to be effectively overclocked from the factory to compensate for this inefficiency.

GA-102-300 at 350W = 100% PT (370W limit for FE, 350W for "reference"; the top-end AIB limit is yet to be determined for this SKU) is roughly 27% faster in rasterization than TU-102 @ 260W (320W limit for FE and some non-reference variants; my XC2 is limited to 340W, but I'm at 373W with an FTW3 BIOS).

Turing requires exponentially more voltage and power for frequency and performance increases above 2150 MHz @ 1.1v; although some BIOSes are virtually unlimited, most cards run best at no more than 450-475W under serious water cooling. LN2 is a different story.

So say the top-stack AIB cards are unlimited, or even 500W: the headroom going from 320-350W up to 475W (roughly 50% more power) isn't the same as the headroom going from 260W to 475W (83% more power).

My TU-102 300A at 2100 MHz core / 7900 MHz memory @ 373W @ 1.055v @ ~42-45C is roughly 25% faster than at factory clocks.

25% faster for a 44% increase in power (16,700 Timespy @ 373w vs 13,600 Timespy GPU @ 260w)
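Worked out from those Timespy numbers (my own scores, so take them for what they are; the exact ratios land slightly under the rounded figures above):

```python
# Perf-per-watt comparison from the Timespy GPU scores quoted above (my own runs).
stock_score, stock_power = 13_600, 260    # 2080 Ti at factory settings
oc_score, oc_power       = 16_700, 373    # same card at ~2100 MHz / 373 W

perf_gain  = oc_score / stock_score - 1   # ~ +23% (rounded to 25% above)
power_gain = oc_power / stock_power - 1   # ~ +43% (rounded to 44% above)

print(f"Performance: {perf_gain:+.0%}, power: {power_gain:+.0%}")
print(f"Perf/W: {stock_score / stock_power:.1f} -> {oc_score / oc_power:.1f} pts/W")
# Efficiency falls from ~52 to ~45 points per watt, which is the point being made.
```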

Could I do another 100 MHz locking the voltage to 1.093v with an XOC bios? Probably, but it would probably require another 75-100w.

This is my point.

What we may have with Ampere is that both GA-102-300 and GA-102-200 are already at 375W of a theoretical ~475W performance-per-watt limit (beyond which more performance is only possible by cooling the core with LN2 and bringing voltage and wattage down).

I mean this doesn't help:


This was an overclock attempt with a "reference" PCB variant limited to 350w.

If you look, the core is limited to 1935 MHz! So much for your statement:

"the actually higher clock speeds compared to Turing"

Given that performance stops scaling with power and voltage after 2000 MHz with TU-102.

But wait, this "reference" card is only doing 17,200 in Timespy with an overclock @ 350w?

That's not promising, and it's most definitely in alignment with the carefully curated benchmarks we've seen thus far from Digital Foundry (Control, SOTTR, and Doom Eternal).

There is a video floating around, which I'm sure everyone here has seen, of an overclocked (and watercooled, hence why I think the vid is bullshit) TU-102 @ 320W doing 16,700 Timespy GPU.
He is very wrong about a lot of things. A Samsung N8 wafer is only about 30% cheaper than TSMC N7. Also, the density of Samsung N8 here is higher than that of the Radeon VII, the 5700 XT, and the Xbox Series X. His ASIC cost estimation is completely off.
Can you point to the information accurately showing the prices of TSMC 7nm and Samsung 8nm?

Other people have covered it thoroughly, but I find this hard to take seriously when the video never really addresses TSMC's limited, heavily booked 7nm fab capacity. I can only imagine the 7nm EUV costs too; I'd say there's a reason AMD, for example, is sticking with 7nm DUV. There are so many factors being overlooked, though I know how difficult it is to consider them all. From everything so far, especially regarding Samsung's older "8nm" node, the capacity is there, and that may matter a lot to Nvidia: more chips, and thus more cards to sell, rather than the limited allocation they would ever have gotten from TSMC, which has been booked up for a long time by the many vendors wanting to produce their silicon there.

Just my own quiet, personal view on how arbitrary it seems to call this Nvidia's "biggest mistake" without access to all the information Nvidia, as a partner, had before signing any deal. It's hard to imagine Nvidia sees it as a mistake in their eyes if they manage to sell these cards at an increased TDP (or rather TBP) over AMD, provided the performance is there. People will buy them regardless of the power consumption and the heat dissipated into their room if the performance is there, in my opinion.
Yes agreed, people will still buy it, including myself (still getting the 3090).

Although I'm still buying the product, I believe that an informed and knowledgeable consumer base having these kinds of discussions contributes to consumer empowerment.
 

Registered · 293 Posts
Prediction: 9 months from now, reviews of refreshed cards on a better process will read "this is what the first Ampere cards should have been."
 

Registered · 1,332 Posts
I'm sorry @Mooncheese, but there is just so much wrong in what you're saying that it would take more effort than it's worth to correct it. For example, you quoted the 1935MHz OC on the 3080 test, an article I was already aware of, which specifically states that they did NOT attempt to see how far they could overclock the GPU and that they only tested the memory overclock. Along with boosting the memory clock from 19Gbps to 20Gbps, they could only bump the core clock up by +70. BUT... you missed the most important part: this sample card was TDP-locked at 320W. So this card, without any increase in power, was able to clock to 1935MHz along with a quite decent 5.2% boost in memory bandwidth. When the articles you yourself are quoting don't state what you claim, or you intentionally skip over pertinent details, it makes it hard to have a discussion. It seems like you're searching for evidence to support what you want to believe rather than figuring out what's actually happening.

Either way, reviews for the 3080 will be out in 7.5 hours. So there's really no point in either of us continuing to argue over this.
 

Village Idiot · 2,367 Posts
The video is absolutely not worth a watch, as it's a typical AdoredTV/MLID speculation video. Your time is better spent learning about or testing components at a more fundamental level.
Good luck convincing a diehard conspiracy theorist that AdoredTV or Moore's Law Is Dead are just clueless nitwits throwing speculative **** at the wall until something sticks and reaping YouTube revenue as a result.
 

Registered · 595 Posts
Keen to see reviews, but something doesn't seem quite right with these 8nm cards.

Specifically, if you run a 2080ti at 320w and a 3080 at 320w, how small is the performance difference going to be? Under 10%?
 

Registered · 11 Posts
I'm sorry @Mooncheese, but there is just so much wrong in what you're saying that it would take more effort than it's worth to correct it. For example, you quoted the 1935MHz OC on the 3080 test, an article I was already aware of, which specifically states that they did NOT attempt to see how far they could overclock the GPU and that they only tested the memory overclock. Along with boosting the memory clock from 19Gbps to 20Gbps, they could only bump the core clock up by +70. BUT... you missed the most important part: this sample card was TDP-locked at 320W. So this card, without any increase in power, was able to clock to 1935MHz along with a quite decent 5.2% boost in memory bandwidth. When the articles you yourself are quoting don't state what you claim, or you intentionally skip over pertinent details, it makes it hard to have a discussion. It seems like you're searching for evidence to support what you want to believe rather than figuring out what's actually happening.

Either way, reviews for the 3080 will be out in 7.5 hours. So there's really no point in either of us continuing to argue over this.

He's been doing the same in the EVGA forums. Some people just want to confirm their bias I suppose.
 

Registered · 1,332 Posts
Keen to see reviews, but something doesn't seem quite right with these 8nm cards.

Specifically, if you run a 2080ti at 320w and a 3080 at 320w, how small is the performance difference going to be? Under 10%?
should still be 30-40% in rasterization. Higher in heavy RT workloads.

Something to remember, assuming I'm understanding this correctly, is that they're using TGP because of the heavy power draw of the boosted RT/Tensor cores. So I'm guessing you'll only hit 320W at stock clocks if you're also utilizing full RT and DLSS.

This would be neat to see though: power draw tested with full rasterization-only performance, then with RT and DLSS maxed out. Let's hope the reviewers do a good job. We're all doing a fair bit of guesswork at this point. Haha.
 

Registered · 829 Posts · Discussion Starter #37
I'm sorry @Mooncheese, but there is just so much wrong in what you're saying that it would take more effort than it's worth to correct it. For example, you quoted the 1935MHz OC on the 3080 test, an article I was already aware of, which specifically states that they did NOT attempt to see how far they could overclock the GPU and that they only tested the memory overclock. Along with boosting the memory clock from 19Gbps to 20Gbps, they could only bump the core clock up by +70. BUT... you missed the most important part: this sample card was TDP-locked at 320W. So this card, without any increase in power, was able to clock to 1935MHz along with a quite decent 5.2% boost in memory bandwidth. When the articles you yourself are quoting don't state what you claim, or you intentionally skip over pertinent details, it makes it hard to have a discussion. It seems like you're searching for evidence to support what you want to believe rather than figuring out what's actually happening.

Either way, reviews for the 3080 will be out in 7.5 hours. So there's really no point in either of us continuing to argue over this.
Bullshit, they did overclock the core by 70 MHz, are you illiterate?



He's been doing the same in the EVGA forums. Some people just want to confirm their bias I suppose.
Bias? I was unaware that having an objective perspective was bias.

should still be 30-40% in rasterization. Higher on heavy rt workloads.
Incorrect. The leaked 3DMark benches, most relevantly Timespy, which uses newer effects found in modern titles, show a 27% increase on average.

Add in the fact that the 3080 is doing this at 320W, which means a maximum theoretical TDP increase of roughly 50% (assuming NV allows the top-tier AIBs 450-500W BIOSes this time around, since anything beyond 475W and 1.1-1.2v is the purview of LN2), versus the 2080 Ti going from 260W to 475W, an 83% increase in TDP.

In fact, the 3080 is doing 17,200 in that Timespy with +70 on the core and +650 on the memory @ 320w!

2080 Ti can do 15,200 on air at 320w! That's a 13% difference!
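Here's that matched-power comparison spelled out (both scores are the leaked/claimed numbers quoted above, not review data):

```python
# Matched-power Timespy GPU comparison, using the figures quoted above.
# Both numbers are leaked/claimed results, not review data.
score_3080_oc   = 17_200   # 3080 "reference" PCB, +70 core / +650 mem, power-limited
score_2080ti_oc = 15_200   # 2080 Ti overclocked on air at ~320 W

gap = score_3080_oc / score_2080ti_oc - 1
print(f"3080 lead at roughly matched power: {gap:.1%}")   # ~13%
```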

Surprise! Ampere doesn't clock higher than Turing! This will be confirmed in 7 hours!

Pertinent details?

Let's see: GA-102-300 is clocked so aggressively from the factory that not much more can be squeezed out of it on air, and the "reference" variants are limited to 350W!

320 to 350 is a 9.4% increase in power whereas 260 to 320 (2080 Ti FE) is a 23% increase in power!

So you're already 15% into a 23% overclock!

That means the 27% average shrinks by that difference, or rather, an overclocked 2080 Ti is only going to be around 14% slower than an overclocked 3080 in rasterization!

How is this any better than the difference between 1080 Ti and the 2080?!

What's interesting is that an overclocked 2080 Ti FE @ 15,200 is 13% behind an overclocked 3080 @ 17,200, which just so happens to line up with the ~14% figure for overclocked 2080 Ti vs. overclocked 3080 above!

Please bear in mind that I'm probably picking up the 3090, so please don't mistake this for anything other than brutally honest analysis, and please save your highfalutin comments about how "only wealthy people like me will buy the finest electronics!"

We can share rigs if you like:

Here's mine, can't wait to see yours:
 

Registered · 829 Posts · Discussion Starter #38

On average, the RTX 2080 Ti is 24% slower than the RTX 3080, the RTX 2080 SUPER is 37% slower, the RTX 2070 SUPER 45% slower, and the RTX 2060 SUPER 50% slower.
 

Registered · 1,332 Posts
On average, the RTX 2080 Ti is 24% slower than the RTX 3080
You know that if the 2080ti is 24% slower it means the 3080 is 31.6% faster, right?
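For the record, the conversion is just 1/(1 - 0.24) - 1; a quick sketch:

```python
# "X% slower" and "Y% faster" are not the same number.
# If the 2080 Ti is 24% slower than the 3080, the 3080 is 1/(1-0.24) - 1 faster.
slower = 0.24
faster = 1 / (1 - slower) - 1
print(f"{faster:.1%}")   # 31.6%
```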

Just wait for the reviews, mate. You're getting really worked up.
 

Registered · 829 Posts · Discussion Starter #40 (Edited)
And basically, yeah, Jim from AdoredTV is correct: the 3090 is ultimately the 2080 Ti all over again, this time +$300 on top of $1,200, and at 375W, with a 450-500W maximum power draw on the efficiency curve, it won't overclock the way TU-102 does (from 260W to 475W). You might get another 20% out of it at 450W, whereas TU-102 @ 2200 MHz is something like a 31% overclock (17,500-17,750 Timespy GPU).

Basically, NV had to pre-overclock the cards from the factory in order for the Samsung 8nm GA-102-300 to have a ~25% gap on TU-102 and for GA-102-200 to have a ~45% gap on it, but when you overclock both cards, knock about 10% off the final figure, because TU-102 can overclock further (~30% vs. ~20%).

Again, 260w to 475w = 83% increase in power
320 to 475w = ~48.3% increase in power

Non-overclocked / shunted, top-tier AIB (e.g. Kingpin):

260-320w =23% increase in power
320-350 = 9% in power ("reference")
320-370 = 15% in power (FE, limited run)



Overclocked 2080 Ti will be 15% slower than overclocked 3080 and 35% slower than the 3090 in rasterization.

RT is a different story (a leaked bench shows the 3080 doing Port Royal 45% faster than the 2080 Ti, but we may as well knock at least 10% off that when comparing an overclocked 2080 Ti to the 3080, considering that Samsung B-die can do +1000 MHz whereas Micron runs too hot, requires more voltage, and doesn't clock as high).

I picked up 1k points in Port Royal solely from a memory overclock of +1000 MHz.

That's fairly massive just from a memory overclock.

10,369: https://www.3dmark.com/pr/251502

vs the 3080 @ 11,412
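Put as a percentage (my overclocked run vs. the leaked 3080 score above):

```python
# Gap between the quoted Port Royal scores: my overclocked 2080 Ti vs. the leaked 3080 run.
port_royal_2080ti_oc = 10_369
port_royal_3080      = 11_412

print(f"3080 lead: {port_royal_3080 / port_royal_2080ti_oc - 1:.1%}")   # ~10%
```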
 