
41 - 60 of 96 Posts

·
Registered
Joined
·
830 Posts
Discussion Starter #41
You know that if the 2080ti is 24% slower it means the 3080 is 31.6% faster, right?

just wait for the reviews mate. You’re getting really worked up.
Is this an attempt at humor? If the 2080 Ti is 24% slower that means the 3080 is 24% faster.

I'm worked up because you're a willfully ignorant fanboy, and your kind of thinking degrades the hobby.
 

·
Registered
Joined
·
830 Posts
Discussion Starter #42

·
Registered
Joined
·
1,332 Posts
Is this an attempt at humor? If the 2080 Ti is 24% slower that means the 3080 is 24% faster.

I'm worked up because you're a willfully ignorant fanboy, and your kind of thinking degrades the hobby.
It's not my fault you don't understand basic math....

So let's say the RTX 3080 gets 100 fps. The RTX 2080 Ti, in your example, gets 24% less than that: 100 - (100 * 0.24) = 76 fps.

Now let's just remember the 76 fps and forget everything else. If you have 76 fps and you increase your performance by 10%, what do you get? You get 76 + (0.1 * 76) = 83.6 fps, which is the same as 76 * 1.1, i.e. 10% more. Now do the same thing for 24%: 76 + 24% = 76 + (76 * 0.24) = 76 * 1.24 = 94.24 fps. Notice that 94.24 fps is still short of the 3080's 100 fps: to get from 76 back up to 100 you need 100/76 - 1 ≈ 31.6%, which is exactly where the 31.6% figure comes from.

Let me give you a simpler example. 30 is 50% more than 20, but 20 is only 33% less than 30, because in either case the difference between them is 10, and 10 is a smaller percentage of 30 than it is of 20: 10/30 ≈ 0.333 while 10/20 = 0.5. Do you understand yet?
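If it helps, here's the same arithmetic as a few lines of Python (the fps figures are just the hypothetical ones from the example above, not benchmark results):

Code:

def pct_faster(fast, slow):
    # how much faster 'fast' is, relative to the slower figure
    return (fast - slow) / slow * 100

def pct_slower(slow, fast):
    # how much slower 'slow' is, relative to the faster figure
    return (fast - slow) / fast * 100

print(pct_slower(76, 100))  # 24.0  -> the 2080 Ti is 24% slower
print(pct_faster(100, 76))  # ~31.6 -> the 3080 is ~31.6% faster
print(pct_faster(30, 20))   # 50.0  -> 30 is 50% more than 20
print(pct_slower(20, 30))   # ~33.3 -> 20 is ~33% less than 30

The two directions disagree because the denominator changes: "slower" divides the gap by the faster card's figure, while "faster" divides it by the slower card's.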
 

·
Registered
Joined
·
1,395 Posts
Must.. keep.. speculating!
 
  • Rep+
Reactions: Intoxicus

·
Tinkerer
Joined
·
193 Posts
AdoredTV calling the Samsung 8nm decision dumb without knowing all the commercial, financial, and legal factors honestly beggars belief.
Because yeah, nVidia (you know, the company that's buying up ARM) should have consulted AdoredTV on the decision to use Samsung 8nm.... lol.

Where's my 5 GHz Zen 2, Jim?
 

·
mfw
Joined
·
8,621 Posts
Must.. keep.. speculating!
AdoredTV calling the Samsung 8nm decision dumb without knowing all the commercial, financial, and legal factors honestly beggars belief.
Because yeah, nVidia (you know, the company that's buying up ARM) should have consulted AdoredTV on the decision to use Samsung 8nm.... lol.

Where's my 5 GHz Zen 2, Jim?
These about sum it up for me.

You need to be very delusional to do something as ironic as calling nVidia DUMB - the company that is ahead of everyone else in artificial INTELLIGENCE.
 
  • Rep+
Reactions: Intoxicus

·
Village Idiot
Joined
·
2,367 Posts
Is this an attempt at humor? If the 2080 Ti is 24% slower that means the 3080 is 24% faster.

I'm worked up because you're a willfully ignorant fanboy, and your kind of thinking degrades the hobby.
You cannot be serious...
 
  • Rep+
Reactions: Intoxicus

·
Registered
Joined
·
690 Posts
AdoredTV calling the Samsung 8nm decision dumb without all the facts about commercial, financial, and legal factors, beggars belief, honestly.
Because yeah, nVidia (you know, the company that's buying up ARM) should have consulted AdoredTV on the decision to use Samsung 8nm.... lol.

Where's my 5GHz Zen 2 Jim?
I have no idea about process nodes or the differences between all these nanometers, but I cannot consider this respectable content when it simply amounts to saying Nvidia is making stupid decisions.

This is the part of the YouTube arrogance I cannot tolerate. The community simply doesn't have the technical and non-technical (equally important) knowledge, yet it pretends to know everything just because whatever percentage it guesses turns out to be close to the real figure.
 

·
Performance is the bible
Joined
·
7,043 Posts
Did you actually view AdoredTV's analysis or are you just firing from the hip?

Cliff notes version (posted as reply over at EVGA forum)



Basically:

- NV went with Samsung 8nm EUV over TSMC 7nm because of wafer cost (~$3,000 per AdoredTV's estimate vs. ~$8,000 at TSMC).
- But when you break down the yields with a 425mm2 chip (the 7nm equivalent of GA-102-200), the savings work out to roughly $75 vs. $47 per good die, i.e. around $28 per chip.
- Having made that decision, NV has had to run the 8nm chips at much higher voltage and wattage (given the increased die size) and spend $155 on the FE cooler to mitigate the heat (per Igor's Lab analysis).
- There's also no Titan card this time around (on the larger node there's no room for a bigger, more powerful die than the 3090), so they lose out on Titan sales, and they'll struggle to cool the mobile variants of this node in laptops next year.
- They also pass the cost on to the consumer in the form of much higher electricity bills.
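For anyone who wants to sanity-check that kind of per-die math, here's a rough back-of-the-envelope in Python. The wafer prices and the 425mm2 figure are the estimates quoted above, 628mm2 is GA102's published die size, and the defect density and the simple Poisson yield model are my own assumptions, so treat the outputs as illustrative rather than as Jim's exact numbers:

Code:

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # first-order gross-die estimate: wafer area / die area, minus edge loss
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost, die_area_mm2, d0_per_cm2):
    # simple Poisson yield model: yield = exp(-D0 * A), area converted to cm^2
    yield_frac = math.exp(-d0_per_cm2 * die_area_mm2 / 100)
    return wafer_cost / (dies_per_wafer(die_area_mm2) * yield_frac)

# ~$8,000 TSMC 7nm wafer with a 425mm2 die vs. ~$3,000 Samsung 8nm wafer
# with the 628mm2 GA102; a defect density of 0.1/cm^2 is a pure guess
print(round(cost_per_good_die(8000, 425, 0.1)))  # cost per good 7nm die
print(round(cost_per_good_die(3000, 628, 0.1)))  # cost per good 8nm die

Play with the wafer price and defect density and you'll see how quickly the "savings" swing around, which is exactly why this is hard to call from the outside.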

Jim also points out that the 3090 is essentially a rebranded $1500 80 Ti (it's clearly not a Titan card, irrespective of their allusion to the Titan in their marketing), so yes, they are basically shafting everyone again with ridiculous prices.

Ultimately they didn't save anything and this was a really stupid decision.

The video is absolutely worth a watch; you can turn on closed captioning if you struggle with Jim's thick Scottish accent.

This is 100% spot-on journalism; there's no error-prone speculation, it's very factual.

I tried to post this over at r/Nvidia and the mods took it down within 5 minutes.
The issue with all those assumptions is that they're based on opinion only, without any real knowledge of what's going on in the background.

Even if 7nm were dirt cheap, if Nvidia couldn't get enough allocation to release before AMD, and maybe even before the end of the year, then Nvidia had zero reason to go with TSMC.
TSMC is already up to its neck in orders from Apple, AMD, Broadcom, Qualcomm, and dozens of other companies.
Remember that the whole reason Turing and Pascal were not 7nm was exactly that: they couldn't get 7nm allocation from TSMC.

And the claim that the 3090 is a rebranded 80 Ti is the whole reason you shouldn't watch that video. That is just a load of BS from someone who wants clicks through "shocking" and "outrageous" videos.

100% spot on. Oh please.
Maybe it was taken down for a good reason.
 
  • Rep+
Reactions: Intoxicus

·
Registered
Joined
·
386 Posts
Surprise! Ampere doesn't clock higher than Turing! This will be confirmed in 7 hours!

Pertinent details?

Let's see: GA-102-300 is clocked so aggressively from the factory that not much more can be squeezed out of it on air, and the "reference" variants are limited to 350w!

320 to 350 is a 9.4% increase in power whereas 260 to 320 (2080 Ti FE) is a 23% increase in power!

So you're already 15% into a 23% overclock!

That means the 27% average gap gets cut by that difference; or rather, an overclocked 2080 Ti is only going to be around 14% slower than an overclocked 3080 in rasterization!

How is this any better than the difference between 1080 Ti and the 2080?!

What's interesting is that an overclocked 3080 @ 17,200 is only ~13% ahead of an overclocked 2080 Ti FE @ 15,200, which just so happens to line up with the ~14% figure above!
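For what it's worth, those percentages are easy to check with a few lines of Python, using only the wattages and Timespy scores quoted above (not measurements of mine):

Code:

def pct_increase(base, new):
    # percentage increase from 'base' to 'new'
    return (new - base) / base * 100

print(pct_increase(320, 350))      # ~9.4  -> 3080 power headroom, 320w to 350w
print(pct_increase(260, 320))      # ~23.1 -> 2080 Ti FE power headroom, 260w to 320w
print(pct_increase(15200, 17200))  # ~13.2 -> OC 3080 over OC 2080 Ti, Timespy GPU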

Please bear in mind that I'm probably picking up the 3090, so please don't mistake this for anything other than brutally honest analysis, and save your highfalutin comments about how "only the wealthy people like me will buy the finest electronics!"

We can share rigs if you like:

Here's mine, can't wait to see yours:
+1

It's kind of ironic that these cards are effectively what AMD has been criticized for - and rightly so - over the past few generations.

Power-hungry, hot, limited OC headroom, and clocked aggressively out of the box to squeeze out every last bit of performance.

NVidia are the new AMD?

I'm confused.

But +1 to your post as you're absolutely spot on.
 

·
Registered
Joined
·
1,657 Posts
Lol, why is he releasing this type of speculative video mere hours before reviews hit?

In a few hours we can find out the truth of all this by looking at OC reviews.
 
  • Rep+
Reactions: Intoxicus

·
PC Evangelist
Joined
·
46,706 Posts
Can you point to the information accurately showing the prices of 7nm TSMC and 8nm EUV?



Yes agreed, people will still buy it, including myself (still getting the 3090).

Although I'm still buying the product, I believe that an informed, knowledgeable consumer base having these kinds of discussions is conducive to consumer empowerment.
The price came from someone over at SemiWiki.
 

·
Registered
Joined
·
1,522 Posts
And basically, yeah, Jim from AdoredTV is correct: the 3090 is ultimately the 2080 Ti all over again, this time +$300 on top of $1,200, and at 375w, with a 450-500w maximum power draw on the efficiency curve, it won't overclock like TU-102 does (from 260w to 475w). You might get another 20% out of it at 450w, whereas TU-102 @ 2200 MHz is like a 31% overclock (17,500-17,750 Timespy GPU).

Basically, NV had to pre-overclock the cards from the factory in order for Samsung 8nm EUV GA-102-300 to have a ~25% gap on TU-102 and for GA-102-200 to have a ~45% gap on it, but when you overclock both cards, reduce the final figure by about 10 points, because TU-102 can overclock further (~30% vs. ~20%).
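A quick sketch of how that shrinking gap works; the ~25% / ~45% stock gaps and the ~30% vs. ~20% overclocking headroom are the estimates from the paragraph above, not measured data:

Code:

def overclocked_gap(stock_gap, oc_new, oc_old):
    # gap between the two cards once both are overclocked; inputs are fractions
    return (1 + stock_gap) * (1 + oc_new) / (1 + oc_old) - 1

print(overclocked_gap(0.25, 0.20, 0.30))  # ~25% stock gap shrinks to ~15%
print(overclocked_gap(0.45, 0.20, 0.30))  # ~45% stock gap shrinks to ~34%

In both cases the gap drops by roughly ten percentage points, which is where "reduce the final figure by about 10 points" comes from.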
The 2080 Ti was a significantly bigger step up from the 2080 than the 3090 is from the 3080; that alone makes the statement "the 3090 is a 2080 Ti except more expensive" wrong.

RT is a different story (a leaked bench shows the 3080 doing Port Royal 45% faster than the 2080 Ti), but we may as well reduce that figure by at least 10% when comparing an overclocked 2080 Ti to the 3080, considering that Samsung B-die can do +1000 MHz whereas Micron runs too hot, requires more voltage, and doesn't clock as high.
What do Samsung B-die and Micron rev. E have to do with GPUs? You do realize those are DDR4 ICs, and GPUs use GDDR6 or GDDR6X?
 
  • Rep+
Reactions: Intoxicus

·
Overclocker
Joined
·
11,339 Posts
You should watch this when you get a chance, it's damn good.

I saw.

Don't compare x80 classes of cards; they use different chip "tiers". The 3080 is a GA102, not a GA104. The user-facing naming/marketing can attach whatever number/name they want. In the end it's best to look at what performance, efficiency, etc. one gets for a fixed amount of $$$, whatever name or chip it may be.

NV is in a bit of an "upsie" and resorted to this naming mess with Ampere. Does it matter? Not really. Just look at what performance and efficiency you get for the money spent, then decide what to buy.

It seems to me that until 2021 it's not a good time to buy a GPU anyway. Wait for all the releases to be out and in stock and the first issues to be resolved, because who wants to be the next early-adopter tester for an expensive GPU? The 2080 Tis, from what it seems, are dying constantly to this day, and thanks to their size and complexity they have the highest failure rate. Anyone wanna guess what the failure rate is gonna be for GA102? Probably not pretty.

AdoredTV... sometimes the videos are good, sometimes not; sometimes they're just meant to entertain and speculate. Take it for what it is: YouTube, not hard science. Ever seen LTT on YouTube? Now that's a comedy show.
 

·
Premium Member
Joined
·
10,765 Posts
I saw.

Don't compare x80 classes of cards; they use different chip "tiers". The 3080 is a GA102, not a GA104. The user-facing naming/marketing can attach whatever number/name they want. In the end it's best to look at what performance, efficiency, etc. one gets for a fixed amount of $$$, whatever name or chip it may be.

What? The vid I linked explains the progression of processes into the EUV age.
You must think I linked a different video??
Watch it when you get the time; it's really informative.

BTW, I've had LTT blocked since day one. He's a fool.
 

·
Premium Member
Joined
·
10,765 Posts
Basically, NV had to pre-overclock the cards from the factory in order for Samsung 8nm EUV GA-102-300 to have a ~25% gap on TU-102 and for GA-102-200 to have a ~45% gap on it, but when you overclock both cards, reduce the final figure by about 10 points, because TU-102 can overclock further (~30% vs. ~20%).
Samsung's 8nm node, a refinement of their 10nm, is not EUV; just thought you might want to fix that.
 
  • Rep+
Reactions: Intoxicus

·
Registered
Joined
·
830 Posts
Discussion Starter #57
Well, well, well: the 3080 does not clock higher, just like I stated. And those leaked benches? Yeah, those were legit, making the 3080 only 25% faster than the 2080 Ti in rasterization.

Oh and hey, how about that non-existent overclock, a whopping +25 MHz on the core!

Look at that 500 point increase in Timespy!

What is this, 18,300 with an overclock?

Look at those screaming clocks! "1950-2000 MHz"

OOOH YEAH, IT'S CLOCKING WAY HIGHER THAN TURING.


"Absolute Hype Disappoints Absolutely" - LTT

 

·
Registered
Joined
·
830 Posts
Discussion Starter #59
"The 3080 is sitting at roughly 100w on average over the 2080 Ti" - LTT


GOOD LORD.

So much for 260 vs 320w.

Gee, I wonder how that 2080 Ti would perform with +100w?
 

·
Registered
Joined
·
830 Posts
Discussion Starter #60
It's fascinating watching confirmation bias in action.
What's even more interesting is watching people who lost an argument resort to such a pathetic statement.

Is that all you have?

Care to refute anything I've stated since BEFORE the reviews went live?

Oh, that's right, you can't. Instead you attempt to personally attack the person who has shattered your faulty perception, with statements that dismiss critical, objective analysis as "bias".
 