
[Official] NVIDIA RTX 4070 Ti Owner's Club

13164 Views 172 Replies 39 Participants Last post by SSJVegeta
Last Updated: January 8, 2023

Note: This content is licensed under Creative Commons 3.0. This means that you are free to copy and redistribute this material, but only if the following criteria are met: 1) You must give appropriate credit by linking back to this thread. 2) You may not use this material for commercial purposes or place it on a for-profit website with ads. 3) You may not create derivative works based on this material.

NVIDIA GeForce® RTX 4070 Ti


→ RTX 4070 Ti Owner's Club
RTX 4080 Owner's Club
RTX 4090 Owner's Club

Click here to join the discussion on Discord or join directly through the Discord app with the code kkuFR3d


Source: NVIDIA

SPECS

Rich (BB code):
 
   Architecture Ada Lovelace
   Chip AD104-400
   Transistors 35,800 million
   Die Size 295 mm²
   Manufacturing Process 4nm

   CUDA Cores 7680
   TMUs 240
   ROPs 80
   SM Count 60
   Tensor Cores 240
   GigaRays -- GR/s

   Core Clock 2310 MHz
   Boost Clock 2610 MHz
   Memory 12GB GDDR6X
   Memory Bus 192-bit
   Memory Clock 1313 MHz / 21008 MHz
   Memory Bandwidth 504 GB/s
   External Power Supply 16-Pin
   TDP 285W

   DirectX 12.2 Ultimate
   OpenGL 4.6
   OpenCL 3.0
   Vulkan 1.3
   CUDA 8.9

   Interface PCIe 4.0 x16
   Connectors 1x HDMI 2.1, 3x DisplayPort 1.4a
   Dimensions Not available

   Price $799 US

   Release Date January 5, 2023

Rich (BB code):
RTX 4090    | AD102-300 |  4nm | 608mm² | 76.3 BT | 16384 CCs | 512 TMUs | 176 ROPs | 128 SMs | 2520 MHz |  24GB | 2048MB x 12 | GDDR6X | 384-bit | 1008 GB/s | 450W
RTX 4080    | AD103-300 |  4nm | 379mm² | 45.9 BT |  9728 CCs | 304 TMUs | 112 ROPs |  76 SMs | 2505 MHz |  16GB | 2048MB x 8  | GDDR6X | 256-bit |  716 GB/s | 320W
RTX 4070 Ti | AD104-400 |  4nm | 295mm² | 35.8 BT |  7680 CCs | 240 TMUs |  80 ROPs |  60 SMs | 2610 MHz |  12GB | 2048MB x 6  | GDDR6X | 192-bit |  504 GB/s | 285W
RTX 3090 Ti | GA102-350 |  8nm | 628mm² | 28.3 BT | 10752 CCs | 336 TMUs | 112 ROPs |  84 SMs | 1865 MHz |  24GB | 2048MB x 12 | GDDR6X | 384-bit | 1008 GB/s | 450W
RTX 3090    | GA102-300 |  8nm | 628mm² | 28.3 BT | 10496 CCs | 328 TMUs | 112 ROPs |  82 SMs | 1695 MHz |  24GB | 1024MB x 24 | GDDR6X | 384-bit |  936 GB/s | 350W
RTX 3080 Ti | GA102-250 |  8nm | 628mm² | 28.3 BT | 10240 CCs | 320 TMUs | 112 ROPs |  80 SMs | 1665 MHz |  12GB | 1024MB x 12 | GDDR6X | 384-bit |  912 GB/s | 350W
RTX 3080    | GA102-200 |  8nm | 628mm² | 28.3 BT |  8704 CCs | 272 TMUs |  96 ROPs |  68 SMs | 1710 MHz |  10GB | 1024MB x 10 | GDDR6X | 320-bit |  760 GB/s | 320W
Note: Gaming performance on Ampere and later does not scale linearly with CUDA core count when compared with previous generations.
Rich (BB code):
RTX 2080 Ti | TU102-300 | 12nm | 754mm² | 18.6 BT |  4352 CCs | 272 TMUs |  88 ROPs |  68 SMs | 1635 MHz |  11GB | 1024MB x 11 | GDDR6  | 352-bit |  616 GB/s | 250W
RTX 2080 S  | TU104-450 | 12nm | 545mm² | 13.6 BT |  3072 CCs | 192 TMUs |  64 ROPs |  48 SMs | 1815 MHz |   8GB | 1024MB x 8  | GDDR6  | 256-bit |  496 GB/s | 250W
RTX 2080    | TU104-400 | 12nm | 545mm² | 13.6 BT |  2944 CCs | 184 TMUs |  64 ROPs |  46 SMs | 1710 MHz |   8GB | 1024MB x 8  | GDDR6  | 256-bit |  448 GB/s | 215W
GTX 1080 Ti | GP102-350 | 16nm | 471mm² | 12.0 BT |  3584 CCs | 224 TMUs |  88 ROPs |  28 SMs | 1582 MHz |  11GB | 1024MB x 11 | GDDR5X | 352-bit |  484 GB/s | 250W
GTX 1080    | GP104-400 | 16nm | 314mm² |  7.2 BT |  2560 CCs | 160 TMUs |  64 ROPs |  20 SMs | 1733 MHz |   8GB | 1024MB x 8  | GDDR5X | 256-bit |  320 GB/s | 180W
GTX 980 Ti  | GM200-310 | 28nm | 601mm² |  8.0 BT |  2816 CCs | 176 TMUs |  96 ROPs |  22 SMs | 1076 MHz |   6GB |  512MB x 12 | GDDR5  | 384-bit |  336 GB/s | 250W
GTX 980     | GM204-400 | 28nm | 398mm² |  5.2 BT |  2048 CCs | 128 TMUs |  64 ROPs |  16 SMs | 1216 MHz |   4GB |  512MB x 8  | GDDR5  | 256-bit |  224 GB/s | 165W
GTX 780 Ti  | GK110-425 | 28nm | 551mm² |  7.1 BT |  2880 CCs | 240 TMUs |  48 ROPs |  15 SMs |  928 MHz |   3GB |  256MB x 12 | GDDR5  | 384-bit |  336 GB/s | 250W
GTX 780     | GK110-300 | 28nm | 551mm² |  7.1 BT |  2304 CCs | 192 TMUs |  48 ROPs |  12 SMs |  900 MHz |   3GB |  256MB x 12 | GDDR5  | 384-bit |  288 GB/s | 250W
GTX 680     | GK104-400 | 28nm | 294mm² |  3.5 BT |  1536 CCs | 128 TMUs |  32 ROPs |   8 SMs | 1058 MHz |   2GB |  256MB x 8  | GDDR5  | 256-bit |  192 GB/s | 200W
GTX 580     | GF110-375 | 40nm | 520mm² |  3.0 BT |   512 CCs |  64 TMUs |  48 ROPs |  16 SMs |  772 MHz | 1.5GB |  128MB x 12 | GDDR5  | 384-bit |  192 GB/s | 250W

ASUS
ASUSTeK Computer (stylised as ASUS) was founded in Taipei, Taiwan in 1989 and is still headquartered there.

Model    | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB    | PWM     | GPU Stage               | VRAM Stage             | MPN
Strix OC | 336mm  | 3.15 | 3   | 2    | 2    | 285/365W    | Custom | MP2888A | 16×70A (1120A) PMC41570 | 3×50A (150A) NCP303151 | 90YV0II0-M0NA00
TUF OC   | 305mm  | 3.25 | 3   | 2    | 2    | 285/314W    | Custom | uP9512R | 11×50A (550A) SiC639    | 2×50A (100A) SiC639    | 90YV0IJ0-M0NA00

COLORFUL - Not available in Europe or North America
Colorful Group (referred to as CFG) was founded in Shenzhen, China in 1995 and is still headquartered there.


Model      | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN
Neptune OC | 254mm  | 2.00 | AIO | 1    |      | 285/320W    |     |     |           |            | N/A
Vulcan OC  | 349mm  | 3.05 | 3   | 1    |      | 285/310W    |     |     |           |            | N/A
NB EX      | 327mm  | 3.00 | 3   | 1    |      | 285/285W    |     |     |           |            | N/A

GALAX | KFA2 - Not available in North America
GALAXY was founded in Hong Kong, China in 1994. GALAXY and its European brand KFA2 (Kick Friggin Ass) merged in 2014 to form GALAX as a single unified brand; the KFA2 name still exists for the European market, but all designs are GALAX. The company is currently headquartered in Hong Kong, China.


Model | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN
SG    | 336mm  | 3.10 | 3   | 1    | 1    | 285/330W    |     |     |           |            | 47IOM7MD6MSG
SG    | 336mm  | 3.10 | 3   | 1    | 1    | 285/330W    |     |     |           |            | 47IOM7MD6MSK
EX    | 336mm  | 3.00 | 3   | 1    | 1    |             |     |     |           |            | 47IOM7MD7AEG
EX    | 336mm  | 3.00 | 3   | 1    | 1    |             |     |     |           |            | 47IOM7MD7AEK

GIGABYTE
GIGA-BYTE Technology (stylised as GIGABYTE) was founded in Taipei, Taiwan in 1986 and is currently headquartered in Taipei, Taiwan and California, United States.

Model     | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB    | PWM     | GPU Stage             | VRAM Stage           | MPN
Master    | 342mm  | 3.55 | 3   | 1    | 2    |             |        |         |                       |                      | GV-N407TAORUS M-12GD
Elite     | 342mm  | 3.55 | 3   | 1    | 2    |             |        |         |                       |                      | GV-N407TAORUS E-12GD
Gaming OC | 336mm  | 2.90 | 3   | 1    | 2    | 285/340W    | Custom | uP9512R | 10×50A (500A) SiC653A | 3×50A (150A) SiC653A | GV-N407TGAMING OC-12GD
Aero OC   | 336mm  | 2.90 | 3   | 1    | 2    |             |        |         |                       |                      | GV-N407TAERO OC-12GD
Eagle OC  | 301mm  | 2.90 | 3   | 1    | 2    |             |        |         |                       |                      | GV-N407TEAGLE OC-12G

INNO3D
InnoVISION Multimedia was founded in Hong Kong, China in 1989 and is primarily recognized for graphics cards marketed under the Inno3D brand. It was acquired by PC Partner in 2008 and is still headquartered in Hong Kong, China.

Model     | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | PWM | GPU Stage | VRAM Stage | MPN
iCHILL X3 | 334mm  | 3.10 | 3   | 1    | 1    |             |     |     |           |            | C407T3-126XX-186148H
X3 OC     | 297mm  | 2.10 | 3   | 1    | 1    |             |     |     |           |            | N407T3-126XX-186148N

MSI
Micro-Star International (stylised as MSI) was founded in Taipei, Taiwan in 1986 and is still headquartered there.

Model         | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB    | PWM     | GPU Stage                | VRAM Stage              | MPN
Suprim X      | 338mm  | 3.65 | 3   | 1    | 2    | 285/365W    | Custom | uP9512R | 12×55A (660A) AOZ5311NQI | 3×55A (165A) AOZ5311NQI | GX-392-MS
Gaming X Trio | 337mm  | 3.10 | 3   | 1    | 2    | 285/305W    | Custom | uP9512R | 10×55A (550A) AOZ5311NQI | 2×55A (110A) AOZ5311NQI | GX-391-MS
Ventus OC     | 308mm  | 2.60 | 3   | 1    | 1    | 285/285W    |        |         |                          |                         | GX-393-MS

PALIT | GAINWARD - Not available in North America
Palit Microsystems (stylised as PaLiT) was founded in Taipei, Taiwan in 1988 and acquired the Gainward brand and company in 2005. It is still headquartered in Taipei, Taiwan.


Model        | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB    | PWM     | GPU Stage              | VRAM Stage             | MPN
GameRock OC  | 329mm  | 3.25 | 3   | 1    | 2    |             | Custom |         |                        |                        | NED407TU19K9-1045G
Phantom GS   | 329mm  | 3.15 | 3   | 1    | 2    |             | Custom |         |                        |                        | NED407TU19K9-1045P
GamingPro OC | 329mm  | 3.15 | 3   | 1    | 1    | 285/300W    | Custom | uP9512R | 9×50A (450A) NCP302150 | 2×50A (100A) NCP302150 | NED407TT19K9-1043A
Phoenix GS   | 329mm  | 3.20 | 3   | 1    | 1    | 285/300W    | Custom | uP9512R | 9×50A (450A) NCP302150 | 2×50A (100A) NCP302150 | NED407TT19K9-1043X

PNY
PNY Technologies was founded in New York, United States in 1985 and is currently headquartered in New Jersey, United States.

Model   | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB    | PWM     | GPU Stage              | VRAM Stage             | MPN
XLR8 OC | 332mm  | 3.35 | 3   | 1    | 1    | 285/285W    | Custom | uP9512R | 9×50A (450A) NCP302150 | 2×50A (100A) NCP302150 | VCG4070T12TFXXPB1-O
Verto   | 305mm  | 3.05 | 3   | 1    | 1    | 285/285W    |        |         |                        |                        | VCG4070T12TFXPB1

ZOTAC
ZOTAC, under the umbrella of PC Partner, was founded in Hong Kong, China in 2006 and is still headquartered there.

Model       | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB    | PWM     | GPU Stage                 | VRAM Stage              | MPN
AMP Extreme | 356mm  | 3.60 | 3   | 1    | 2    | 285/365W    | Custom | uP9512R | 24×55A (1320A) AOZ5311NQI | 3×55A (165A) AOZ5311NQI | ZT-D40710B-10P
AMP         | 308mm  | 2.95 | 3   | 1    | 2    |             | Custom | uP9512R |                           |                         | ZT-D40710F-10P
Trinity OC  | 307mm  | 2.95 | 3   | 1    | 2    |             | Custom | uP9512R |                           |                         | ZT-D40710J-10P

TECHPOWERUP | GPU-Z

Download TechPowerUp GPU-Z

NVIDIA | NVFLASH

Download NVIDIA NVFlash

BIOS | ROM

TechPowerUp BIOS Collection < Verified

TechPowerUp BIOS Collection < Unverified

OVERCLOCKING | TOOLS

Download ASUS GPUTweak III

Download Colorful iGame Center

Download Gainward EXPERTool

Download Galax/KFA2 Xtreme Tuner Plus

Download Gigabyte AORUS Engine

Download Inno3D TuneIT

Download MSI Afterburner

Download Palit ThunderMaster

Download PNY Velocity X

Download Zotac FireStorm

· Registered
Joined
·
892 Posts
Launched at $799, if you can find them at that price. Faster than a 3090 Ti, according to NVIDIA's charts of games using DLSS 3.

Still, I'd have a bad taste in my mouth if I'd paid $1999 for a 3090 Ti back in March.

[image]
 

· Registered
Joined
·
4,350 Posts
Launched at $799, if you can find them at that price. Faster than a 3090 Ti, according to NVIDIA's charts of games using DLSS 3.

Still, I'd have a bad taste in my mouth if I'd paid $1999 for a 3090 Ti back in March.
So, not faster than a 3090Ti, then, unless using the Shiny New Thing?

I somewhat agree, but you've had nearly a year of use out of a 3090 Ti, and bought at a time when prices were more than a little crazy... sooo... prices here are still a bit mental. They fluctuate a lot. But at least it's possible to get some cards for near their JP MSRP.

edit: Love that cat.
 

· Enthusiast
Joined
·
383 Posts
Discussion Starter · #4 ·
I suspect the memory bandwidth is what is going to hold it back, but it could still beat the 3090/Ti.

The 3090 has 37% more CCs/SMs.
The 3090's bandwidth is 85% higher.
The 4070 Ti's core clock is 54% higher (with more overclocking headroom).

I'm assuming it's using the same memory chips as the 4090, which could take them from 21 Gbps to 25 Gbps (+20%), so we could see 600 GB/s of bandwidth (still ~150 GB/s less than overclocked G6 on a 2080 Ti).

With GPU performance rivaling the 3090/Ti (~50% faster than a 2080 Ti), I just can't see how the 600 GB/s memory bandwidth will not hold it back to some extent. The 4090 is already held back a lot and looks like 4070 Ti might be too.
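
Quick sanity check on that bandwidth arithmetic, as a minimal Python sketch (numbers from the spec table and the paragraph above; the 25 Gbps figure is the hoped-for overclock, not a spec):

Code:
# Peak memory bandwidth = (bus width in bits / 8) bytes per transfer * data rate in Gbps
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(bandwidth_gbs(192, 21.0))   # 4070 Ti stock:      504.0 GB/s
print(bandwidth_gbs(192, 25.0))   # assumed 25 Gbps OC: 600.0 GB/s
print(bandwidth_gbs(384, 19.5))   # 3090 stock:         936.0 GB/s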

Either way, the 4070 Ti sounds amazing on paper compared to the 4080; the efficiency and price/performance will be ridiculously good. All cards use the same power connector too, which means BIOS flashing will be completely "open" (flash any BIOS to any card). We might even see very high power limits on some BIOSes if we're lucky (420W or above), since the 16-pin connector is more or less unlimited; it's all up to the partners and how big the VRM is, I guess. Going by the images that have popped up, almost all cards are custom PCBs with large VRMs, so it would not surprise me if we see at least one BIOS that goes up to 420-450W like on the 4080. Personally I'd probably downvolt it to 250W: get the cheapest card with the best cooler and tweak it for absolute peak efficiency. It "should" still be able to beat a 3090/Ti (it will easily beat them in RT/DLSS 3) while consuming very little power and running very quiet.

The reason I'm making this thread (which I typically don't for cards below the xx80) is all the custom PCBs: the cards are just as massive as the 4080/4090, with similar PCBs and VRMs (supposedly).
 

· H₂O Aficionado
Joined
·
6,028 Posts
“$799 USD”
Exchange rate = $888 for the cheapest AIB card.

It'll need to compete with the 7900 XT, as you can easily find XTs in stock for around $900. It may be a better value than the 4080 16GB, but that is nothing to write home about.

Maybe in 8 months when they drop the price it'll be interesting, but buying an RTX 3090/Ti-level card with half the VRAM for a minimum of $888 is not going to entice a whole lot of people.
 

· Enthusiast
Joined
·
383 Posts
Discussion Starter · #7 ·
Just checked the reviews; the GN one is extremely disingenuous, ignore it. They're comparing one of the slowest custom cards on the market (the TUF) against the second-fastest 3090 Ti.

Looking at any of the other reviews, like Guru3D and SweClockers, the 4070 Ti Suprim X and Gaming OC both beat the 3090 Ti FE with ease, in both RT and non-RT.

Something Steve barely pointed out is how much more efficient the 4070 Ti is.

EVGA 3090 Ti FTW3 = 71°C average temp with fan RPM at ~1650 and power consumption at a ridiculous 503W.
ASUS 4070 Ti TUF = 61°C average temp with fan RPM at ~1400 and power consumption at just 290W.

Remember: these cards perform about the same in the above comparison, give or take 5-10% depending on title and which model (factory overclock) you get. But it's safe to say that on average they're trading blows, with a slight advantage in RT and DLSS to the 4070 Ti.

It's thus obvious that the 4070 Ti is a significantly better card than any 3090 Ti: it pulls ~160-210W less power and runs ~10°C cooler with quieter fans, while being a tad slower (the lower-end TUF) than the (highest-end) 3090 Ti FTW3, but faster than the FE and other lower-end 3090 Tis.

Currently, 3090 Tis are being sold used with 1 year of warranty left for ~€950, and you should be able to get a 4070 Ti for roughly the same €950, which includes 2 years of store warranty here in the EU.

As for the 3090, they are cheaper but also significantly slower (non-OC models at 350W) and come with no warranty, yet they still go for about €800. I'd personally gladly pay €150 more for a brand-new 4070 Ti with a 2-year warranty that is up to 10-15% faster.

Then the question becomes: how much will the 3090 go down in price on the used market? I would not touch one for anything above €700, not even a Strix OC. Going to be interesting seeing where they end up.
 

· WaterCooler
Joined
·
5,693 Posts
Then the question becomes: how much will the 3090 go down in price on the used market? I would not touch one for anything above €700, not even a Strix OC. Going to be interesting seeing where they end up.
I should hope the 3090 does go down in price, because if it and the 3090 Ti don't, it's just gen-on-gen price/performance stagnation. Also, as impressive as the performance is for the power, I still don't think the real-world price of $850-$950 these cards are sure to go for will ever feel great for a "70"-branded card that is typically aimed at the mid-range gamer. Mid-range isn't close to $1k.
 

· H₂O Aficionado
Joined
·
6,028 Posts
Honestly, the product itself appears to be great. Yes, a card drawing 50% of a flagship's power with similar performance is also great. The issue is simply pricing: for an xx70-series card, it should be cheaper. $699 makes more sense in the context of last gen if you factor in high inflation.

The cold reality is that Ampere MSRP was simply unobtainable for the vast majority of consumers. If you look at what 3070s and 3080s actually sold for, then these prices sort of make sense. It seems odd that most reviewers are comparing these new cards to Ampere MSRP. Did they forget that a 3080 at $699 didn't really exist beyond the very beginning, before tariffs and mining went insane?

[price history charts]

The graphics card market needs to be corrected but it may take more time than people originally thought. Aside from enthusiasts, a lot of people will skip Ada and RDNA 3.
 

· Linux Lobbyist
Joined
·
1,084 Posts
The graphics card market needs to be corrected but it may take more time than people originally thought.
Exactly, it's just the pendulum of the mining boom swinging the other direction now (and it'll take longer, because the mining disruption of the usual supply and demand was bigger than last time). Everything NVIDIA and AMD are doing now is the correction, just not for the consumers. Everything being done is to correct the market for the companies, by the companies.

The only thing that matters, that has always mattered, is supply and demand. And they need to clear old stock before any other action (by any means necessary). That's the only move from their side.

The only move from our side, the consumers' side, is buying old stock and used cards ASAP. The faster old stock clears, the faster used cards clear, the sooner the market returns to normal.
 

· Registered
i9-12900K | Z690-E | RTX 4080 FE
Joined
·
817 Posts
the GN one is extremely disingenuous, ignore it
Steve Burke is such a sweaty neckbeard. I want to brush his hair and give him a bath so badly.
 

· Registered
Joined
·
18 Posts
Why is it on a 192-bit memory bus and not 256, though? That's kinda tilting. According to other OCN members, that's the reason its performance seems to dip at 4K.
That's how it works when you lose the two memory modules. To keep the math easy: you get 32 bits per memory chip.
6 × 32 = 192-bit and 12GB, because they're 2GB chips. The 4080: 8 × 32 = 256-bit and 16GB.

The 4080 is also memory-bandwidth limited and responds to memory overclocks.
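
A minimal sketch of that chip math (plain Python; 32 bits per chip as explained above, 2GB per chip on these cards):

Code:
# Bus width and capacity follow directly from the memory chip count
def memory_config(chips, gb_per_chip=2):
    return chips * 32, chips * gb_per_chip  # (bus width in bits, capacity in GB)

print(memory_config(6))   # 4070 Ti: (192, 12)
print(memory_config(8))   # 4080:    (256, 16)
print(memory_config(12))  # 4090:    (384, 24)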
 

· Registered
Joined
·
890 Posts
That's how it works when you lose the two memory modules. To keep the math easy: you get 32 bits per memory chip.
6 × 32 = 192-bit and 12GB, because they're 2GB chips. The 4080: 8 × 32 = 256-bit and 16GB.

The 4080 is also memory-bandwidth limited and responds to memory overclocks.
I'm no expert on such stuff, I'll admit; thanks for the explanation. However, there are cards with less VRAM and yet more memory bandwidth (like the 3070 Ti, which is 256-bit at 8GB). Was there something limiting them on the 4070 Ti and the 4080, or is it just incompetence from NVIDIA?
 

· Registered
Joined
·
4,350 Posts
I'm no expert on such stuff, I'll admit; thanks for the explanation. However, there are cards with less VRAM and yet more memory bandwidth (like the 3070 Ti, which is 256-bit at 8GB). Was there something limiting them on the 4070 Ti and the 4080, or is it just incompetence from NVIDIA?
No. Wider memory buses cost more money and power: more money to design, more money to make (because silicon defects happen), and they are more power-hungry in operation.

Chances are, if someone x-rayed the silicon, they'd find it's a 256-bit bus with two channels/modules/whatever-you-want-to-call-them disabled to improve yields. That also gives the option of another model ("Super") in the future if/when yields improve enough.
 

· Registered
Joined
·
18 Posts
I'm no expert on such stuff, I'll admit; thanks for the explanation. However, there are cards with less VRAM and yet more memory bandwidth (like the 3070 Ti, which is 256-bit at 8GB). Was there something limiting them on the 4070 Ti and the 4080, or is it just incompetence from NVIDIA?
Yeah, the 3070 Ti is 1GB chips and there are 8 of them. It's just a money/design decision by NVIDIA. The 3070 Ti wasn't a great card for the price either. Also remember that memory tech and speed matter as well as bus width; it's a balancing act.
 

· Enthusiast
Joined
·
383 Posts
Discussion Starter · #18 · (Edited)
Honestly, the product itself appears to be great. Yes, a card drawing 50% of a flagship's power with similar performance is also great. The issue is simply pricing: for an xx70-series card, it should be cheaper. $699 makes more sense in the context of last gen if you factor in high inflation.
People need to forget the naming, kinda.

1920 CCs | G5 | 8GB | 256-bit | 150W | $450 = 1070
2432 CCs | G5 | 8GB | 256-bit | 180W | $450 = 1070 Ti (This card is literally a 1080 but with cheaper memory)
2560 CCs | G5X | 8GB | 256-bit | 180W | $600 = 1080
3584 CCs | G5X | 11GB | 352-bit | 250W | $700 = 1080 Ti
3840 CCs | G5X | 12GB | 384-bit | 250W | $1200 = Titan Xp (This card is basically the GTX 1090)

There is a clear difference between the 1070, 1080 and 1080 Ti (1920 > 2560 > 3584 and 3840 CCs on Titan Xp)
The 1070 Ti is basically a re-release of the 1080, so we can look at it like this:
1070 > 1080 > 1080 Ti > 1090

2304 CCs | G6 | 8GB | 256-bit | 175W | $500 = 2070
2560 CCs | G6 | 8GB | 256-bit | 215W | $500 = 2070 SUPER (This card is literally a 2070 but factory overclocked)
2944 CCs | G6 | 8GB | 256-bit | 215W | $700 = 2080
3072 CCs | G6 | 8GB | 256-bit | 250W | $700 = 2080 SUPER (This card is literally a 2080 but factory overclocked)
4352 CCs | G6 | 11GB | 352-bit | 250W | $1000 = 2080 Ti
4608 CCs | G6 | 24GB | 384-bit | 280W | $2500 = Titan (This card is basically the RTX 2090)

There is a clear difference again, between the 2070, 2080 and 2080 Ti (2304 > 2944 > 4352 and 4608 CCs on Titan)
You can ignore the SUPER models, they are basically the same card but with higher power limit out of box, then we can look at the 20 series like this:
2070 > 2080 > 2080 Ti > 2090

5888 | G6 | 8GB | 256-bit | 220W | $500 = 3070
6144 | G6X | 8GB | 256-bit | 290W | $600 = 3070 Ti (This card just has faster memory and higher overclock)
8704 | G6X | 10GB | 320-bit | 320W | $700 = 3080
8960 | G6X | 12GB | 384-bit | 350W | $800 = 3080 12GB (This is to the 3080 what the 3070 Ti is to the 3070)
10240 | G6X | 12GB | 384-bit | 350W | $1200 = 3080 Ti
10496 | G6X | 24GB | 384-bit | 350W | $1500 = 3090 (This is basically a Titan)
10752 | G6X | 24GB | 384-bit | 450W | $2000 = 3090 Ti (This is a 3090 but overclocked)

There are the normal jumps, 5888 > 8704 > 10240.

7680 | G6X | 12GB | 192-bit | 285W | $800 = 4070 Ti
9728 | G6X | 16GB | 256-bit | 320W | $1200 = 4080
16384 | G6X | 24GB | 384-bit | 450W | $1600 = 4090

Again the normal jumps, but with an exception for the 4090, which is absurd.

Let's try to summarize it,

GTX 1000 Series
1920 CCs for the 1070 | G5 | 8GB | 256-bit | 150W | $450
2432-2560 CCs for the 1070 Ti and 1080 | G5/G5X | 8GB | 256-bit | 180W | $450/600
3584-3840 CCs for the 1080 Ti and Titan | G5X | 11/12GB | 352/384-bit | 250W | $700/1200
OK, so we clearly have 3 separate GPUs: 1070, 1080 and 1090
1070 = 1920 CCs | 8GB | 256-bit | 150W | $450
1080 = 2432 CCs | 8GB | 256-bit | 180W | $600
1090 = 3584 CCs | 11GB | 352-bit | 250W | $700
What is noteworthy here is how little power the 70 and 80 cards consumed, especially the 70; that card can run on a potato VRM (extremely cheap, which is why the card itself could be sold for just $450).

RTX 2000 Series
2070 = 2304 CCs | G6 | 8GB | 256-bit | 175W | $500
2080 = 2944 CCs | G6 | 8GB | 256-bit | 215W | $700
2090 = 4352 CCs | G6 | 11GB | 352-bit | 250W | $1000
Again, we clearly have 3 separate GPUs.

Differences:
1070 to 2070 = Faster memory, 25W higher power consumption, $50 higher price
1080 to 2080 = Same memory, 35W higher power consumption, $100 higher price
1090 to 2090 = Same memory, same power consumption, $300 higher price

RTX 3000 Series
5888-6144 CCs for the 3070 and 3070 Ti | G6/G6X | 256-bit | 220/290W | $500/600
8704-8960 CCs for the 3080 and 3080 12GB | G6X | 320/384-bit | 320/350W | $700/800
10240-10752 CCs for the 3080 Ti, 3090 and 3090 Ti | G6X | 384-bit | 350/450W | $1200/1500/2000
As expected.
3070 = 5888 CCs | G6 | 256-bit | 220W | $500
3080 = 8704 CCs | G6X | 320-bit | 320W | $700
3090 = 10240 CCs | G6X | 384-bit | 350W | $1200

Differences:
2070 to 3070 = Slower memory, 45W higher power consumption, same price
2080 to 3080 = Same memory, 105W higher power consumption, same price
2090 to 3090 = Same memory, 100W higher power consumption, $200 higher price

Now we can see that what really changes is the power consumption.

RTX 4000 Series
So, the 4070 Ti was originally named 4080 12GB, but it was clearly never a 4080, since it has far fewer CCs. It perfectly matches a 4070, so that is what I will call it here: a 4070.
Differences:
3070 to 4070 = Faster memory, 65W higher power consumption, $300 higher price
3080 to 4080 = Same memory, same power consumption, $500 higher price
3090 to 4090 = Same memory, 100W higher power consumption, $400 higher price
It is definitely a bit disingenuous to compare the 3090 to the 4090, as the 3090 is a "Ti" of the 3080 Ti; what I am comparing is the slowest card of that CC range. We just have to wait for a 4080 Ti with about 15K CCs, maybe.

On one hand it makes sense that the cards cost more, because the power keeps increasing significantly, which costs money (PCB/VRM).

Another summarization,
1070/Ti = 25% fewer CCs than the middle child (1080)
2070/Ti = 13% fewer CCs than the middle child (2080)
3070/Ti = 29% fewer CCs than the middle child (3080)
4070/Ti = 21% fewer CCs than the middle child (4080)

1070 = 150W
2070 = 215W (+65W from 1070/Ti)
3070 = 290W (+75W from 2070/Ti)
4070 = 285W (-5W from 3070/Ti)

1070/Ti = 8GB
2070/Ti = 8GB
3070/Ti = 8GB
4070/Ti = 12GB (+4GB)

1070/Ti = $450
2070/Ti = $500 (+$50)
3070/Ti = $600 (+$100)
4070/Ti = $800 (+$200)

1070/Ti is cut down a lot from the xx80, pulls very little power and is cheap
2070/Ti is barely cut down from the xx80, pulls a lot more power but is still cheap
3070/Ti is cut down a lot from the xx80, pulls a lot more power but costs more
4070/Ti is not as much cut down from the xx80, pulls less power and has 50% more VRAM but costs a lot more

3DMark Time Spy Graphics Score
1070/Ti = 5700
2070/Ti = 10100 (+77% higher performance than 1070/Ti)
3070/Ti = 14500 (+44% higher performance than 2070/Ti)
4070/Ti = 22500 (+55% higher performance than 3070/Ti)

This is really important to note: for the 3070 to be 44% faster than the 2070, it had to increase power consumption by 75W, but for the 4070 to be 55% faster than the 3070, it could REDUCE power consumption by 5W instead of adding another 75W. On top of that it has 4GB more VRAM; just those two things combined explain why the card costs more. NVIDIA managed to squeeze out more performance than last gen without increasing power consumption, actually reducing it, and that's extremely impressive. It shows, too: the TUF in the GN review ran a 61°C average at 290W with the fans at just 1400 RPM. That's a crazy amount of performance for such a low temperature and power consumption.
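
To put a number on that efficiency jump, here's a quick points-per-watt pass over the scores and power figures above (a plain Python sketch, my own framing):

Code:
# Rough perf-per-watt: Time Spy Graphics score divided by board power
cards = {
    "2070/Ti": (10100, 215),
    "3070/Ti": (14500, 290),
    "4070/Ti": (22500, 285),
}
for name, (score, watts) in cards.items():
    print(name, round(score / watts, 1), "points/W")
# 2070/Ti 47.0 | 3070/Ti 50.0 | 4070/Ti 78.9 -- a ~58% efficiency jump in one gen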

I would have praised NVIDIA for the 4070 Ti if the price had been $699. That would've been a $100 bump from last gen, but with 4GB more VRAM and lower power consumption while still delivering 55% higher performance; as said, that's impressive. If you want a cheaper card, just wait for the xx60.

xx80 summarization,
1080 = 28% fewer CCs than big brother (1090)
2080 = 32% fewer CCs than big brother (2090)
3080 = 15% fewer CCs than big brother (3090)
4080 = 40% fewer CCs than big brother (4090)

1080 = 180W
2080 = 215W (+35W from 1080)
3080 = 320W (+105W from 2080)
4080 = 320W

1080 = 8GB
2080 = 8GB
3080 = 10GB (+2GB)
4080 = 16GB (+6GB)

1080 = $600
2080 = $700 (+$100)
3080 = $700
4080 = $1200 (+$500)

1080 is cut down a lot from the xx90, pulls very little power and is cheap
2080 is cut down a lot from the xx90, pulls little power but costs more
3080 is barely cut down from the xx90, pulls a lot more power and has 25% more VRAM and costs the same
4080 is cut down significantly from the xx90, pulls the same power and has 60% more VRAM but costs a lot more

3DMark Time Spy Graphics Score
1080 = 7300
2080 = 11000 (+50% higher performance than 1080)
3080 = 17800 (+62% higher performance than 2080)
4080 = 27800 (+56% higher performance than 3080)

Again, NVIDIA managed to squeeze out 56% higher performance without increasing the power consumption, and on top of that not just 25% more VRAM like last generation, but 60% more! It makes a lot of sense why the card costs more, but that still doesn't justify what it costs. It also makes sense why the 3080 cost the same as the 2080: the 3080 is clearly a "bad card", since the only way they could achieve 62% higher performance was to dramatically increase power consumption by 49% while barely increasing VRAM size; the card simply runs a lot hotter and louder, and partner cards cost more.

The 3080 was clearly an inferior product to the 4080, so it makes complete sense why the 3080 didn't increase in price and the 4080 did. But who is to say how much that improvement is worth? $500 more is a lot (71%). I would probably have been completely fine with them charging $999; then it'd be $300 more for a much better product, with 60% more VRAM and 56% higher performance without adding any power/heat/noise. Gaming 6 hours a day for 4 years (skipping 1 gen) saves 880 kWh; at 15 cents per kWh that's $132 saved from NVIDIA not increasing power consumption by 100W to achieve the 50-62% performance target gen over gen. So say the card was $999 and you saved $100+ over the time you kept it; then it'd effectively be $900, so "just" $200 more than the previous gen. That sounds fantastic to me. Too bad it's not $999 though; $1199 is definitely too high.
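
That electricity arithmetic checks out; a one-line sanity check using the same assumptions (100W avoided, 6 hours/day, 4 years, 15 cents/kWh):

Code:
# kWh saved by avoiding 100W of extra draw for 6 h/day over 4 years
kwh = 100 * 6 * 365 * 4 / 1000
print(kwh)         # 876.0 kWh (rounds to ~880)
print(kwh * 0.15)  # $131.40, i.e. the ~$132 quoted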

xx90 summarization,
1090 = 250W
2090 = 250W
3090 = 350W (+100W higher power consumption)
4090 = 450W (+100W higher power consumption)

1090 = 11GB
2090 = 11GB
3090 = 12GB (+1GB)
4090 = 24GB (There is no Ti yet with less VRAM)

1090 = $700
2090 = $1000 (+$300)
3090 = $1200 (+$200)
4090 = $1600 (There is no Ti yet that shaves off a few hundred)

1090 = 9500
2090 = 13600 (+43% higher performance than the 1090)
3090 = 20100 (+48% higher performance than the 2090)
4090 = 29200 (There is no Ti yet; the CPU also bottlenecks this card)

Can't really comment on the 4090, since there is no cheaper alternative in the 4090 family (a cut-down 4090 called the 4080 Ti).

The RTX 2080 Ti can be commented on, though: same power consumption and 43% higher performance is very impressive, and it cost $300 more (+43%). Same situation as now; NVIDIA didn't have to increase power consumption to get a near-50% performance leap gen over gen. The 3090 (3080 Ti) can definitely be seen as a failed card: it got a little more performance than the gen before it, but had to add 100W of power consumption to do it, which also increased the price, since it costs money to beef up the VRM to handle that TDP.

The conclusion is that people should have been far more pissed at the RTX 30 series costs, as those cards only reached their performance targets by increasing power/heat/noise, and we had to "pay" for that. This time when we pay more we at least get something for it: no additional power/heat/noise, but the same generational performance increase. I'm absolutely fine with them increasing prices, just not this much. $699 for the 4070 Ti, $999 for the 4080, $1249 for the 4080 Ti and $1499 for the 4090 sounds reasonable... maybe.

Either way, no one can complain about 4070 Ti prices right now, as they cost new what a used 3090 Ti costs. In a few weeks the 3090/3090 Ti prices should have gone down a fair bit, and by that point the 4070 Ti won't be as appealing anymore. But for now you'd be out of your mind to buy a used 3090/3090 Ti for the same price as a 4070 Ti: roughly the same performance, but at 200W higher power consumption (and the increased heat/noise that comes with it).
 

· Registered
Joined
·
4,350 Posts
People need to forget the naming, it clearly doesn't matter anymore.
It never did, but people like labels.

And "4070Ti" is easier than "AD104-60-7680-240/80/240/60-192/12G" (which would be [die]-[streaming multiprocessors]-[CUDA cores]-[TMU/ROP/Tensor/RT cores]-[memory bus/VRAM quantity]). ;)
 

· Enthusiast
Joined
·
383 Posts
Discussion Starter · #20 ·
And "4070Ti" is easier than "AD104-60-7680-240/80/240/60-192/12G".
To really dumb the names down:

1070
2070 (2070/2070 SUPER)
3070 (3070/3070 Ti)
4070 (4070/4070 Ti)

1080 (1070 Ti/1080)
2080 (2080/2080 SUPER)
3080 (3080/3080 12GB)
4080

1090 (1080 Ti/Titan)
2090 (2080 Ti/Titan)
3090 (3080 Ti/3090/3090 Ti)
4090

I went from a 1080 Ti (1090) to a 2080 Ti (2090), skipped the 3080 Ti (3090), and really want a 4080 Ti (4090) but refuse to pay as much as they cost. Seriously contemplating getting a new 4070 Ti: extreme power/performance, and it absolutely demolishes any game at 1440p. It will be a while before we get 4K OLED 240Hz.
 