
[Official] NVIDIA RTX 5090 Owner's Club

1.5M views 21K replies 613 participants last post by  Nizzen  
#1 · (Edited)
Last Updated: May 23, 2025

Note: This content is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0). You are free to share, copy, and redistribute the material in any medium or format under the following terms only: 1) Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. 2) NonCommercial — You may not use the material for commercial purposes (for example, a for-profit website with ads). 3) NoDerivatives — If you remix, transform, or build upon the material, you may not distribute the modified material.

NVIDIA GeForce® RTX 5090


⠀⠀RTX 5080 Owner's Club
→ RTX 5090 Owner's Club


Click here to join the discussion on Discord or join directly through the Discord app with the code kkuFR3d

Image

Source: NVIDIA

SPECS (Click Spoiler)

Rich (BB code):
 
   Architecture Blackwell
   Chip GB202-300-A1
   Transistors 92,200 million
   Die Size 750mm²
   Manufacturing Process 5nm

   CUDA Cores 21760
   TMUs 680
   ROPs 176
   SM Count 170
   Tensor Cores 680

   Core Clock 2017MHz
   Boost Clock 2407MHz
   Memory 32GB GDDR7
   Memory Bus 512-bit
   Memory Clock 28 Gbps
   Memory Bandwidth 1792 GB/sec
   TDP 575W

   Interface PCIe 5.0 x16
   Connectors 3x DP2.1b, 1x HDMI 2.1b
   Dimensions 304mm x 137mm (2-Slot)

   Price $1999 US

   Release Date January 30, 2025

Rich (BB code):
RTX 5090    | GB202-300 |  5nm | 750mm² | 92.2 BT | 21760 CCs | 680 TMUs | 176 ROPs | 170 SMs | 2407 MHz |  32GB | 2048MB x 16 | GDDR7  | 512-bit | 1792 GB/s | 575W⠀⠀
RTX 4090    | AD102-300 |  5nm | 608mm² | 76.3 BT | 16384 CCs | 512 TMUs | 176 ROPs | 128 SMs | 2520 MHz |  24GB | 2048MB x 12 | GDDR6X | 384-bit | 1008 GB/s | 450W⠀⠀
RTX 3090 Ti | GA102-350 |  8nm | 628mm² | 28.3 BT | 10752 CCs | 336 TMUs | 112 ROPs | ⠀84 SMs | 1865 MHz |  24GB | 2048MB x 12 | GDDR6X | 384-bit | 1008 GB/s | 450W⠀⠀
Note: Gaming performance on Ampere and later architectures does not scale linearly with CUDA core count when compared to previous generations.
Rich (BB code):
RTX 2080 Ti | TU102-300 | 12nm | 754mm² | 18.6 BT |⠀ 4352 CCs  | 272 TMUs |  88 ROPs | ⠀68 SMs | 1635 MHz |  11GB | 1024MB x 11 | GDDR6  | 352-bit | ⠀616 GB/s | 250W
GTX 1080 Ti | GP102-350 | 16nm | 471mm² | 12.0 BT |⠀ 3584 CCs  | 224 TMUs |  88 ROPs | ⠀28 SMs | 1582 MHz |  11GB | 1024MB x 11 | GDDR5X | 352-bit | ⠀484 GB/s | 250W
GTX 980 Ti  | GM200-310 | 28nm | 601mm² |  8.0 BT |⠀ 2816 CCs  | 176 TMUs |  96 ROPs | ⠀22 SMs | 1076 MHz |   6GB |  512MB x 12 | GDDR5  | 384-bit | ⠀336 GB/s | 250W
GTX 780 Ti  | GK110-425 | 28nm | 551mm² |  7.1 BT |⠀ 2880 CCs  | 240 TMUs |  48 ROPs | ⠀15 SMs |  928 MHz |   3GB |  256MB x 12 | GDDR5  | 384-bit |⠀ 336 GB/s | 250W
GTX 680     | GK104-400 | 28nm | 294mm² |  3.5 BT |⠀ 1536 CCs  | 128 TMUs |  32 ROPs |  ⠀8 SMs | 1058 MHz |   2GB |  256MB x 8  | GDDR5  | 256-bit | ⠀192 GB/s | 200W
GTX 580     | GF110-375 | 40nm | 520mm² |  3.0 BT |  ⠀512 CCs  |  64 TMUs |  48 ROPs | ⠀16 SMs |  772 MHz | 1.5GB |  128MB x 12 | GDDR5  | 384-bit | ⠀192 GB/s | 250W
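For quick sanity checks, the bandwidth column in the tables above follows directly from bus width and per-pin data rate. A minimal sketch (illustrative arithmetic, not an official formula source):

```python
# Peak memory bandwidth = (bus width in bits / 8) * effective data rate (Gbps)
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_gbs(512, 28))  # RTX 5090: 512-bit GDDR7 @ 28 Gbps -> 1792.0
print(mem_bandwidth_gbs(384, 21))  # RTX 4090: 384-bit GDDR6X @ 21 Gbps -> 1008.0
```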

ASUS
ASUSTeK Computer, stylised as ASUS, was founded in Taipei, Taiwan, in 1989, and is currently headquartered in Taipei, Taiwan.

Model        | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB    | WB        | Power Stages      | MPN
Astral LC OC | 289mm  | 2.40 | AIO | 2    | 2    | 600W        | Astral | A / B / E | 31× MP86670 (80A) | 90YV0LW2-M0NA00
Astral OC    | 358mm  | 3.80 | 4   | 2    | 2    | 600W        | Astral | A / B / E | 31× MP86670 (80A) | 90YV0LW0-M0NA00
TUF OC       | 348mm  | 3.60 | 3   | 2    | 2    | 600W        | TUF    | A / E / W | 31× SiC654A (50A) | 90YV0LY0-M0NA00
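As a rough aid for reading the Power Stages column, nameplate current capability can be compared across cards. This is a deliberate simplification (real boards split stages across core, memory, and aux rails, and sustained limits are thermal), so treat the figures as illustrative only:

```python
# Hypothetical helper: nameplate VRM current capability from the table above.
# Overstates what the core rail alone can deliver, since not all stages feed it.
def vrm_capacity_amps(stage_count: int, amps_per_stage: int) -> int:
    return stage_count * amps_per_stage

astral = vrm_capacity_amps(31, 80)  # Astral: 31x MP86670 (80A) -> 2480 A nameplate
tuf = vrm_capacity_amps(31, 50)     # TUF: 31x SiC654A (50A)   -> 1550 A nameplate
print(astral, tuf)
```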

GIGABYTE
GIGA-BYTE Technology (stylised as GIGABYTE) was founded in Taipei, Taiwan, in 1986, and is currently headquartered in Taipei, Taiwan.

Model                | Length | Slot | Fan   | HDMI | BIOS | Power Limit | PCB      | WB | Power Stages | MPN
XTREME WATERFORCE WB | 235mm  | 2    | Water | 1    | 1    | 600W        | GIGABYTE | —  | 31× MP87993  | GV-N5090AORUSX-WB-32GD
XTREME WATERFORCE    | 245mm  | 2    | AIO   | 1    | 1    | 600W        | GIGABYTE | —  | 31× MP87993  | GV-N5090AORUSX-W-32GD
MASTER ICE           | 360mm  | 3.75 | 3     | 1    | 2    | 600W        | GIGABYTE | —  | 31× MP87993  | GV-N5090AORUSM-ICE-32GD
MASTER               | 360mm  | 3.75 | 3     | 1    | 2    | 600W        | GIGABYTE | —  | 31× MP87993  | GV-N5090AORUS-M-32GD
GAMING OC            | 342mm  | 3.50 | 3     | 1    | 2    | 600W        | GIGABYTE | —  | 29× MP87993  | GV-N5090GAMING-OC-32GD
WINDFORCE OC         | 342mm  | 3.25 | 3     | 1    | 2    | 600W        | GIGABYTE | —  | 29× MP87993  | GV-N5090WF3OC-32GD

INNO3D
InnoVISION Multimedia was founded in Hong Kong, China, in 1989. It was acquired by PC Partner in 2008 and is currently headquartered in Hong Kong, China.

Model            | Length | Slot | Fan   | HDMI | BIOS | Power Limit | PCB       | WB | Power Stages | MPN
iCHILL FROSTBITE | 204mm  | 2.00 | Water | 1    | 1    | 600W        | Reference | A  | 29× MP87993  | C50903-32D7X-1759FB
iCHILL X3        | 334mm  | 3.65 | 3     | 1    | 1    | 575W        | Reference | A  | 29× MP87993  | C50903-32D7X-175967H
X3 OC            | 333mm  | 3.00 | 3     | 1    | 1    | 575W        | Reference | A  | 29× MP87993  | N50903-32D7-17593928

MSI
Micro-Star International (stylised MSI) was founded in Taipei, Taiwan, in 1986, and is currently headquartered in Taipei, Taiwan.

Model             | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB       | WB    | Power Stages | MPN
SUPRIM LIQUID SOC | 280mm  | 2.55 | AIO | 1    | 2    | 600W        | MSI       | A / B | 29× MP87993  | G5090-32SLS
SUPRIM SOC        | 359mm  | 3.80 | 3   | 1    | 2    | 600W        | MSI       | A / B | 29× MP87993  | G5090-32SPS
VANGUARD SOC      | 357mm  | 3.80 | 3   | 1    | 2    | 600W        | MSI       | A / B | 29× MP87993  | G5090-32VGS
GAMING TRIO OC    | 359mm  | 3.50 | 3   | 1    | 2    | 600W        | MSI       | A / B | 29× MP87993  | G5090-32GTC
VENTUS 3X OC      | 325mm  | 3.35 | 3   | 1    | 1    | 575W        | Reference | A     | 29× MP87993  | G5090-32V3C

NVIDIA
Nvidia Corporation (stylised NVIDIA) was founded in California, United States, in 1993, and is currently headquartered in California, United States.

Model            | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB    | WB | Power Stages | MPN
Founders Edition | 304mm  | 2.00 | 2   | 1    | 1    | 600W        | NVIDIA | E  | 29× MP87993  | 900-1G144-2530-000

PALIT | GAINWARD - Not available in North America
Palit Microsystems (stylised PaLiT) was founded in Taipei, Taiwan, in 1988, acquired Gainward in 2005, and is currently headquartered in Taipei, Taiwan.


Model       | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB   | WB        | Power Stages | MPN
GameRock OC | 332mm  | 3.55 | 3   | 1    | 2    | 600W        | Palit | A / B / W | 29× MP87993  | NE75090S19R5-GB2020G
Phantom GS  | 332mm  | 3.50 | 3   | 1    | 2    | 600W        | Palit | A / B / W | 29× MP87993  | NE75090S19R5-GB2020P

PNY
PNY Technologies was founded in New York, United States, in 1985, and is currently headquartered in New Jersey, United States.

Model   | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB | WB | Power Stages | MPN
OC ARGB | 329mm  | 3.50 | 3   | 1    | 1    | 600W        | PNY | O  | 29× MP87993  | VCG509032TFXXPB1-O
OC      | 329mm  | 3.50 | 3   | 1    | 1    | 600W        | PNY | O  | 29× MP87993  | VCG509032TFXPB1-O

ZOTAC
ZOTAC, a subsidiary of PC Partner, was founded in Hong Kong, China, in 2006, and is currently headquartered in Hong Kong, China.

Model                | Length | Slot | Fan | HDMI | BIOS | Power Limit | PCB   | WB | Power Stages | MPN
AMP Extreme INFINITY | 333mm  | 3.50 | 3   | 1    | 2    | 600W        | ZOTAC | A  | 29× MP87993  | ZT-B50900B-10P
SOLID OC White       | 330mm  | 3.40 | 3   | 1    | 2    | 600W        | ZOTAC | A  | 29× MP87993  | ZT-B50900Q-10P
SOLID OC             | 330mm  | 3.40 | 3   | 1    | 2    | 600W        | ZOTAC | A  | 29× MP87993  | ZT-B50900J-10P

TECHPOWERUP | GPU-Z

Download TechPowerUp GPU-Z

NVIDIA | NVFLASH

Download NVIDIA NVFlash

BIOS | ROM

TechPowerUp BIOS Collection < Verified

TechPowerUp BIOS Collection < Unverified

OVERCLOCKING | TOOLS

Download ASUS GPUTweak

Download Colorful iGame Center

Download Gainward EXPERTool

Download Galax/KFA2 Xtreme Tuner Plus

Download Gigabyte AORUS Engine

Download Inno3D TuneIT

Download MSI Afterburner

Download Palit ThunderMaster

Download PNY VelocityX

Download Zotac FireStorm


COOLING | WATER BLOCKS

Alphacool

Bykski

EKWB

Watercool
 
#2 · (Edited)
Welcome to the Leaderboard for the GeForce RTX 5090 Owner's Club on Overclock.net! Here, you can showcase your GPU's prowess using two renowned benchmarks: 3DMark Steel Nomad (SN) and UNIGINE Superposition (SUPO). This leaderboard doubles as proof of your ownership of the mighty RTX 5090.
This initiative also aims to compare performance, overclocking capabilities, and thermal management across different models.

Leaderboard for Steel Nomad

Rank | User           | Brand    | Model            | Average Core | Memory | Temp | Power | Driver | CPU     | Date
🥇   | @willverduzco  | MSI      | Vanguard         | 3082 MHz     | 33 500 | 35°C | 600W  | 576.15 | 13900KS | April 27
🥈   | @Menko22       | Gigabyte | Master           | 2925 MHz     | 31 400 | 54°C | 600W  | 572.47 | 9800X3D | March 2
🥉   | @zhrooms       | ASUS     | Astral           | 2970 MHz     | 34 000 | 62°C | 600W  | 576.40 | 9800X3D | May 15
4    | @NBPDC505      | ASUS     | —                | 2955 MHz     | 32 900 | 52°C | 600W  | 572.47 | 14900K  | February 25
5    | @DigitalJack3t | ASUS     | —                | 2977 MHz     | 32 400 | 57°C | 600W  | 572.47 | 285K    | March 3
6    | @mron0903      | MSI      | Gaming Trio      | 2955 MHz     | 34 000 | 56°C | 600W  | 572.47 | 14900K  | February 24
7    | @el_marzocco   | NVIDIA   | —                | 2842 MHz     | 33 250 | 45°C | 600W  | 572.47 | 12900K  | February 22
8    | @mron0903      | MSI      | —                | 2910 MHz     | 31 600 | 57°C | 600W  | 572.16 | 14900K  | February 8
9    | @SizzlinChips  | MSI      | SUPRIM           | 2857 MHz     | 34 000 | 51°C | 600W  | 572.47 | 285K    | March 5
10   | @Nd4spdvn      | MSI      | Gaming Trio      | 2872 MHz     | 31 600 | 54°C | 600W  | 572.24 | 9800X3D | February 9
11   | @Alelau18      | NVIDIA   | Founders Edition | 2827 MHz     | 31 800 | 61°C | 600W  | 572.47 | 13900KF | February 22
12   | @Counterassy14 | MSI      | —                | 2767 MHz     | 32 000 | 55°C | 600W  | 572.24 | 9800X3D | February 6


Leaderboard for Superposition

Rank | User            | Brand    | Model            | Score | Temp | Power | Driver | CPU     | Date
🥇   | @willverduzco   | MSI      | Vanguard         | 4498  | 37°C | 600W  | 576.02 | 13900KS | April 27
🥈   | @Menko22        | Gigabyte | Master           | 4486  | 60°C | 600W  | 572.47 | 9800X3D | March 2
🥉   | @DigitalJack3t  | ASUS     | —                | 4438  | 62°C | 600W  | 572.47 | 285K    | March 3
4    | @NBPDC505       | ASUS     | —                | 4433  | 54°C | 600W  | 572.47 | 14900K  | February 25
5    | @mron0903       | MSI      | —                | 4420  | 59°C | 600W  | 572.47 | 14900K  | February 24
6    | @Panchovix      | MSI      | Vanguard         | 4415  | 57°C | 600W  | 572.47 | 7800X3D | March 3
7    | @zhrooms        | ASUS     | Astral           | 4392  | 53°C | 600W  | 576.40 | 9800X3D | May 15
8    | @Alelau18       | NVIDIA   | Founders Edition | 4344  | 64°C | 600W  | 572.47 | 13900KF | February 22
9    | @el_marzocco    | NVIDIA   | —                | 4329  | 47°C | 600W  | 572.47 | 12900K  | February 22
10   | @Nd4spdvn       | MSI      | Gaming Trio      | 4329  | 56°C | 600W  | 572.24 | 9800X3D | February 9
11   | @SizzlinChips   | MSI      | SUPRIM           | 4231  | 54°C | 600W  | 572.47 | 285K    | March 5
12   | @Counterassy14  | MSI      | —                | 4156  | 57°C | 600W  | 572.24 | 9800X3D | February 6
13   | @monkeyboy46800 | NVIDIA   | Founders Edition | 4106  | 67°C | 600W  | 572.16 | 7950X3D | February 6

Join us in this exciting benchmark challenge! Whether you're testing your overclocking limits, comparing against peers, or just curious about your GPU's capabilities, this leaderboard is for you. Let's push the limits of graphical performance together!
Remember, the more participants, the more insightful the data. Let's aim for at least 50 entries to make this a truly representative sample of performance metrics. Happy benchmarking!​

Here's how you can participate:
Note: You must run both benchmarks; I need the information shown in Superposition, and running both keeps the temperature comparison in Steel Nomad valid.

1. Benchmark Setup:
  • Download and Install:
    • 3DMark or 3DMark Demo: Download on Steam. Both versions include the Steel Nomad benchmark.
    • UNIGINE Superposition: Download and install. Use the settings in the image provided for a resolution of 7680x2160.
  • Run Order:
    • Start with Superposition, then Steel Nomad.
Image


2. Benchmark Execution: (you must run, and submit, both for it to be accepted)
  • UNIGINE Superposition:
    • Run once for warm-up, then run again for the actual benchmark. Screengrab the final result screen.
  • 3DMark Steel Nomad:
    • Run it twice as well, so temperatures stabilise; submit the result of the second run.
    • Use the Snipping Tool (with a 3-second delay, set via the clock icon) or a similar tool to capture the FPS and graphs. Scroll down on the results page and hover over the last second to see core clock, memory clock, and temperature. Capture the image as seen below; it should include both Detailed scores and Detailed monitoring:
Image


3. Submission Process:
When submitting your results, please provide:
  • Which card are you running? (ASUS Astral LC/MSI Suprim LC)
  • Is it overclocked? (Yes/No)
    • If Yes, specify: "How much is it overclocked?" (e.g., "+300 on Core and +500 on Memory, voltage untouched")
    • "What is the power limit set to?" (e.g., "575W")
  • Did you run Superposition twice and screenshot the second result? (Yes/No)
  • Did you run Steel Nomad twice and screenshot the second result? (Yes/No)
Note: Use the "Attachment" button for screenshots, not the "Insert Image" button, to keep the forum clean and manageable.

Submission Template:
Code:
Which card are you running?:
Is it overclocked?:
If Yes, how much is it overclocked?:
What is the power limit set to?:
Did you run Superposition twice and screenshot the second result?:
Did you run Steel Nomad twice and screenshot the second result?:
Tag me: @zhrooms
(Optional) Show us a picture of your card in your PC:
 
#3 ·
Thanks for posting this. Great resource!!
 
#5 ·
And so it begins… once more. Hopefully my Galax connect comes through. The HOFs will probably cost an arm and 3 livers though. 2x performance is wild.
 
#6 ·
I didn't catch every word of the keynote...did a pre-order date get announced?

EDIT:
(Also...that compact PCB is going to be sick for watercooling. Fit this thing in an ITX case. Inb4 Asus goes full 'tard and makes it 4 slots and bigger than a 27" monitor)
 
#10 ·
The performance increase with DLSS 4 is unreal!
DLSS Multi Frame Generation generates up to three additional frames per traditionally rendered frame, working in unison with the complete suite of DLSS technologies to multiply frame rates by up to 8X over traditional brute-force rendering. This massive performance improvement on GeForce RTX 5090 graphics cards unlocks stunning 4K 240 FPS fully ray-traced gaming.
This is going to be the worst-selling flagship card, no doubt about it, because few need this much power, especially with them hiking the price by $400. The 5080, at just $1000—literally half the price—sounds so much more appealing. However, I might be one of the few who actually need it, as I'm upgrading from a 34" 21:9 1440p monitor to the new 39" 21:9 4K later this year when they're released, which is about 33% more pixels than 16:9 4K. Driving that resolution with ray tracing, even with DLSS, will definitely struggle in some games. But really, do I need to play ray-traced single-player games at 240 FPS? 120 FPS is already plenty smooth; a 5080 should manage that, but with a 5090, you'd be absolutely sure there'd be no hiccups.
 
#11 · (Edited)
Fun fact: at the time the 4090 launched, the currency here was so weak against the USD that the actual local MSRP for the 5090 is almost the same as it was for the 4090 despite the price increase: 10,299 vs. 9,699. Actually getting a card at MSRP is another story, but still.

Now where are the waterblocks.

The performance increase with DLSS 4 is unreal!

This is going to be the worst-selling flagship card, no doubt about it, because few need this much power, especially with them hiking the price by $400. The 5080, at just $1000—literally half the price—sounds so much more appealing. However, I might be one of the few who actually need it, as I'm upgrading from a 34" 21:9 1440p monitor to the new 39" 21:9 4K later this year when they're released, which is about 33% more pixels than 16:9 4K. Driving that resolution with ray tracing, even with DLSS, will definitely struggle in some games. But really, do I need to play ray-traced single-player games at 240 FPS? 120 FPS is already plenty smooth; a 5080 should manage that, but with a 5090, you'd be absolutely sure there'd be no hiccups.
Yea, I'm already running either 5120x2160 or 5760x2400 through DLDSR, and nothing less than a 5090 will suffice for RT games. Even older ones like Control or Metro Exodus are in the 80-90 FPS range, way too slow. They should really update those games to DLSS 4, by the way; these are signature titles for RT.

Also waiting for 39-40"; 45" is way too big for a desktop monitor. No dice for now though.
 
#12 ·
So are we looking at a 1.5× (50%) raw performance increase on the 5090 vs the 4090?
There are always so many asterisks on that, like "with DLSS" or "in ray-traced benchmarks" or whatever. We won't know what apples-to-apples raster performance looks like until actual benchmarks come out. The 4090 was ~30% faster than a 3090 in that task. IIRC the 3090 was advertised as blowing the 2080 Ti away only because the 2080 Ti was so weak at ray tracing (and the 3090 still struggled with ray tracing, but they then had to cheat by saying "with DLSS").
 
#13 ·
Deal me in.
 
#14 ·
2-slot FE design for 600 watts, what kind of voodoo magic did they do to accomplish this? I'll be putting it under water once blocks are available, but very impressive assuming temperatures are reasonable. Expensive for sure, but probably my last needed upgrade for a while; I'm running a 4K OLED at 175Hz, so I'm pretty excited. I'm curious about the bottleneck on PCIe Gen 3; I don't really want to upgrade from X299 quite yet.
 
#15 ·
I'm curious about the bottleneck on PCIe Gen 3; I don't really want to upgrade from X299 quite yet.
Don't think PCIe has been a bottleneck for a GPU since 2.0 x16. 100gig networking? Yeah. NVME sequential (bleh)? Yeah. GPU? Nope.
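For context on the Gen 3 question, the theoretical x16 bandwidth per generation is easy to compute. The sketch below assumes 128b/130b line coding (used by PCIe Gen 3 through Gen 5) and ignores packet/protocol overhead:

```python
# Theoretical one-direction bandwidth of an x16 link, per PCIe generation.
GTS = {3: 8.0, 4: 16.0, 5: 32.0}  # GT/s per lane
ENCODING = 128 / 130              # 128b/130b line coding (Gen 3-5)

def x16_bandwidth_gbs(gen: int) -> float:
    return GTS[gen] * ENCODING * 16 / 8  # 16 lanes, 8 bits per byte

for gen in (3, 4, 5):
    print(f"Gen {gen}: ~{x16_bandwidth_gbs(gen):.2f} GB/s")  # ~15.75 / ~31.51 / ~63.02
```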
 
#16 ·
2-slot FE design for 600 watts, what kind of voodoo magic did they do to accomplish this? I'll be putting it under water once blocks are available, but very impressive assuming temperatures are reasonable. Expensive for sure, but probably my last needed upgrade for a while; I'm running a 4K OLED at 175Hz, so I'm pretty excited. I'm curious about the bottleneck on PCIe Gen 3; I don't really want to upgrade from X299 quite yet.
There will definitely be some, and it's very game-dependent, so tests on a few random games aren't going to be indicative at all.

PCI-E bandwidth will be the least of your concerns though. That 10980XE is probably pulling like half of gaming performance vs tuned RPL or 9800X3D and with how CPU demanding games have become in recent years, especially with RT enabled, you will struggle to hit even 60 FPS in a lot of them without FG, let alone anything playable. It is a massive handicap for anything faster than maybe 2080 Ti, running that with 5090 for games is just madness.
 
#17 ·
There will definitely be some, and it's very game-dependent, so tests on a few random games aren't going to be indicative at all.

PCI-E bandwidth will be the least of your concerns though. That 10980XE is probably pulling like half of gaming performance vs tuned RPL or 9800X3D and with how CPU demanding games have become in recent years, especially with RT enabled, you will struggle to hit even 60 FPS in a lot of them without FG, let alone anything playable. It is a massive handicap for anything faster than maybe 2080 Ti, running that with 5090 for games is just madness.
I understand the concern about the 10980XE, but running at 4K I don't see more than about 30-40% CPU usage at full load. My FPS was basically on par with most new CPUs, as the GPU was still the bottleneck, pegged at 100%.
 
#18 · (Edited)
Official pricing is 10-20% better than I feared. Looks like shipping early paid off.

So are we looking at 1.5 times the power (50%) raw performance increase on the 5090 vs the 4090?
33% more functional units at potentially higher clocks.

I'd expect around 40% more performance in high-resolution raster workloads, and fair bit more than that in RT, especially with upscaling and/or frame gen in the picture (which will have lower overhead on the newer card).

Edit: To be clear, I'm not talking about the 4090 FE vs. the 5090 FE at stock, but about how they'll be clocked in the hands of a typical OCNer. The stock 5090 is probably limited in clock speed by its complete lack of a competitor and the dual-slot cooler it's equipped with, rather than by some fundamental clock wall in the hardware, but we'll need to see people actually get them to be sure.

This is going to be the worst-selling flagship card, no doubt about it, because few need this much power, especially with them hiking the price by $400.
It's almost certainly going to sell out in whatever quantity they decide to order. The fastest of anything is always in demand, and NVIDIA has always been good at inflating demand.

Supply is probably going to be rather limited as well. This far into the AI boom, with the sort of backlog that has been building up, it's unlikely they devoted too much of their wafer allocation to GB202.

The 5080, at just $1000—literally half the price—sounds so much more appealing.
It may be 50% of the price, but it's also only going to be about 60-65% of the performance... so the performance per dollar of the 5090 is not hugely worse.

However, it sure looks more appealing than the 4080 did.
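The perf-per-dollar point can be made concrete with a quick sketch. The 60-65% performance figure is the poster's estimate, and the MSRPs of $999 and $1999 are assumed:

```python
# Relative performance per dollar, using the post's own assumptions.
def perf_per_dollar(relative_perf: float, price_usd: float) -> float:
    return relative_perf / price_usd

p5090 = perf_per_dollar(1.000, 1999)  # baseline
p5080 = perf_per_dollar(0.625, 999)   # midpoint of the 60-65% estimate
print(round(p5080 / p5090, 2))        # ~1.25: the 5080 gets ~25% more perf per dollar
```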
 
#19 ·
Time since 4090 released (to 5090 release)
841 days
120 weeks and 1 day
27 months and 18 days
2 years, 3 months and 18 days

$1599 MSRP / 27.5 months = $58/month

Here in Scandinavia, they still retail for $1750 pre-tax, which is $150 above MSRP. You can find them used for about 70% of the new price, roughly $1250. That's a 30% loss, or $500, over 27.5 months, meaning a loss of just $18 per month, which is basically negligible. I'd be surprised if the 4090 retains its value now, though, once the 50 series is out in February, since it doesn't support DLSS 4 Multi Frame Generation (MFG). That means even a 5070 Ti might outperform it in titles that support MFG, which improves performance by up to 40% and uses 30% less VRAM, so it's difficult to see a 4090 being worth more than $1000, maybe even as little as $800, about half MSRP.

Also, as far as I'm aware, the 4090 can't drive the new 5120x2160 (21:9 4K) monitors at 240Hz with HDR enabled (even with DSC, the old DP 1.4a can only manage that at 3840x2160, not 5120x2160), so I can't see any scenario where you'd want a used 4090 over a new 5080: no warranty, less efficient, obsolete display outputs, and no support for the latest DLSS (or likely future DLSS improvements, such as a 4.5 and beyond).

Definitely tempted to get a 5090 at launch. Value should hold up well even after the 60 series because of the new AI-management processor and so on (new and improved AI capability and performance), so it shouldn't go out of date the way the 4090 has; as pointed out above, the new display outputs make it that much more future-proof.
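The depreciation figures in the post work out as follows (the used-market prices are the poster's local estimates, not measured data):

```python
# Cost-of-ownership arithmetic from the post above.
msrp_4090 = 1599
months_owned = 27.5                     # 4090 launch to 5090 launch

print(round(msrp_4090 / months_owned))  # ~58 USD/month if bought new at MSRP

local_new, local_used = 1750, 1250      # Scandinavian pre-tax new vs. used prices
print(round((local_new - local_used) / months_owned))  # ~18 USD/month actual loss
```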
 
#20 ·
I understand the concern about the 10980XE, but running at 4K I don't see more than about 30-40% CPU usage at full load. My FPS was basically on par with most new CPUs, as the GPU was still the bottleneck, pegged at 100%.
He's not wrong though. Cascade Lake is basically high-core-count Kaby Lake. You should easily see a 50% FPS uplift by moving to a current-gen CPU, even at 4K.


The 10980XE's cores would be comparable to something around the Ryzen 2000 series, which sits at the bottom of the charts. And those charts aren't showing 99th-percentile FPS, which would look even worse the further back you go in CPU generations. If you were on a 10850K or something, it probably wouldn't make a huge difference at 4K, but even that is two generations newer than Kaby Lake.

TL;DR: I'd seriously look at spending half as much upgrading the CPU/mobo/RAM before dumping $2K on a faster GPU.