Even if Nvidia increased RT/PT core performance by 2x, it's not going to be 2x faster in RT/PT games. "2x RT" means only part of the frame's RT work (whatever they doubled) runs faster. Say that part took 2 ms and now takes 1 ms, while the rest of the frame is only ~30% faster (14 ms -> ~11 ms). The frame time drops from 16 ms to ~12 ms, so instead of being ~30% faster overall, the card ends up roughly 35-40% faster in an RT-enabled game, nowhere near 2x.
Yes.

Those graphs where reviewers show the performance hit from turning on RT...the gap on the 5000 series should be lower than on the 4000 series, which will make the proportional increase more than with pure raster titles/settings.
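As a rough sanity check on the frame-time arithmetic in the quoted post, here's a minimal sketch; the 2 ms / 14 ms split and the ~30% general uplift are that post's illustrative numbers, not measurements:

Code:
# Illustrative frame-time model: "2x RT" only speeds up the RT portion of the frame.
rt_ms, rest_ms = 2.0, 14.0              # hypothetical 4090 frame split from the post
old_frame = rt_ms + rest_ms             # 16 ms

new_rt = rt_ms / 2.0                    # the doubled RT part: 2 ms -> 1 ms
new_rest = rest_ms / 1.3                # the rest is only ~30% faster: 14 ms -> ~10.8 ms
new_frame = new_rt + new_rest           # ~11.8 ms

print(f"old: {old_frame:.1f} ms, new: {new_frame:.1f} ms, "
      f"overall speedup: {old_frame / new_frame - 1:.0%}")
# -> ~36% faster overall in the RT-enabled game, nowhere near 2x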

Looks like the raw performance uplift is terrible. It's all about AI DLSS nonsense for the 5K series, which is total trash.
The raw performance uplift is pretty much exactly what it should be given the increase in number of functional units.

You don't magically get a part that is 100% faster by increasing shader and TMU counts by 33% and keeping clock speeds the same (or making them worse by slapping a two-slot cooler on a 575W part).
 
Correct, but with the price increase you're getting much worse price-to-performance this generation.
 
Raster performance is really bad.

The boost clock is holding the card back. I think it's done on purpose, either for heat or because they're planning a future refresh they can charge more for.

I don't know why everybody is so happy with the price, either. As if many people are actually getting the cards at MSRP ;)
Yeah, the Nvidia Kool-Aid seems to be popular right now.

Correct, but with the price increase you're getting much worse price-to-performance this generation.
It's a lot less even at the same price: about half the performance increase we got going from the 3090 to the 4090.
 
I bet the non-frame-gen performance uplift of the 5090 over the 4090 will be, at bare minimum, 30-35%, but probably considerably more (throttling scenarios aside). Does anyone know when reviews/benchmarks are allowed to be released?
 
The notion that RT/PT is 2x as fast as the 4090 makes no sense at all when you look at this chart, where they're obviously padding the numbers again with more inserted frames. If that were true, the increase for the PT games would actually be much higher.

View attachment 2690232
That's a valid point. Just watched DF's video.

They showed:

4080 Super - DLSS P + FG X2 = 100%
5080 - DLSS P + FG X4 = 190%

And also that:

5080 - DLSS P + FG X4 = ~75% higher frame rate than the same 5080 with DLSS P + FG X2

So for that 190% vs 100% above, divide the 190 by 1.75 to see what the 5080's performance would have been at x2 frame gen and you get ~108%. Now, admittedly, they said they had an engineering board and didn't have proper drivers or some such. But seeing the 5080 running only about 10% faster than the 4080 Super in a path-traced game is... odd, to say the least.
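Written out, that back-of-the-envelope division looks like this; the 100%, 190%, and ~1.75x figures are DF's numbers as quoted above, not independent measurements:

Code:
# Normalize DF's 5080 FG x4 result back to an FG x2 equivalent.
fg2_4080s = 100.0        # 4080 Super, DLSS P + FG x2 (baseline, %)
fg4_5080 = 190.0         # 5080, DLSS P + FG x4 (%)
x4_over_x2 = 1.75        # 5080: FG x4 delivers ~75% more frames than FG x2

fg2_5080_equiv = fg4_5080 / x4_over_x2
print(f"5080 at FG x2 (implied): {fg2_5080_equiv:.0f}% of the 4080 Super baseline")
# -> ~109%, i.e. only ~10% ahead of the 4080 Super in that path-traced scene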

Edit: Added screen grabs for those who don't want to search the video:
 
The notion that RT/PT is 2x as fast as the 4090 makes no sense at all when you look at this chart, where they're obviously padding the numbers again with more inserted frames. If that were true, the increase for the PT games would actually be much higher.

View attachment 2690232
I don't doubt for a moment that the actual PT/RT performance has roughly doubled vs. the 4090. However, that doesn't imply anywhere near double the game performance, unless the game is profoundly bottlenecked by RT/PT.

What 2x more RT performance means is that the performance impact we see here...
[chart showing the performance hit from enabling RT]


...will be cut by half, at most.

Every step has overhead and inefficiencies. Extra fake frames aren't going to increase performance linearly, because it takes work to create those frames. Massive increases in RT performance aren't going to increase game performance linearly, because there is still a ton of raster work that has to be done and integrated. That's why NVIDIA has the fine print on their slides.

Barring some profound architectural shift, a video card is mostly just the sum of its most important parts. That's the whole thing that has driven rendering performance for the last 30 years: it's embarrassingly parallel. You add more units, make sure you don't let bottlenecks get out of control, and you get a near-linear increase in performance. The rest of the advances come from figuring out how to do less work.
 
The ROG Astral has a "Power Detector+" feature for the 16-pin connection. Does anyone know if this is exclusive to ASUS or if all 5090s will have this feature?
View attachment 2690242
The newbs will rejoice if it is available to all models.
One has to be a special case to not know how to plug in a connector.
 
How does one get an FE card in Canada? I don't want these stupid 4-slot coolers.
From past experience with the 4090 FE, they can only be purchased from Best Buy in Canada at retail MSRP. You can also get them on Amazon from 3rd-party sellers if you want to pay a markup. Hopefully NVIDIA opens up more vendor allocations for FE cards.

Speaking of the FE, the power connector is angled this gen. This is a huge plus for the SFF community; no more need for those angled cables. I wasn't planning on upgrading from my 4090 Strix, but this is really tempting now... o_O
 
I don't doubt for a moment that the actual PT/RT performance has roughly doubled vs. the 4090. However, that doesn't imply anywhere near double the game performance, unless the game is profoundly bottlenecked by RT/PT.

What 2x more RT performance means is that the performance impact we see here...
[chart showing the performance hit from enabling RT]


...will be cut by half, at most.

Every step has overhead and inefficiencies. Extra fake frames aren't going to increase performance linearly, because it takes work to create those frames. Massive increases in RT performance aren't going to increase game performance linearly, because there is still a ton of raster work that has to be done and integrated. That's why NVIDIA has the fine print on their slides.

Barring some profound architectural shift, a video card is mostly just the sum of its most important parts. That's the whole thing that has driven rendering performance for the last 30 years: it's embarrassingly parallel. You add more units, make sure you don't let bottlenecks get out of control, and you get a near-linear increase in performance. The rest of the advances come from figuring out how to do less work.
Bear with me for a minute. Let's take one of those 50% FPS-drop scenarios and use artificial FPS numbers for easier maths. With a 4090:

Control Native 4K: 100fps
Control Native 4K + RT: 50fps

Now looking at the 5090, ignore power/heat limits for a second. Pretend we're running an unlocked bios that's not throttling it and we have a good water block on it.
+33% more shaders
+80% memory bandwidth & more cache
+100% RT

Now, I don't have numbers for how the increased memory bandwidth will affect performance, but the 4090 was definitely memory constrained, so there will be an uplift from that. But doing some basic maths:

Native 4K, no RT: 100fps -> 133fps from the additional cores
133fps - 25% RT hit (the 50% hit cut in half, assuming "+100% RT") = 99.75fps

Now I know it's not all going to scale perfectly. But "in theory", for unlocked, water-cooled cards, am I wrong in seeing the potential for nearly double the performance in this scenario? Even if not double, what about +75%, without even getting into MFG?
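Here is that best-case scenario as a minimal sketch; the linear +33% shader scaling and the halved RT hit are this post's assumptions, not a prediction of real-world results:

Code:
# Hypothetical best-case scaling: unlocked power, +33% shaders scaling linearly,
# and "+100% RT" cutting the 50% RT hit in half.
fps_4090_raster = 100.0
fps_4090_rt = 50.0                               # 50% hit from enabling RT

fps_5090_raster = fps_4090_raster * 1.33         # ~133 fps
fps_5090_rt = fps_5090_raster * (1.0 - 0.25)     # ~100 fps with the halved RT hit

print(f"raster: {fps_5090_raster:.0f} fps (+{fps_5090_raster / fps_4090_raster - 1:.0%})")
print(f"RT on:  {fps_5090_rt:.0f} fps (+{fps_5090_rt / fps_4090_rt - 1:.0%} vs the 4090 with RT)")
# -> roughly double the 4090's RT-on frame rate in this idealized case; real results will be lower.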
 
3DMark Time Spy Extreme
7900X3D + 2080 Ti @ Stock = 7000 Graphics Score
7900X3D + 3090 @ Stock = 9900 Graphics Score (+41.5% over 2080 Ti)
7900X3D + 3090 Ti @ Stock = 10,400 Graphics Score (+48.5% over 2080 Ti and +5.0% over 3090)
7900X3D + 4090 @ Stock = 20,000 Graphics Score (+185% over 2080 Ti, +100% over 3090 and +92% over 3090 Ti)

MSRP, adjusted for inflation:
2080 Ti = September 2018, $999 (≈ $1249 today)
3090 = September 2020, $1499 (≈ $1815 today)
4090 = October 2022, $1599 (≈ $1699 today)
5090 = January 2025, $1999

$1250 for 7000 Graphics Score
$1815 for 9900 Graphics Score (+45% Cost / +41.5% Performance)
$1699 for 20,000 Graphics Score (-6% Cost / +100% Performance)
$1999 for ? Graphics Score (+18% Cost / +?% Performance)

The 3090 was priced poorly: the cost and the performance uptick were almost identical, so even though the card was more efficient and architecturally improved, you still paid for every % of performance.
The 4090 was an absolute steal: double the performance while the price stayed the same. Even if it had only been a 40% uplift, like the 3090 over the 2080 Ti, it would still have been a good deal at that price.
The 5090 I can't comment on too much, since we don't know the exact performance numbers, but we do know the cost is 18% higher. It's likely we're looking at a fairly poorly priced card, just like the 3090, if we pay 18% more but only get ~20% higher raster performance. With DLSS 4 (MFG) it will in theory double (+100%, like the 3090 to the 4090), but only in DLSS 4 supported titles, and only if it scales properly (optimization and other hardware limitations). Worst case it's only ~20% faster in raster; best case we're looking at up to 40%? Haven't had time to dig into the benchmarks yet. However, as someone who actually uses DLSS, the card is easily worth the 18% price hike to me, as I'm going to get that sweet 70-100% fps boost at 5120x2160 with ray tracing up the wazoo. :giggle:
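For reference, a small sketch that reproduces the cost/performance deltas above from the inflation-adjusted prices and Time Spy Extreme graphics scores listed in this post (the 5090 is left out since its score isn't known yet):

Code:
# Inflation-adjusted price vs. Time Spy Extreme graphics score, per the figures above.
cards = {
    "2080 Ti": (1250, 7000),
    "3090": (1815, 9900),
    "4090": (1699, 20000),
}

prev = None
for name, (price, score) in cards.items():
    if prev:
        p_prev, s_prev = cards[prev]
        print(f"{prev} -> {name}: {price / p_prev - 1:+.0%} cost, "
              f"{score / s_prev - 1:+.0%} performance, "
              f"$/point {p_prev / s_prev:.3f} -> {price / score:.3f}")
    prev = name
# 2080 Ti -> 3090: +45% cost, +41% performance
# 3090 -> 4090:    -6% cost, +102% performance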
 
Inno3D RTX 5090 iCHILL Frostbite

Link


 
I think the most important thing to look for is what power connectors and power limits each of the cards will have. The FE is going to be heavily limited by its power budget; the 4090 already hits its power limit at 3GHz+ depending on the workload, and now you have 33% more cores.

I think we're going to need a model with two power connectors. Not sure if anyone will do a 4x 8-pin model, or if anyone will dare put out a model with an official over-600W BIOS.

Edit: forgot the card can also pull an additional 75W from the PCIe slot (or 100W if you're EVGA), so technically 650-675W official cards that operate within spec will be possible.
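As a rough illustration of those "within spec" ceilings, a minimal sketch using the nominal limits (600W per 16-pin/12V-2x6, 150W per 8-pin, 75W from the slot); actual limits are whatever each vendor's BIOS allows:

Code:
# Nominal "within spec" board power for a few connector configurations.
SLOT_W = 75                                # PCIe slot, per spec (some boards have drawn ~100W)
CONN_W = {"12V-2x6": 600, "8-pin": 150}

configs = {
    "1x 12V-2x6": ["12V-2x6"],
    "2x 12V-2x6": ["12V-2x6"] * 2,
    "4x 8-pin": ["8-pin"] * 4,
}

for name, conns in configs.items():
    total = SLOT_W + sum(CONN_W[c] for c in conns)
    print(f"{name}: up to {total} W within spec")
# -> 675 W for a single 16-pin or 4x 8-pin, 1275 W for a dual 16-pin board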
 
Yeah, it would be interesting to know whether the 4090 HOF pulled more power from the slot than other 4090s, since it came with a 666W BIOS, or whether they just pushed the limits on the connector.
 
Another big question I have for the RTX 5000 series is whether you can finally use DSR/DLDSR/scaling along with DSC. Without that, you can't use DLDSR on those upcoming 4K ultrawides, even at 240Hz; you'd have to drop to 8-bit color or a lower refresh rate.

Without DSC on DP 2.1b:

3840x2160 - 10-bit - HDR = 267Hz max
5120x2160 - 10-bit - HDR = 208Hz max
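Those limits can be roughly reproduced with a simple bandwidth model. This sketch assumes CVT-RB2-style timing (80-pixel horizontal blank, ~460 µs vertical blank), 10-bit RGB, and an effective UHBR20 payload of ~77.6 Gbps after 128b/132b encoding; exact numbers depend on the monitor's actual timings:

Code:
# Approximate max refresh without DSC on DP 2.1 UHBR20 (80 Gbps raw, 128b/132b encoding).
PAYLOAD_BPS = 80e9 * 128 / 132     # ~77.6 Gbps effective payload (assumption)
H_BLANK_PX = 80                    # CVT-RB2-style horizontal blanking (assumption)
V_BLANK_S = 460e-6                 # CVT-RB2-style minimum vertical blanking time
BPP = 30                           # 10 bits per channel, RGB

def max_refresh(h_active, v_active, bpp=BPP, payload=PAYLOAD_BPS):
    # payload = h_total * v_total * bpp * f, with v_total = v_active / (1 - V_BLANK_S * f);
    # solving the resulting linear equation for the refresh rate f:
    h_total = h_active + H_BLANK_PX
    return payload / (h_total * v_active * bpp + payload * V_BLANK_S)

for w, h in [(3840, 2160), (5120, 2160)]:
    print(f"{w}x{h} 10-bit: ~{max_refresh(w, h):.0f} Hz max without DSC")
# -> ~268 Hz and ~208 Hz, in line with the figures above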
 