[Techpowerup] NVIDIA DLSS and its Surprising Resolution Limitations - Page 6 - Overclock.net - An Overclocking Community
post #51 of 91 (permalink) Old 02-17-2019, 05:43 PM
New to Overclock.net
 
 
Join Date: Sep 2009
Posts: 1,470
Rep: 35 (Unique: 30)
Quote: Originally Posted by akromatic View Post
I'm waiting for the GTX 2080 Ti to be released rather than the RTX crap.

I'm sure Nvidia will have tons of RTX chips with faulty tensor cores, given their "escapes", and they could make a GTX version of their 2080 Ti by lasering off the tensor cores, i.e. the 1660 Ti or whatever they're calling it.
They won't do that just now. Maybe towards the end of the life cycle, but not now.

CPU (4 items):
  CPU: i7 8700k
  GPU: RTX 2080Ti
  RAM: 16GB DDR4
  Operating System: Windows 10 Pro
post #52 of 91 (permalink) Old 02-17-2019, 08:54 PM
 
 
Join Date: Mar 2013
Posts: 2,312
Rep: 129 (Unique: 81)
Quote: Originally Posted by dantoddd View Post
They won't do that just now. Maybe towards the end of the life cycle, but not now.
Yeah, if the Radeon VII were trouncing the 2080 Ti we wouldn't even need to ask, and it would be $700.

Main (17 items):
  CPU: 6700K
  Motherboard: Gigabyte Z170X-Gaming 3
  GPU: MSI GTX 1080 Gaming X
  RAM: G.Skill Ripjaws V 16GB 3000
  Hard Drive: Samsung 850 Evo 500GB
  Hard Drive: WD Blue 3TB
  Power Supply: EVGA 650 G2
  Cooling: Noctua NH-D15S
  Cooling: Nanoxia Deep Silence 140mm
  Cooling: Nanoxia Deep Silence 120mm
  Case: Corsair 400Q
  Operating System: Windows 10 Enterprise
  Monitor: ViewSonic XG2703-GS 1440p
  Keyboard: Leopold FC750 (MX Brown)
  Mouse: Logitech Performance Mouse MX
  Audio: Mayflower Objective2 + ODAC Rev. B Combo
  Audio: Audio-Technica ATH-A990Z

Nintendo DS (8 items):
  CPU: ARM946E-S 67.028 MHz
  CPU: ARM7TDMI 33.514 MHz
  RAM: 4 MB
  Hard Drive: 256 kB
  Power Supply: 850 mAh
  Operating System: DS OS
  Monitor: 3" 256×192 18-bit
  Monitor: 3" 256×192 18-bit
post #53 of 91 (permalink) Old 02-18-2019, 05:14 AM
Smug, Jaded, Enervated.
 
 
Join Date: Aug 2012
Location: Terra Australis
Posts: 1,269
Rep: 101 (Unique: 66)
https://www.youtube.com/watch?time_c...&v=3DOGA2_GETQ Not sure if anyone has posted this yet?

DLSS is garbage at the moment, just like RTX... no surprise there, really; just more Nvidia smoke and mirrors, like PhysX, Hairworks, etc.
 
post #54 of 91 (permalink) Old 02-18-2019, 07:37 AM
New to Overclock.net
 
 
Join Date: Jul 2014
Location: Dallas
Posts: 3,482
Rep: 171 (Unique: 125)
Seems like the tensor cores are the bottleneck for everything they're used for on the 2080 Ti, and I don't necessarily blame Nvidia for it.

Seems like they knew they didn't have enough tensor core units on the 2080 Ti, but they ran out of die space and something had to give: either sacrifice CUDA cores and have the 2080 Ti be a sidegrade to the 1080 Ti, or use up even more silicon, which would have ultimately driven the price up even higher.

The engineers likely wanted more time to make adjustments, or to wait for a node shrink to gain access to more die space, but I'm sure the investors were having none of that.

At the end of the day, the whole tensor core fiasco reminds me of what it would be like if a card came out with about half as much RAM as it needed.

On that note, I wonder if tensor cores could be placed off-die around the chip, like little RAM modules?


post #55 of 91 (permalink) Old 02-18-2019, 08:18 AM
Looking Ahead
 
 
Join Date: Dec 2008
Location: Cluain Dolcáin, Leinster (Ireland)
Posts: 13,043
Rep: 785 (Unique: 536)
Quote: Originally Posted by DNMock View Post
Seems like the tensor cores are the bottleneck for everything they're used for on the 2080 Ti, and I don't necessarily blame Nvidia for it.

Seems like they knew they didn't have enough tensor core units on the 2080 Ti, but they ran out of die space and something had to give: either sacrifice CUDA cores and have the 2080 Ti be a sidegrade to the 1080 Ti, or use up even more silicon, which would have ultimately driven the price up even higher.

The engineers likely wanted more time to make adjustments, or to wait for a node shrink to gain access to more die space, but I'm sure the investors were having none of that.

At the end of the day, the whole tensor core fiasco reminds me of what it would be like if a card came out with about half as much RAM as it needed.

On that note, I wonder if tensor cores could be placed off-die around the chip, like little RAM modules?
They could (I'd say it would make more sense placed on an interposer, like HBM, rather than like GDDR), but it would be slow: they'd have to build a bus to connect the GPU die to the tensor-core ICs. The pros would be that throughput could be really high and the complexity of both the GPU and tensor-core dies would go down (power density, too). The downsides: latency would be high, and overall power consumption would increase.
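To put rough numbers on that bus tradeoff: here's a back-of-envelope sketch in Python of how long it would take just to shuttle a DLSS-sized frame out to an off-die tensor IC and back. The link speeds, latencies, and the FP16 RGBA frame format are all illustrative assumptions, not vendor specs.

```python
# Rough feasibility math for off-die tensor cores. All link numbers are
# illustrative assumptions, not vendor specs.

BYTES_PER_PIXEL = 8      # assumed FP16 RGBA buffers
IN_RES = (2560, 1440)    # frame shipped out to the tensor die
OUT_RES = (3840, 2160)   # upscaled frame shipped back

def transfer_ms(res, gb_per_s, latency_us):
    """Milliseconds to move one frame across an off-die link."""
    w, h = res
    payload = w * h * BYTES_PER_PIXEL              # bytes per frame
    return payload / (gb_per_s * 1e9) * 1e3 + latency_us / 1e3

links = [
    ("PCIe 3.0 x16 (add-in card)", 15.8, 5.0),     # ~15.8 GB/s, ~5 us hop
    ("HBM-style interposer link", 256.0, 0.5),     # ~256 GB/s, sub-us hop
]

for name, bw, lat in links:
    round_trip = transfer_ms(IN_RES, bw, lat) + transfer_ms(OUT_RES, bw, lat)
    print(f"{name:27s}: {round_trip:5.2f} ms per frame, data movement alone")
```

Under those assumptions the interposer hop costs well under half a millisecond per frame, while a PCIe-style link burns about 6 ms on transfers alone, a third of a 60 fps frame budget before any tensor math happens. That's the latency/bandwidth point in concrete terms.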

 



post #56 of 91 (permalink) Old 02-18-2019, 08:27 AM
New to Overclock.net
 
 
Join Date: Jul 2010
Location: Florida
Posts: 3,133
Rep: 170 (Unique: 106)
Quote: Originally Posted by ToTheSun! View Post
Their wording, I bet, was intentionally ambiguous so as to incentivize people to download their software without committing to a solution that would further complicate the adoption of Turing's new features.
This is the most likely answer.

nVidia's™ marketing department would make the most evil despot envious.

CPU: AMD Ryzen 2700X
Motherboard: Asus Prime X470-Pro
GPU: EVGA GeForce RTX 2070 XC Ultra
RAM: TeamGroup T-Force 16 GB (2x8) Pro Dark (B-die TDPGD416G3200HC14ADC01)
Hard Drive: Samsung 840 EVO 250 GB
Power Supply: Seasonic Focus Plus Platinum SSR-750PX
Cooling: Corsair H80i (not V2 or GT)
Monitor: LG 34UC80-B
Keyboard: Logitech G413
Mouse: Logitech G503 RGB
Audio: Creative SoundBlaster Z (OEM)
post #57 of 91 (permalink) Old 02-18-2019, 08:43 AM
New to Overclock.net
 
 
Join Date: Jul 2010
Location: Florida
Posts: 3,133
Rep: 170 (Unique: 106)
DLSS will be the 3.5 GB of Turing.

nVidia™ marketing never mentioned anything about its limitations until 6 months after the hardware launched... They didn't even warn you with ambiguous marketing qualifiers... They just droned on like it's magic, instead of a potential frame-time bottleneck that can only be used at certain resolutions and with certain quality settings (ray tracing) enabled. I think I said this when Turing launched: it's a dedicated GameWorks™ GPU. You pay $1300 for the luxury of HairWorks™.

Ray tracing at native resolution over 1080p won't make sense for another generation or two of GPUs.
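The frame-time point is worth making concrete. Here's a toy model of DLSS as a fixed per-frame tensor-core cost on top of a cheaper render; every constant in it is an illustrative assumption, not a measured value, but it shows why a fixed pass stops paying off as the base frame rate climbs:

```python
# Toy model: DLSS as a fixed per-frame cost. All constants are illustrative
# assumptions, not measured values.

DLSS_COST_MS = 3.0           # assumed fixed tensor-core pass per output frame
SCALE = (1440 / 2160) ** 2   # ~0.44x pixels if 4K DLSS shades a 1440p grid
PIXEL_BOUND = 0.7            # assume 70% of frame time scales with pixel count

def with_dlss(native_ms):
    """Estimated frame time with DLSS under the assumptions above."""
    shaded = native_ms * (PIXEL_BOUND * SCALE + (1 - PIXEL_BOUND))
    return shaded + DLSS_COST_MS

for fps in (30, 60, 144):
    native_ms = 1000 / fps
    dlss_ms = with_dlss(native_ms)
    verdict = "saves time" if dlss_ms < native_ms else "costs time"
    print(f"{fps:>3} fps native: {native_ms:5.1f} ms -> {dlss_ms:5.1f} ms ({verdict})")
```

With these numbers DLSS is a clear win at 30 or 60 fps but a net loss at 144 fps, which would explain why it's only offered at resolutions and settings where the GPU is already pixel-bound.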

post #58 of 91 (permalink) Old 02-18-2019, 11:21 AM
Waiting for 7nm EUV
 
 
Join Date: Nov 2010
Posts: 11,382
Rep: 894 (Unique: 503)
Quote: Originally Posted by GHADthc View Post
https://www.youtube.com/watch?time_c...&v=3DOGA2_GETQ Not sure if anyone has posted this yet?

DLSS is garbage at the moment, just like RTX... no surprise there, really; just more Nvidia smoke and mirrors, like PhysX, Hairworks, etc.

Thanks for the link. So there you have it, guys: it's proven in a real game now. You don't even need 1800p + TAA upscaled to compare to 4K DLSS; 1685p + TAA upscaled not only performs exactly the same, it delivers much better visuals than 4K DLSS. And that's not to mention all of the other limitations DLSS has.


[Image: 1685p Upscaled vs 4K DLSS in BFV - Hardware Unboxed, 18 Feb 2019]

This is an embarrassment for Nvidia.




Good thing BF V's TAA implementation is quite good; it proves that if you invest in TAA, you have what you need to work with.
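The arithmetic behind that comparison is easy to check. A quick sketch, assuming 16:9 targets, taking "1685p" as roughly 2996×1685, and using the commonly reported (not officially confirmed) 1440p internal render for 4K DLSS:

```python
# Pixel counts behind the 1685p-upscaled vs 4K-DLSS comparison above.
# Assumes 16:9 resolutions; the 1440p internal figure for 4K DLSS is the
# commonly reported one, not an official spec.

def megapixels(w, h):
    return w * h / 1e6

out_4k   = megapixels(3840, 2160)   # the output grid both paths target
dlss_in  = megapixels(2560, 1440)   # what 4K DLSS reportedly shades
taa_1685 = megapixels(2996, 1685)   # ~1685p at 16:9, the comparison point

print(f"4K output grid : {out_4k:.2f} MP")
print(f"DLSS shades    : {dlss_in:.2f} MP ({dlss_in / out_4k:.0%} of 4K)")
print(f"1685p shades   : {taa_1685:.2f} MP ({taa_1685 / out_4k:.0%} of 4K)")
```

So the plain TAA upscale shades roughly a third more pixels than DLSS (about 61% of the 4K grid versus about 44%) and still matches its frame rate; the tensor-core pass eats the difference, and it loses on image quality anyway.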



Last edited by tpi2007; 02-18-2019 at 11:28 AM.
post #59 of 91 (permalink) Old 02-18-2019, 11:40 AM
Zen
 
 
Join Date: Jan 2013
Location: Somewhere in US.
Posts: 1,028
Rep: 51 (Unique: 33)
Quote: Originally Posted by tpi2007 View Post
Thanks for the link. So there you have it, guys: it's proven in a real game now. You don't even need 1800p + TAA upscaled to compare to 4K DLSS; 1685p + TAA upscaled not only performs exactly the same, it delivers much better visuals than 4K DLSS. And that's not to mention all of the other limitations DLSS has.

Attachment 254126

This is an embarrassment for Nvidia.

Good thing BF V's TAA implementation is quite good; it proves that if you invest in TAA, you have what you need to work with.
I guess the question is how long it will take to get DLSS to the promised level of visual quality. I think a fast-paced FPS like Battlefield, with destructible environments and many points of view, would be one of the most difficult and time-consuming for it to learn, while a game like FFXV is somewhat easier.

My home PC (15 items):
  CPU: AMD Threadripper 1950X
  Motherboard: Gigabyte Aorus X399 Gaming 7
  GPU: EVGA GeForce RTX 2080 Ti XC Ultra
  RAM: G.Skill DDR4 3600 CL16
  Hard Drive: Samsung 840 Evo 500GB
  Hard Drive: Samsung 960 Pro 500GB
  Power Supply: EVGA SuperNova G2 1300W
  Cooling: Noctua NH-U14S TR4
  Case: Corsair Carbide Air 540
  Operating System: Windows 10 Pro
  Monitor: Dell U2711
  Monitor: Samsung 55" 4K
  Keyboard: Corsair K70
  Mouse: Logitech G502
  Audio: Denon AVR-X3300W
post #60 of 91 (permalink) Old 02-18-2019, 02:37 PM
Graphics Junkie
 
 
Join Date: Feb 2017
Location: USA
Posts: 1,302
Rep: 31 (Unique: 27)
Quote: Originally Posted by DNMock View Post
Seems like the tensor cores are the bottleneck for everything they're used for on the 2080 Ti, and I don't necessarily blame Nvidia for it.

Seems like they knew they didn't have enough tensor core units on the 2080 Ti, but they ran out of die space and something had to give: either sacrifice CUDA cores and have the 2080 Ti be a sidegrade to the 1080 Ti, or use up even more silicon, which would have ultimately driven the price up even higher.

The engineers likely wanted more time to make adjustments, or to wait for a node shrink to gain access to more die space, but I'm sure the investors were having none of that.

At the end of the day, the whole tensor core fiasco reminds me of what it would be like if a card came out with about half as much RAM as it needed.

On that note, I wonder if tensor cores could be placed off-die around the chip, like little RAM modules?
It would be kinda neat if they made RTX something you could do with SLI, the same way you can with PhysX, and released some tensor-only cards that just do the RTX stuff (and would be a lot cheaper), so people who already have a good GPU and a spare PCIe slot wouldn't need a $1200 GPU for it. Honestly, RTX features probably could have lived on a whole separate PCB for a few generations until they could be properly integrated into standard GPU die space without driving the cost so high. Nvidia could have marketed it even more as a high-end feature that way, and they could have done it to an extent where it was actually good enough. It also would have given gamers something to upgrade to without Nvidia having to compete with the excess 10-series inventory they've had issues with, and it would have avoided releasing cards with only a minor die shrink compared to the last gen.

Quote: Originally Posted by tpi2007 View Post
Thanks for the link. So there you have it, guys: it's proven in a real game now. You don't even need 1800p + TAA upscaled to compare to 4K DLSS; 1685p + TAA upscaled not only performs exactly the same, it delivers much better visuals than 4K DLSS. And that's not to mention all of the other limitations DLSS has.

Attachment 254126

This is an embarrassment for Nvidia.

Good thing BF V's TAA implementation is quite good; it proves that if you invest in TAA, you have what you need to work with.
Wow, this is hilarious. DLSS is completely useless.

"DLSS provides much worse image quality than upscaling from a resolution that provides equivalent performance."

i7 7700K @ 4.2GHz
16GB DDR4 3200MHz
GeForce GTX 1080 Ti

Last edited by UltraMega; 02-18-2019 at 02:53 PM.