Overclock.net - An Overclocking Community - View Single Post - [nVidia] NVIDIA DLSS: Your Questions, Answered

post #21 of (permalink) Old 02-27-2019, 03:24 PM
Asmodian
Robotic Chemist
Join Date: Aug 2009
Location: San Jose, California
Posts: 2,391
Rep: 177 (Unique: 117)
Quote: Originally Posted by white owl
I expected the GPU to be able to do this on its own as you play the game, always learning and getting better (to some extent).
Then you do not understand neural network training. Once a network ships out of the data center, it stops learning. A user's device might send data back to a data center to grow a training set, but training requires labeled data; you cannot train on data without labels. Where would the ground-truth image come from if nothing is rendering the game at full resolution with 8x SSAA? Even if you did render a ground-truth image, the math of backpropagation (how the weights are updated during training) makes running the network forward look free. This is actually part of their power: a trained neural network is computationally cheap relative to the amount of intelligence potentially built into it.
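To make the asymmetry concrete, here is a minimal sketch (a toy one-layer network in NumPy, nothing to do with DLSS specifically; all names are illustrative): inference is a single matrix multiply plus a nonlinearity, while a training step additionally needs a ground-truth target and the backward-pass math to update the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3)) * 0.1  # weights, as learned in the data center

def forward(x):
    # Inference: one matmul + ReLU. This is all a shipped network ever
    # does on the user's machine, and it is comparatively cheap.
    return np.maximum(x @ W, 0.0)

def train_step(x, target, lr=0.01):
    # Training additionally requires the labeled `target` (ground truth)
    # plus the backward pass: gradient of the loss, then a weight update.
    global W
    h = x @ W
    y = np.maximum(h, 0.0)
    grad_y = 2.0 * (y - target) / y.size   # d(MSE)/dy
    grad_h = grad_y * (h > 0.0)            # backprop through the ReLU
    W -= lr * (x.T @ grad_h)               # weight update
    return float(np.mean((y - target) ** 2))

x = rng.standard_normal((8, 4))
# Pretend labels from a hidden "true" network; without these, no training.
target = np.maximum(x @ rng.standard_normal((4, 3)), 0.0)

losses = [train_step(x, target) for _ in range(200)]
```

The loss only falls because labels were available; without a ground-truth image there is nothing for the gradient to point at, which is exactly why a deployed network cannot keep improving itself.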

What I want to see out of DLSS tech is simply AA: render the game at 4K and use the tensor cores to make it look like it has 8x SSAA, for less of a performance hit than 2x SSAA. That also seems realistic, while upscaling a lower resolution to look like 4K with 8x SSAA does not.