Originally Posted by tpi2007
This has all the hallmarks of a niche solution that will be dropped in a few years. It sounds like SLI, actually: it needs profiles provided by Nvidia to work, it doesn't support all games, and even in the ones that are supported, your mileage may vary. In this case you may not be able to enable it at all, depending on the specific RTX card you're using and the resolution you want to play at.
I still want to see the games that support 4K DLSS tested at 1800p + TAA + upscaling for comparison. Add that in, and DLSS becomes even more of a niche feature. 1800p + TAA + upscaling is a far more universal solution than DLSS will ever be, and you certainly don't need profiles downloaded through GFE to make it work.
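For scale, here's the raw pixel arithmetic behind that 1800p argument (a minimal sketch; the percentages are just resolution math, not benchmark data, and the 1440p internal-render figure for 4K DLSS is as reported in reviews):

```python
def pixels(height, aspect=16/9):
    """Total pixels for a 16:9 frame of the given vertical resolution."""
    return int(height * aspect) * height

p2160 = pixels(2160)  # native 4K (3840x2160)
p1800 = pixels(1800)  # common upscaling render target (3200x1800)
p1440 = pixels(1440)  # roughly what 4K DLSS reportedly renders internally

print(f"1800p renders {p1800 / p2160:.0%} of native 4K's pixels")
print(f"1440p renders {p1440 / p2160:.0%} of native 4K's pixels")
```

So 1800p shades about 69% of a 4K frame's pixels, versus roughly 44% for a 1440p internal render, which is why the two approaches are worth comparing head to head.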
The Tensor cores' main job is ray tracing denoising; DLSS is just a way for Nvidia to get more people to register on GFE with their e-mail. But given the limited horsepower of the combined RT + Tensor solution, I wonder: if all that die space had gone to raster cores, couldn't game devs have made the ray-tracing-enabled games look equally good with more horsepower dedicated to reflections? Even if that means duplicating renderings, it would still be general raster hardware doing the work, hardware that can be used elsewhere in the game when needed. It's much more versatile. And from what we're seeing with the new Metro game, making things more realistic doesn't always make a game better gameplay-wise.
Also, they could use Voxel Global Illumination (VXGI) to achieve some of these effects. That's a tech Nvidia introduced with Maxwell back in 2014, so we've got three Nvidia consumer architectures capable of doing it.
Voxels can be used by developers, and that's the thing: hardly any do. I only know of one Minecraft mod that uses them. What modern games use it? Close to none. :/
Whereas with "RTX features"... Nvidia is dumping money and people onto big developers to get them implemented.
Looking at TPU's DLSS comparison for Metro: native 1440p looks better than 1440p + DLSS scaled to 4K. Yeah... DLSS seems worse than the cheapo bilinear scaling they presumably used to match resolutions in the comparison images. Ultimately this is what always happens with any AA: 1440p + AA looks blurrier than untouched native 1440p. But of course some people will prefer blurry 1440p + AA/DLSS and try calling it 4K.
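For reference, "cheapo bilinear scaling" is just a weighted average of the four nearest source pixels. A minimal single-channel sketch (illustrative only; I'm not claiming TPU used exactly this):

```python
def bilinear_upscale(src, out_w, out_h):
    """Upscale a 2D list of grayscale values via bilinear interpolation."""
    src_h, src_w = len(src), len(src[0])
    out = []
    for y in range(out_h):
        # Map output coordinates back into source space (corner-aligned).
        fy = y * (src_h - 1) / (out_h - 1) if out_h > 1 else 0
        y0 = int(fy); y1 = min(y0 + 1, src_h - 1); wy = fy - y0
        row = []
        for x in range(out_w):
            fx = x * (src_w - 1) / (out_w - 1) if out_w > 1 else 0
            x0 = int(fx); x1 = min(x0 + 1, src_w - 1); wx = fx - x0
            # Blend the four neighbours: horizontally, then vertically.
            top = src[y0][x0] * (1 - wx) + src[y0][x1] * wx
            bot = src[y1][x0] * (1 - wx) + src[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        out.append(row)
    return out
```

The averaging is exactly why it softens the image: every new pixel is a blend of its neighbours, so hard edges get smeared, which is the blur being compared against here.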
DLSS is definitely trained per resolution and per setting; for example, they'll train DLSS with RT on but not DLSS without RT at a given resolution. Ultimately DLSS is limited to Nvidia's whim: what they train, which games they support. Oh no, a game is partnered with AMD... no DLSS for you!
RT performance... poor, what else, unfortunately. They need to offer more settings to lower its quality, so the effect is less detailed but still present. Or just use voxels, please.
The game... I might finally play the Metro series and see what all the fuss is about. At the time it seemed like a STALKER clone, made by the same people with similar post-nuclear/post-apocalyptic styling. It certainly didn't have a reputation for running fast, though.
WannaBeOCer: NV, and probably AMD too, has selective quality; NV recently made it easier for developers to define how much quality they want per pixel/place in the 3D scene, aka adaptive quality. Both companies ditched precise rendering long ago and employ endless graphical optimizations/cheats that trade image quality for performance. Some of the old optimizations can still be tuned via driver settings, but these newer ones probably won't be, since they depend on developers to implement them and aren't enabled automatically for everything.
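The adaptive-quality idea boils down to something like this toy heuristic (entirely hypothetical names and threshold, not either vendor's actual API): shade low-contrast screen tiles at a coarser rate where nobody will notice.

```python
def pick_shading_rate(tile, threshold=0.1):
    """Hypothetical heuristic: choose a per-tile shading rate.
    tile is a flat list of luminance values in [0, 1]."""
    contrast = max(tile) - min(tile)
    if contrast < threshold:
        return "2x2"  # one shader invocation per 2x2 pixel block
    return "1x1"      # full-rate shading, one invocation per pixel

sky     = [0.80, 0.81, 0.80, 0.82]  # nearly flat: coarse shading is invisible
foliage = [0.10, 0.90, 0.30, 0.70]  # high contrast: keep full-rate shading
print(pick_shading_rate(sky))       # -> 2x2
print(pick_shading_rate(foliage))   # -> 1x1
```

The real implementations let the developer (not a driver toggle) decide where quality drops, which is exactly why these cheats probably won't be user-tunable.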
I'm certainly not a fan of optimizations and cheats that blur the image. I don't mind "lower quality" with less detail as long as edges stay sharp, say, lower texture resolution while the edges between objects are unaffected.