I wonder how Nvidia is training their networks. I'd have thought they'd train on textures and other assets separately, so the network would learn how they should look from lots of angles, etc. But since DLSS is so resolution-dependent, it seems like they're rendering the game from different angles and training on complete frames instead. That doesn't make sense to me. Also, they should have easy access to depth and edge information, so why can't they also build a good edge-AA option from it?
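To illustrate the depth-based edge idea: a post-process pass can flag geometry silhouettes just from depth discontinuities, which is the kind of information the engine already has. This is a minimal toy sketch, not anything Nvidia has described; the function name and threshold are made up for illustration, and real edge-AA passes typically combine depth with color/normal discontinuities.

```python
import numpy as np

def depth_edges(depth, threshold=0.1):
    """Flag pixels at depth discontinuities (hypothetical sketch).

    `depth` is a 2D array of per-pixel depth values; `threshold` is an
    illustrative cutoff, not a tuned value.
    """
    # Forward differences along each axis, padded so the output
    # keeps the input shape.
    dx = np.abs(np.diff(depth, axis=1, append=depth[:, -1:]))
    dy = np.abs(np.diff(depth, axis=0, append=depth[-1:, :]))
    # A pixel is an edge if either depth gradient exceeds the threshold.
    return (dx > threshold) | (dy > threshold)

# Toy depth buffer: a near object (depth 1.0) in front of a far wall (10.0).
depth = np.full((4, 4), 10.0)
depth[1:3, 1:3] = 1.0
print(depth_edges(depth))  # True along the object's silhouette
```

An AA filter would then blend only the flagged pixels, leaving interior texture detail untouched.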
DLSS is looking extremely half-baked.
Asus Prime X470-Pro
EVGA GeForce RTX 2070 XC Ultra
TeamGroup T-Force 16 GB (2x8) Pro Dark (B-die TDPGD416G3200HC14ADC01)
Samsung 840 EVO 250 GB
Seasonic Focus Plus Platinum SSR-750PX
Corsair H80i (not V2 or GT)
Creative SoundBlaster Z (OEM)