Originally Posted by
lPizzal
TXAA does some fun hybrid trickery. It takes MSAA coverage information to handle geometry edges, then feeds temporal information from the previous frame into the AA shader to smooth edges even further than MSAA alone and suppress crawling lines.
It is the best AA solution for removing crawling lines, but it juggles multiple inputs and interpolates heavily, and to hide the resulting artifacts it blurs a lot. It's very taxing and sacrifices pretty much the best part of each individual technique while combining all of their cons, just to remove crawling lines. It IS better than SMAA, though, as it DOES introduce more information into the image.
This AA algorithm was very ambitious, trying to combine the best of both worlds, but it ended up producing a blurry mess.
The sharpness you gain from extra samples is lost to the heavy temporal blending, and the speed of the cheap blur-based methods is lost to the cost of the MSAA sampling.
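To make the temporal half concrete, here is a minimal sketch of the history-blend step that a TAA-style resolve performs each frame. The function name and blend factor are illustrative assumptions, not NVIDIA's actual TXAA code; a real implementation also reprojects the history buffer along motion vectors and clamps it.

```python
import numpy as np

def temporal_resolve(history, current, alpha=0.1):
    # Exponential blend: the output is mostly *old* frames. A small
    # alpha suppresses crawling/flickering edges, but it is also
    # exactly where the "TXAA is blurry" complaint comes from.
    return alpha * current + (1.0 - alpha) * history

# Toy usage: a pixel on a crawling edge flickers every frame, but the
# accumulated history settles toward a stable average instead.
history = np.zeros(3)
for i in range(8):
    frame = np.ones(3) if i % 2 == 0 else np.zeros(3)  # flicker
    history = temporal_resolve(history, frame)
    print(round(history[0], 3))
```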
SMAA shows up here and there; I recall Crysis 3 and some other games using it. You can also use an SMAA injector to force a game to use the algorithm: just google the game's name plus "SMAA injector" and you may or may not find a tool that adds the feature. Some developers don't bother, arguing that the difference between a properly implemented, well-tuned FXAA and SMAA is small; many people call that lazy design, because although SMAA has more dials and settings you have to tune for your game, it is objectively better and smarter than FXAA.

NVIDIA Inspector gives you access to the driver's hidden front end, where you can tweak tons of stuff. There are lots of driver-side AA implementations with different tweaks here and there; just google what's possible. You can pull shenanigans like 32xMSAA, where you throw a ton of samples at the scene and get diminishing returns in image quality (some quick numbers below).
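On the diminishing-returns point: with N MSAA samples per pixel, an edge can only produce N + 1 distinct coverage levels, so every doubling of the sample count halves the worst-case coverage error while doubling the cost. A quick back-of-the-envelope sketch:

```python
# N samples per pixel -> N + 1 representable edge-coverage levels,
# so the worst-case coverage quantization error is 1 / (2 * N).
# Doubling the sample count costs ~2x for only half the error.
for n in (2, 4, 8, 16, 32):
    print(f"{n:>2}x MSAA: {n + 1:>2} coverage levels, "
          f"max coverage error ~{1.0 / (2 * n):.3f}")
```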
First of all, FPS is never fixed. If it were, we would have solved 90% of the problems in real-time computing.
FPS is always variable: sometimes games run faster, sometimes slower.
What G-SYNC ON does is make the monitor's refresh rate match the current fps. That solves the problems you get when the fps doesn't divide evenly into the refresh rate, such as tearing. Without G-SYNC you can work around this by locking the fps to an even divisor of the refresh rate: if you can't hold 60 fps anymore, you drop down to 30 and display each frame for two monitor refreshes. This sucks, though, and stutters like crazy at the transition.
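Here is a tiny sketch of why a frame rate that doesn't divide the refresh rate judders on a fixed-refresh display. The pacing model is deliberately simplified (ideal vsync, whole refreshes per frame), so treat the numbers as illustrative only:

```python
REFRESH_HZ = 60

def refreshes_per_frame(fps, frames=8):
    # Under vsync each frame occupies a whole number of refreshes;
    # track the rendering debt and round to the nearest refresh.
    refresh_t, frame_t = 1.0 / REFRESH_HZ, 1.0 / fps
    counts, debt = [], 0.0
    for _ in range(frames):
        debt += frame_t
        shown = max(1, round(debt / refresh_t))
        debt -= shown * refresh_t
        counts.append(shown)
    return counts

print(refreshes_per_frame(30))  # [2, 2, 2, ...]   -> even pacing
print(refreshes_per_frame(45))  # mix of 1s and 2s -> visible judder
```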
As for the NVIDIA claim:
It makes sense that if you game at 1080p above 60 fps, the game can sometimes render at 4K in low-demand scenes and thus improve image quality whenever there is headroom.
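For concreteness, here is a minimal sketch of the kind of dynamic-resolution controller such a feature implies. The thresholds, scale steps, and names are hypothetical, not NVIDIA's actual heuristic:

```python
TARGET_FRAME_MS = 16.7      # 60 fps budget
SCALES = [1.0, 1.5, 2.0]    # e.g. 1080p render scale up to 4K-ish

def next_scale_index(idx, last_frame_ms):
    if last_frame_ms > TARGET_FRAME_MS and idx > 0:
        return idx - 1      # over budget: drop resolution
    if last_frame_ms < 0.7 * TARGET_FRAME_MS and idx < len(SCALES) - 1:
        return idx + 1      # plenty of headroom: raise resolution
    return idx              # otherwise hold

idx = 0
for ms in (10.0, 9.5, 18.0, 12.0):  # simulated frame times
    idx = next_scale_index(idx, ms)
    print(f"frame took {ms:>4} ms -> render scale {SCALES[idx]}x")
```

Note how the controller bounces between scales as soon as the load fluctuates; that bouncing is exactly what I take issue with.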
I argue against that.
The jumping between two quality levels is immersion-breaking for me. The counterargument to mine is that during fight scenes you don't notice the drop, since there is so much action. I would retort that a consistent experience is more important than switching back and forth, as such a process is never seamless.
The opinion on this is subjective, and NVIDIA obviously has a bias here, since they're praising their own technology. What you think is best depends on you.
My opinion stands: consistent is better than variable, for immersion reasons.