Originally Posted by TranquilTempest
Linus has some very questionable testing methodology with regards to latency, so it would be unwise to draw any conclusions from it. There are three main problems. First, the manual frame-counting method is very slow, so he can't really collect enough samples to account for the random timing of the input event. Second, it looks like he's just letting the games run uncapped and letting the natural bottlenecks determine the framerate, which may be fluctuating or bottlenecked in a different part of the render chain; this alone is reason to throw everything there out the window, because he's comparing GPU-limited scenarios to CPU-limited scenarios to display-limited scenarios, and not in a consistent way. To specifically isolate FreeSync vs. G-Sync you need to use an in-game framerate cap with very consistent render times (like fps_max in CS:GO). That way you know you're measuring the display interface, and not the GPU or the game engine. (Third-party and driver-based framerate caps add input lag, and, importantly, they add different amounts of input lag depending on where in the pipeline they limit framerate.)
Third, they're comparing different monitors, which may very well have unaccounted-for differences.
To get any kind of confidence in their results, they'd need to benchmark the latency of both systems on the same monitor with vsync off (preferably a CRT), then get baseline vsync-off, fixed-refresh-rate input-lag numbers for each display (to make sure each display sits the same distance from the CRT baseline in that scenario), and only then test G-Sync vs. FreeSync.
Also, their latency numbers just look high in general; given the stated scenarios, they should be roughly half what was graphed.
I agree that Linus either has very questionable testing methodology or simply isn't giving enough information to understand his tests.
I have seen many of his videos, and most of them I would not trust as performance benchmarks.
Now, I understand that YouTube videos have to keep a certain format to attract a wide crowd (and since that is his revenue, it's understandable), but it is not a well-performed review, and because of his follower numbers people will take his testing at face value, spreading misinformation all along.
One example was when Linus compared Windows 7, Windows 8, and Windows 10 and claimed that boot times for Windows 7 were much longer than for Windows 10.
In my case, my Windows 7 install booted faster than any of the versions he was testing, so either his result is wrong or I am missing information.
In general, that also means the tests cannot be replicated to check whether the reviewer is being honest or whether it is simply a paid commercial.
When that happens, I stop treating that information as valid.
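To put TranquilTempest's sample-count point in numbers: on a fixed-refresh display, a click lands at a random point within the frame, so measured input-to-photon latency jitters by up to one full frame time even when the pipeline itself is constant. A quick Monte Carlo sketch (all numbers hypothetical: 60 Hz display, an assumed 20 ms fixed pipeline latency) shows how much the *average* still wobbles from run to run when you only count a handful of frames:

```python
import random
import statistics

FRAME_MS = 1000 / 60        # one frame at 60 Hz, ~16.7 ms
BASE_LATENCY_MS = 20.0      # assumed fixed pipeline latency (made-up number)

def one_measurement():
    # The input event lands at a random phase within the frame,
    # adding anywhere from 0 to one full frame of extra latency.
    return BASE_LATENCY_MS + random.uniform(0, FRAME_MS)

def mean_of(n):
    # Average latency over n manually counted samples.
    return statistics.mean(one_measurement() for _ in range(n))

random.seed(1)
for n in (5, 50, 500):
    # Repeat the whole n-sample experiment 200 times and see how much
    # the reported average moves between repetitions.
    runs = [mean_of(n) for _ in range(200)]
    print(f"n={n:4d}  run-to-run std dev of the mean: "
          f"{statistics.pstdev(runs):.2f} ms")
```

With only a few frame-counted samples the averaged result still swings by a couple of milliseconds purely from input-timing randomness, which is the same order as the differences being compared; it takes hundreds of samples before that noise shrinks well below 1 ms.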