The 768MB on the G-SYNC module was said to be used for "color processing".
That's a simplification, but it is scientifically accurate:
Artifact-free response time acceleration (keep colors from overshooting).
Artifact-free LCD inversion algorithms (prevent checkerboard artifacts -- demo animation, complaint example).
Artifact-free FRC algorithms (improve colors further).
Artifact-free strobing (better Y-axis-compensated RTC algorithms).
My math is that many gigaFLOPS are necessary if you compute these functions entirely by math. To apply the curves properly, you need multiple processing operations per pixel (probably a dozen or more), and there are nearly a billion subpixel updates per second: 1920 x 1080 x (3 subpixels) x 144Hz. That would require about 10 GFLOPS at 10 operations per subpixel. Getting 10 gigabytes/sec of memory throughput is cheaper than getting 10 GFLOPS, so I'm assuming they're not calculating the curves on the fly.
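The back-of-envelope arithmetic above can be sketched out quickly. The 10 operations per subpixel is an assumption from the text, not a confirmed spec:

```python
# Rough estimate of per-subpixel processing load for a 1920x1080 panel
# at 144 Hz. "ops_per_subpixel" is an assumed figure, not a known spec.

width, height, subpixels, refresh_hz = 1920, 1080, 3, 144
ops_per_subpixel = 10  # assumed: roughly a dozen curve operations each

subpixel_updates_per_sec = width * height * subpixels * refresh_hz
flops_needed = subpixel_updates_per_sec * ops_per_subpixel

print(f"{subpixel_updates_per_sec / 1e9:.2f} billion subpixel updates/sec")
print(f"{flops_needed / 1e9:.1f} GFLOPS if curves are computed directly")
```

That works out to roughly 0.9 billion subpixel updates per second, hence the ~10 GFLOPS figure.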
Also, isn't G-SYNC an FPGA and not an ASIC at this time? That slightly limits processing power, but increases flexibility. I thought it was, since ASICs aren't flexible enough for G-SYNC's requirements. From what I heard from John's talks and other sources, G-SYNC has a lot of programmable and upgradeable modes, including variable-refresh-rate modes, fixed-refresh modes, and strobe modes (which require a different Y-axis-compensated overdrive algorithm, to handle varying pixel freshness prior to the full-screen strobe). Other sources mentioned the firmware upgradeability as well.
LightBoost doesn't even have enough bits of precision in its Y-axis-compensated RTC -- the overdrive bands of the Y-axis RTC show up on a LightBoost-enabled monitor during the Flicker Test at http://www.testufo.com/flicker
(Height = Full Screen) while in strobe-backlight or 3D mode. I've determined these are rounding errors in the Y-axis-compensated RTC -- more bits of precision would make the overdrive zones disappear into a fully, gradually blended Y-axis-compensated RTC. So they'll probably need greater-than-8-bit precision during RTC, maybe even floating-point RTC, and possibly advanced temporal dithering in the RTC to blend everything out, since 6-bit RTC cannot completely erase the previous refresh.
It's so incredibly involved to get pixels to settle nicely, it's mind-boggling. If they're off by even one 8-bit shade (1/256th), that's a ~0.4% brightness difference that can be human-visible in some colors -- you'll still see a faint, sharp double ghost (crosstalk, a problem for 3D and for 2D motion too). But TN panels are often 6-bit, and their RTC is probably only 6-bit as a result. The TestUFO Eiffel Tower Test
on a strobe-backlight monitor demonstrates the crosstalk effect in 2D mode -- you see a razor-sharp (non-blurred) ghost double image a few pixels to the side of the Eiffel Tower as it scrolls sideways. Sometimes you also see 6-bit TN "noise" artifacts in the razor-sharp double ghost (occurs more often on the VG278H than on my XL2411T, at the bottom edge of the screen -- drag the TestUFO window near the bottom edge); this instantly reveals the limited bit depth of the RTC.
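The "one shade off" error margin is just the size of a single quantization step, which is easy to compute for the 8-bit and 6-bit cases discussed above:

```python
# Size of a single quantization step as a fraction of full scale,
# for 8-bit vs 6-bit panels -- the "one shade off" error margin.

for bits in (8, 6):
    levels = 2 ** bits
    step = 1 / (levels - 1)  # one code step, full scale normalized to 1.0
    print(f"{bits}-bit: 1 step = {step:.4f} ({step * 100:.2f}% of full scale)")
```

On a 6-bit panel one step is about 1.6% of full scale, roughly four times coarser than 8-bit, which is why 6-bit RTC leaves visibly larger residue.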
Also, RTC and inversion often have nasty interference effects -- see http://www.testufo.com/inversion
(try this on TN 120Hz monitors).
This is greatly amplified by strobing, and has also generated user complaints
in 3D mode. LCD inversion is well explained at TechMind
and in the Lagom.nl tests,
and I know traditional inversion patterns create some issues with 3D modes / strobe modes.
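For readers unfamiliar with inversion: each pixel's drive voltage polarity alternates every refresh, and the spatial pattern of polarities differs per scheme. Here's an illustrative sketch of the common patterns -- these are textbook schemes, not any particular vendor's implementation:

```python
# Illustrative sketch of common LCD inversion polarity schemes. Each
# pixel's drive polarity (+/-) flips every refresh; the spatial layout
# (row, column, or checkerboard "dot" inversion) is what differs.

def polarity(x, y, frame, scheme):
    if scheme == "row":        # alternate polarity per scanline
        base = y % 2
    elif scheme == "column":   # alternate polarity per column
        base = x % 2
    else:                      # "dot": checkerboard pattern
        base = (x + y) % 2
    return "+" if (base ^ (frame % 2)) == 0 else "-"

for frame in (0, 1):
    print(f"frame {frame} (dot/checkerboard inversion):")
    for y in range(4):
        print(" ".join(polarity(x, y, frame, "dot") for x in range(8)))
```

When the image content correlates with the checkerboard (as in the TestUFO inversion patterns), the polarity balance breaks and the pattern becomes visible -- and strobing freezes that error in place instead of letting it blur away.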
(I've done lots of motion testing on LightBoost monitors, and it's amazing how many RTC secrets of a display can be revealed.)
Based on what I now know is needed to cleanly fix RTC problems during strobing, I imagine G-SYNC may need to include:
-- True floating-point RTC.
-- True floating-point framebuffers.
-- An advanced LCD inversion algorithm.
-- An advanced temporal dithering algorithm to convert the floating-point framebuffer into a 6-bit image for the TN LCD.
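The last bullet -- temporal dithering down to 6-bit -- can be sketched minimally with simple error feedback: quantize the high-precision target, carry the rounding error into the next refresh, and the time-average converges on the true value. This illustrates the principle only; G-SYNC's actual algorithm is unknown:

```python
# Minimal temporal-dithering sketch: quantize a high-precision target to
# a 6-bit code each refresh, carrying the rounding error forward so the
# time-average of the displayed codes converges on the true value.
# Principle only -- not NVIDIA's actual algorithm.

def temporal_dither(target, frames, bits=6):
    levels = 2 ** bits - 1
    error = 0.0
    codes = []
    for _ in range(frames):
        want = target * levels + error           # target plus carried error
        code = max(0, min(levels, round(want)))  # quantize to a 6-bit code
        error = want - code                      # carry the residual forward
        codes.append(code)
    return codes

codes = temporal_dither(0.5, 4)              # mid-gray target
avg = sum(codes) / len(codes) / 63           # time-averaged displayed level
```

For a mid-gray target of 0.5, the output alternates between adjacent 6-bit codes, and the average lands exactly on 0.5 -- the eye integrates the alternation into the in-between shade.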
We have to prevent interplay artifacts between FRC + inversion + RTC + strobing; there are so many opportunities for these between merely any two of them (as we've seen from inversion + strobing at http://www.testufo.com/inversion
when viewing it in LightBoost mode).
To simulate true floating-point RTC without too many FLOPS, you'll probably need lots of precomputed floating-point lookup tables (created at monitor bootup), because the number of math operations per pixel becomes mind-boggling when fixing that last 1% of imperfection in LCD pixel state. It's like going from 99% of the speed of light to closer to 100%: the work on that last 1% increases unimaginably, far more than going from 0% to 99%.
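The lookup-table trade-off above can be sketched like this: instead of evaluating the response curve per pixel per refresh, precompute an overdrive value for every (previous, target) pair once at startup. The curve below is a made-up placeholder, not any real panel's response model:

```python
# Sketch of the precomputed-LUT approach to RTC overdrive. For a 6-bit
# panel there are only 64x64 (previous, target) pairs, so the whole
# table is tiny and per-pixel RTC becomes a single memory fetch.
# The curve is an invented placeholder, not a real panel model.

LEVELS = 64  # 6-bit panel

def overdrive_curve(prev, target):
    # placeholder compensation curve: push past the target in
    # proportion to the size of the transition
    boost = 0.35 * (target - prev)
    return max(0, min(LEVELS - 1, round(target + boost)))

# Built once at monitor bootup -- expensive math happens here, not per frame.
RTC_LUT = [[overdrive_curve(p, t) for t in range(LEVELS)]
           for p in range(LEVELS)]

def rtc(prev, target):
    return RTC_LUT[prev][target]  # one table lookup instead of per-pixel math
```

This is why memory throughput, rather than raw FLOPS, is the cheaper resource to spend: the per-refresh cost is just reads from the table.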
None of this is confirmable, but NVIDIA's reply of "color processing" is quite accurate, since that's a plain-English umbrella for keeping the pixel color values correct and as saturated as possible, without temporal/dynamic side effects (from variable refresh, from inversion, from strobing, from RTC).