
Posts by TranquilTempest

I don't think the journalist is lying, but it doesn't sound like there was much time to dig into the workings and capabilities of the demo, so there may be some confusion about what it was intended to demonstrate. However the demo actually works, it looks identical to just enabling v-sync on a fixed 50hz monitor.
At CES, AMD had one laptop displaying the windmill demo at 30fps@60hz v-sync, and a second laptop displaying 50fps@50hz v-sync, to simulate the animation smoothness of g-sync or free-sync at 50fps. If you don't believe me, watch the slow motion video anandtech took of the demo. AMD didn't demonstrate refresh rate changing in response to variable framerate, and they didn't demonstrate the display waiting longer for the next frame without being told how long to wait...
I think a framerate cap implemented in the game engine would have less latency than driver- or GPU-level frame pacing, because the GPU would pick up each frame as soon as the CPU finishes it, instead of the finished frame sitting idle until the driver is ready to grab another one.
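Something like this is what I mean by an in-engine cap (just a rough sketch, the function names are made up, not from any real engine):

```cpp
// Minimal sketch of an in-engine framerate cap (hypothetical code, not from
// any real engine). The wait happens *before* the CPU starts the next frame,
// so a finished frame goes straight to the GPU instead of sitting in the
// driver's queue waiting to be picked up.
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

int main() {
    const double target_fps = 60.0;
    const auto frame_budget = std::chrono::duration_cast<Clock::duration>(
        std::chrono::duration<double>(1.0 / target_fps));
    auto next_frame_start = Clock::now();

    for (int frame = 0; frame < 600; ++frame) {          // ~10 seconds at 60 fps
        std::this_thread::sleep_until(next_frame_start);  // cap: delay the *start* of CPU work
        next_frame_start += frame_budget;

        // simulate_frame();  // CPU work: input, game logic, build command buffers
        // submit_to_gpu();   // GPU picks the frame up as soon as the CPU is done
    }
    return 0;
}
```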
eDP is in zero desktop monitors, and AMD hasn't actually demonstrated eDP adapting to a changing framerate on a frame-by-frame basis; they just showed it running at a static 50hz at CES. If there have been updates, I'm all ears.
ASICs take millions of dollars up front and a lot of time to develop, but are relatively inexpensive per chip in high volumes. An FPGA can be programmed with whatever logic you want with very little initial development cost or time, but FPGAs are much more expensive per chip. Generally FPGAs are used for prototyping and low-volume niche stuff. Nvidia used them in the initial g-sync release because early adopters are willing to pay a premium, and there's nobody else on the...
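To put very rough numbers on it, here's a toy break-even calculation; every cost figure in it is a made-up assumption for illustration, not an actual G-Sync or scaler cost:

```cpp
// Rough FPGA-vs-ASIC break-even sketch. All numbers are invented assumptions,
// only the shape of the tradeoff (NRE vs. per-unit cost) is the point.
#include <cstdio>

int main() {
    const double asic_nre       = 5000000.0; // up-front design/mask cost (assumed)
    const double asic_unit_cost = 5.0;       // per-chip cost in volume (assumed)
    const double fpga_nre       = 100000.0;  // mostly engineering time (assumed)
    const double fpga_unit_cost = 80.0;      // per-chip cost (assumed)

    // Volume at which total ASIC cost drops below total FPGA cost.
    double breakeven = (asic_nre - fpga_nre) / (fpga_unit_cost - asic_unit_cost);
    std::printf("FPGA is cheaper below roughly %.0f units, ASIC above.\n", breakeven);
    return 0;
}
```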
Shipping freesync monitors in 2015 is possible, but may be too optimistic. It all depends on when the big display makers started working on a new ASIC. They won't go the FPGA route like Nvidia because of the risk of Nvidia killing their profit margins with an ASIC implementation of G-Sync. An ASIC takes a LONG TIME to develop and validate, and even longer to get into a shipping product. 2 years from the start of development is pretty normal, and if they waited for a...
The important thing is that there isn't an already-implemented, free, open-source solution, and there won't be one any time soon.
If you want low latency, you want the monitor to be waiting on the GPU, and the GPU/driver to be waiting on the CPU/game engine. If the monitor is the bottleneck at 60hz with v-sync, g-sync, or freesync, you add latency between the GPU and monitor, as well as between the CPU and GPU. For example: the CPU takes 7 ms to calculate a frame, then waits 9.6 ms before the GPU is ready to start working on that frame. The GPU takes 10 ms to render the frame, then waits with a finished frame...
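Plugging those same numbers in (7 ms CPU, 10 ms GPU, ~16.7 ms per refresh at 60hz), a quick back-of-the-envelope comparison looks like this; it's a simplification of how real drivers actually buffer frames:

```cpp
// Back-of-the-envelope latency sketch using the numbers from the post
// (7 ms CPU, 10 ms GPU, 60 Hz display => ~16.7 ms per refresh).
// Purely illustrative; real pipelines buffer differently per driver.
#include <cstdio>

int main() {
    const double cpu_ms     = 7.0;
    const double gpu_ms     = 10.0;
    const double refresh_ms = 1000.0 / 60.0; // ~16.7 ms at 60 Hz

    // Display is the bottleneck (v-sync at 60hz): each stage gets stretched to
    // one full refresh interval, so finished work sits idle waiting downstream.
    double cpu_wait      = refresh_ms - cpu_ms; // CPU idles before the GPU accepts the frame
    double gpu_wait      = refresh_ms - gpu_ms; // finished frame waits for the next scanout
    double vsync_latency = cpu_ms + cpu_wait + gpu_ms + gpu_wait;

    // GPU (or CPU) is the bottleneck: each stage starts as soon as the previous
    // one finishes, so latency is just the sum of the work times.
    double unblocked_latency = cpu_ms + gpu_ms;

    std::printf("display-limited pipeline: %.1f ms from start of CPU work to scanout\n",
                vsync_latency);
    std::printf("GPU-limited pipeline:     %.1f ms\n", unblocked_latency);
    return 0;
}
```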
I expect we will still see some more 28nm maxwell designs in the consumer desktop segment, with 20nm first going to either very high end desktop/compute parts, or to mobile. Sure, mainstream parts will eventually go 20nm, but not until yields improve to the point where it's cheaper per gate than staying at 28nm.
20nm is still more expensive per gate than 28nm, and until that changes you'll only see 20nm used in premium products, where absolute performance is more important than performance per dollar (and where there's a bigger profit margin to work with). Past R&D spending is a sunk cost; it doesn't justify switching to a new node, it just allows it.
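As a toy example of why a denser node isn't automatically cheaper per gate (the wafer prices and gate counts below are invented, purely for illustration):

```cpp
// Toy cost-per-gate comparison. All figures are made-up assumptions; the point
// is only that a pricier, lower-yielding wafer can cancel out a density gain.
#include <cstdio>

int main() {
    struct Node { const char* name; double wafer_cost; double gates_per_wafer; };
    const Node nodes[] = {
        {"28nm", 4000.0, 4.0e12}, // assumed wafer price and usable (yielded) gate count
        {"20nm", 7000.0, 6.0e12}, // denser, but the wafer costs more and yields worse
    };
    for (const Node& n : nodes) {
        std::printf("%s: %.2f dollars per billion gates\n",
                    n.name, n.wafer_cost / n.gates_per_wafer * 1e9);
    }
    return 0;
}
```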