
Posts by TranquilTempest

It will be a while before we see a performance benefit from DDR4, since it's expected to carry a latency penalty over DDR3 at the same clocks.
Single-thread performance will be very close to a normal Haswell at the same clock, and you have 40 lanes of PCIe 3 supplied by the CPU, so why are you complaining about the extra PCIe 2 lanes on the PCH?
I can see them being useful for streamers who want a better compression ratio; I'd certainly buy a 6- or 8-core Haswell if I were spending more than $500 on my GPU.
The 4770 is already faster than a 9590 in most scenarios, and with 8-core Haswell-E, AMD is unlikely to retain a performance advantage anywhere. I expect an 8-core Haswell to be about double the performance of a 9590, but not across the board: in some cases it will be better than double (FPU-heavy workloads), and in others between 1x and 2x (integer-heavy workloads like 7-Zip). The big question is price. Yes, it's going to be...
I don't think the journalist would be lying, but it doesn't sound like there was much time to dig into the workings and capabilities of the demo, so there may be some confusion about what it was intended to demonstrate. However the demo works internally, it looks identical to just enabling v-sync on a fixed 50 Hz monitor.
At CES, AMD had one laptop displaying the windmill demo at 30 fps with 60 Hz v-sync, and a second laptop displaying 50 fps with 50 Hz v-sync, to simulate the animation smoothness of G-Sync or FreeSync at 50 fps. If you don't believe me, watch the slow-motion video AnandTech took of the demo. AMD didn't demonstrate the refresh rate changing in response to a variable framerate, and they didn't demonstrate the display waiting longer for the next frame without being told how long to wait...
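A toy model shows why the second laptop looks smooth without any adaptive-sync magic (the timing model here is my own simplification): with v-sync on a fixed-refresh panel, a finished frame is held until the next refresh boundary, so a steady 50 fps on a 50 Hz panel lands exactly on every refresh.

```python
import math

# Fixed-refresh display with v-sync: a frame that finishes rendering at
# time t (ms) is shown at the next refresh boundary at or after t.
def present_times(frame_ready_ms, refresh_hz):
    period = 1000.0 / refresh_hz
    return [math.ceil(t / period) * period for t in frame_ready_ms]

# Frames finishing every 20 ms (a steady 50 fps):
ready = [i * 20.0 for i in range(5)]

# On a 50 Hz panel every frame lands exactly on a refresh: a perfectly
# even 20 ms cadence, which is all the CES demo needed to show.
print(present_times(ready, 50))   # [0.0, 20.0, 40.0, 60.0, 80.0]

# On a 60 Hz panel the same frames get quantized to ~16.67 ms boundaries,
# so some frames persist for two refreshes and others for one: judder.
print(present_times(ready, 60))
```

The point is that a static 50 Hz panel fed a locked 50 fps is indistinguishable from adaptive sync at a constant 50 fps; only a *varying* framerate would separate the two.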
I think a framerate cap implemented in the game engine would have lower latency than driver- or GPU-hardware frame pacing, because the GPU would pick the frame up as soon as the CPU finished working on it, instead of the frame sitting idle until the driver is ready to grab another one.
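As a toy model of that argument (all numbers hypothetical, and the model ignores GPU render time, which is the same in both cases): if the engine caps itself, input-to-submission latency is just the CPU frame time; if a driver-side pacer throttles a free-running engine, the finished frame waits out the rest of the cap period.

```python
# One frame at a capped framerate. cpu_ms is how long the CPU takes to
# build the frame; cap_ms is the cap period. Illustrative figures only.

def engine_cap_latency(cpu_ms, cap_ms):
    # Engine-side cap: input is sampled at the start of the cap period,
    # and the GPU picks the frame up the moment the CPU finishes it.
    return cpu_ms

def driver_pace_latency(cpu_ms, cap_ms):
    # Driver-side pacing: the engine runs uncapped, the CPU finishes in
    # cpu_ms, then the frame sits idle until the pacer releases it at the
    # end of the cap period.
    return cpu_ms + (cap_ms - cpu_ms)   # == cap_ms

cap = 1000 / 60   # 60 fps cap -> ~16.67 ms period
cpu = 6.0         # hypothetical 6 ms of CPU work per frame
print(engine_cap_latency(cpu, cap))   # 6.0 ms input-to-GPU
print(driver_pace_latency(cpu, cap))  # ~16.67 ms input-to-GPU
```

In this sketch the driver-paced path always costs a full cap period of latency, while the engine-side cap only costs the CPU frame time.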
eDP is in zero desktop monitors, and AMD hasn't actually demonstrated eDP adapting to a changing framerate on a frame-by-frame basis; they just showed it running at a static 50 Hz at CES. If there have been updates, I'm all ears.
ASICs take millions of dollars up front and a lot of time to develop, but are relatively inexpensive per chip in high volumes. FPGAs can be programmed with whatever logic you want with very little initial development cost or time, but they are much more expensive per chip. Generally FPGAs are used for prototyping and low-volume niche stuff. Nvidia used them in the initial G-Sync release because early adopters are willing to pay a premium, and there's nobody else on the...
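The trade-off boils down to a simple break-even calculation (every dollar figure below is made up purely for illustration):

```python
import math

# Break-even volume between an ASIC (large up-front NRE cost, cheap per
# unit) and an FPGA (negligible NRE, expensive per unit). Below the
# break-even volume the FPGA is cheaper overall; above it, the ASIC wins.
def breakeven_units(nre_asic, unit_asic, unit_fpga):
    return math.ceil(nre_asic / (unit_fpga - unit_asic))

# Hypothetical numbers: $2M of ASIC development, $5/chip ASIC vs $50/chip FPGA.
print(breakeven_units(2_000_000, 5, 50))   # 44445 units
```

That's why an FPGA makes sense for a premium early-adopter product like the first G-Sync modules, but not for mass-market monitor scalers.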
Shipping FreeSync monitors in 2015 is possible, but may be too optimistic. It all depends on when the big display makers started working on a new ASIC. They won't go the FPGA route like Nvidia, because of the risk of Nvidia killing their profit margins with an ASIC implementation of G-Sync. An ASIC takes a LONG TIME to develop and validate, and even longer to get into a shipping product. Two years from the start of development is pretty normal, and if they waited for a...