Originally Posted by caswow
What methodology do devs need to use if they want to make use of Nvidia's "advanced" architecture? And I mean something really useful, not over-tessellation. And what segmentation do you mean? Don't you think Nvidia will implement more async compute in their next architecture to boost perf? Because I think I know what direction you are heading...
Well, that is the possible concern. As it stands now, Nvidia isn't invested heavily in this Async Compute situation. We don't know what Pascal will be yet, so we can't say what they are going to do. However, it has been argued that under DX12 developers get more control over a game's actual performance, and less is left to the GPU manufacturer. So if Nvidia decides to take another approach, whatever those options might be (who knows), we could have two very different philosophies. Maybe even more so than now...
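To make the "developer control" point concrete: in DX12, async compute isn't a driver toggle, it's something the developer opts into by creating a separate compute queue alongside the normal graphics queue and submitting work to both. A minimal sketch (the function name and error handling are my own illustration, not from any shipping engine):

```cpp
// Minimal sketch, assuming a valid ID3D12Device* already exists.
// DX12 exposes "async compute" by letting the app create a compute
// queue next to the direct (graphics) queue; whether the two actually
// overlap on the hardware is entirely up to the GPU and driver.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

HRESULT CreateQueues(ID3D12Device* device,
                     ComPtr<ID3D12CommandQueue>& graphicsQueue,
                     ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Direct queue: accepts graphics, compute, and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    HRESULT hr = device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));
    if (FAILED(hr)) return hr;

    // Separate compute queue: work submitted here *may* run concurrently
    // with the direct queue -- this is the "async compute" path.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    return device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```

The API only promises the queues exist; how much the hardware overlaps them is vendor-specific, which is exactly why AMD and Nvidia could diverge so hard here.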
Why that could all be a potential concern is this: if AMD goes heavy on Async Compute support, and Nvidia does XYZ SomethingForUs, you now have developers with two very clear paths. Do they have the funding to support full development and optimization for both of those unique paths? Did Nvidia, with their trucks of cash, flat-out buy a developer?
If Nvidia and AMD can't make huge impacts with drivers, what happens if two clear paths emerge and a developer takes just one? This isn't even a case of two different APIs going to war, but of different paths within a single API.
It leaves an extreme amount of room for developer bias, if DX12 really does lock the GPU manufacturer out of performance tuning as much as some claim. We think we see heavy bias in games now; I can't imagine what it would look like if a developer didn't give equal treatment and the left-out party couldn't make extreme driver improvements on their own.
Originally Posted by HalGameGuru
Any monitor manufacturer can make a FreeSync-supporting monitor: if it has Adaptive-Sync and an AMD GPU can make use of it, FreeSync will work. FreeSync is merely the AMD implementation of Adaptive-Sync, which they pushed to have included in the VESA spec rather than pushing a bespoke piece of hardware. Anyone can make use of Adaptive-Sync; FreeSync is merely what Adaptive-Sync is called when an AMD GPU is making use of it. There is no added cost for a monitor manufacturer to put out a DP-spec-compliant monitor that will work with an AMD GPU under FreeSync, and Intel's future with Adaptive-Sync will only push the tech further and make it more ubiquitous and inexpensive.
TressFX and Mantle stand on their own, and both have a media history documenting their accessibility and the avenues left open to other manufacturers for implementation.
I'm seeing a lot of inductive reasoning being accepted without much question, but AMD's history of putting out specs and technologies that the industry as a whole can make use of is the bridge too far?
Actually, AMD has their own validation requirements and tests they run specifically for FreeSync, as FreeSync is specific to AMD. What the default DP spec for Adaptive-Sync provides isn't enough for FreeSync to work as FreeSync is marketed. It requires a hell of a lot of R&D and tuning to get done; Nixeus has commented on this heavily.
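A concrete example of "the base spec isn't enough": plain Adaptive-Sync just gives you a refresh range the panel reports, but handling frame rates that drop below that range (low framerate compensation) is something the FreeSync side has to add on top. A rough illustrative sketch of that idea, not AMD's actual driver logic (all names here are my own):

```cpp
// Illustrative sketch only -- not AMD's driver code. Shows the kind of
// logic layered on top of the bare Adaptive-Sync range a panel reports:
// below the panel's minimum refresh, frames are repeated so the effective
// refresh stays inside the supported range (low framerate compensation).
// Assumes maxHz >= 2 * minHz, the usual headroom LFC needs to work.
struct VrrRange { double minHz; double maxHz; };     // reported by the monitor

struct VrrDecision { double refreshHz; int repeats; };

VrrDecision planRefresh(double frameRateHz, VrrRange panel)
{
    if (frameRateHz >= panel.maxHz)      // rendering faster than the panel: cap
        return { panel.maxHz, 1 };
    if (frameRateHz >= panel.minHz)      // inside the range: match 1:1
        return { frameRateHz, 1 };

    // Below the panel's floor: repeat each frame enough times to land
    // back inside the range -- e.g. 24 fps on a 40-144 Hz panel is
    // shown twice per frame, so the panel refreshes at 48 Hz.
    int repeats = 2;
    while (frameRateHz * repeats < panel.minHz)
        ++repeats;
    return { frameRateHz * repeats, repeats };
}
```

Notice the assumption baked in: a panel whose max refresh isn't comfortably above its minimum can't do frame multiplication cleanly, which is part of why per-monitor validation and tuning matters so much.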
So while Intel picking up Adaptive-Sync and going with their own VRR implementation will be great, it won't really directly impact FreeSync specifically, as FreeSync is an entirely separate product/offering/process.
You have the umbrella of VRR. Under that you have the different offerings:
- FreeSync (AMD's offering)
- G-Sync (Nvidia's offering)
- In-Sync* (Intel's, as it's referred to here on OCN)