Originally Posted by lolfail9001
Joke's on you: Pascal turned into quad core CPU in this regard
Async shaders != async compute. In fact, they are two completely different things: one is AMD's marketing term for SME, the other is an actual requirement of the DX12 spec.
Also, believe it or not, the async compute benefit is measurable: http://www.computerbase.de/2016-05/geforce-gtx-1080-test/11/
First graph, bless Fury X's async compute support that leads to drops in performance.
And yes, evidently it does get a boost by usage of it.
Do you even know what you're talking about?
We can argue semantics all you like, but anyone with common sense knows the purpose of async compute is to improve the performance/efficiency of games. Otherwise there's literally no point in developers putting any effort into it.
The point of it is so devs can move their engines more and more onto the compute side, and the hardware can then do graphics + compute in parallel. Pascal cannot do graphics and compute at the same time, so what's the point of Nvidia's "version" other than PR? To put it simply: Pascal just won't attempt to context switch the way Maxwell does when a flag tells it to, which means it won't lose performance while running async code, and may get a tiny boost (still within margin of error) from a brute-force approach.
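The argument above (parallel graphics + compute helps, Maxwell-style context switching hurts, Pascal roughly breaks even) can be sketched with a toy per-frame timing model. This is not real GPU code; the workload numbers and the switch cost are illustrative assumptions, not measurements:

```python
# Toy timing model of one frame's graphics work (g) and compute work (c), in ms.
# Purely illustrative -- the values below are made up, not benchmarks.

def frame_time_concurrent(g, c):
    """GCN-style async compute: compute fills idle units while
    graphics runs, so the frame costs roughly the longer of the two."""
    return max(g, c)

def frame_time_serialized(g, c, switch_cost=0.0):
    """Work runs back to back. Maxwell-style context switching adds a
    switch penalty on top; Pascal-style handling is roughly
    switch_cost = 0, i.e. it doesn't lose time but gains little."""
    return g + c + switch_cost

g, c = 12.0, 4.0  # hypothetical per-frame workloads

print(frame_time_concurrent(g, c))        # 12.0 -> parallel execution gains
print(frame_time_serialized(g, c, 1.5))   # 17.5 -> Maxwell-style switch penalty
print(frame_time_serialized(g, c, 0.0))   # 16.0 -> Pascal: no loss, no real gain
```

The model also reflects the AOTS/Hitman point: if the compute share c is tiny relative to g, even the concurrent case barely beats the serialized one, so a game that hardly uses async compute shows little measurable difference either way.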
Also, for what feels like the thousandth time, AOTS hardly utilizes async compute at all; no game at the moment really does. The only one that uses it a noticeable amount (still not even 10%) is Hitman.
Take a look at Mahigan's thread/posts; you'll get a far better idea of what's going on than I could explain.

Edited by GorillaSceptre - 6/2/16 at 2:15am