Originally Posted by GorillaSceptre
Guys, come on...
It's been stated numerous times, and by the devs themselves, that the amount of async in these games is negligible. It's pointless saying "But, but, I thought Nvidia was doomed."
The only reason AMD has improved a bit is due to reduced driver overhead; it has nothing to do with async. At this point, no game uses enough async to show off AMD's parallelism. When games do start using it (and they will; quite a few have already confirmed they are), Nvidia's hardware will take a hit. It's not a question of if, but when.
This is a wholly illogical viewpoint. If asynchronous compute is being minimally utilized in these DX12 benchmarks, then we literally have nothing demonstrating its worth whatsoever. The only reason async compute became such a hot topic in the first place is because AOTS was supposedly using it and giving AMD the edge. If that is not the case, then you can't determine what impact async compute will have, if any. As such, it remains a simple buzzword at this point; more of an "if" than a "when". Or perhaps more accurately, it remains a "how much".
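For anyone unclear on what's actually being debated: at the API level, "async compute" in D3D12 just means submitting work through a second, compute-type command queue alongside the usual graphics queue. Whether the GPU actually overlaps the two workloads is entirely up to the hardware and driver, which is the whole GCN-vs-Maxwell question. A minimal generic D3D12 sketch (not taken from AOTS or any of these games):

```cpp
// Minimal sketch of "async compute" at the D3D12 API level: a second
// command queue of COMPUTE type created alongside the usual DIRECT
// (graphics) queue. The API only exposes the queues; actual concurrent
// execution is at the discretion of the hardware/driver.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    // The standard graphics queue every D3D12 application has.
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    // The "async" part: compute work submitted here *may* execute
    // concurrently with graphics work, if the GPU supports it.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}
```

Nothing in the API guarantees a speedup from doing this, which is exactly why the benefit has to be measured per architecture rather than assumed.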
Furthermore, Fable Legends was hyped up to be using async shaders before the benchmarks; only afterwards, when the reference 980 Ti came out on top (quite heavily in certain compute sections), did the whole "only using 5% async" revelation happen.
Which raises the important question: what makes anyone think the other upcoming async-compute-using games will be significantly different? Developers haven't given concrete numbers on the degree to which it's going to be used, nor on how much it will actually benefit different GPUs. The only demonstrations of async compute out there are both wholly uneventful and consistently minimal in their usage (the latter detail not even being initially known or communicated). Why should the other games be any different?
But it honestly doesn't really matter; by the time those "next gen" games drop, people will have more than enjoyed their 980 Tis. The only people this will affect in the future are those who don't upgrade often.
The only time you could say AMD has an advantage with async is if Pascal also has gimped hardware scheduling, or if you're planning to keep your GPU for a couple of years. At this point Nvidia has already "won"; most people have already bought 970s and 980s, and AMD's superior architecture means nothing right now because no one is using it.
You're correct that it doesn't matter to current products or resulting revenue, but incorrect (or rather baseless/extremely premature) in your assumption that AMD's architecture is "superior". As both I and anyone making excuses for the current DX12 numbers are effectively saying, there isn't a single demonstration of async compute's worth, or of GCN's resulting superiority, at the moment.
A superior architecture could be defined in many ways: perf/transistor, perf/manufacturing cost, perf/watt, etc. for the target market are examples. GCN demonstrates superiority in none of them; only massive deficits, really. I mean, honestly, take a look at the Fury X vs. the 980 Ti.
The Fury X (Fiji XT) has ~15-20% more functional transistors (accounting for the 980 Ti's cut-down GM200) plus a CLC and HBM helping it, yet it gets beaten OC-to-OC by the latter by 20-25% in performance, with no better power efficiency, even in these DX12 benchmarks that fix AMD's driver overhead and still use some amount of async compute.
Even if async compute helps Fiji by 20-30%, it won't overtake GM200 in the above metrics; it will merely get close. And async compute's impact is still a big, undemonstrated question mark at this point. Deciding GCN is superior based on nothing but conjecture (about one microarchitectural decision) is pretty foolish; it's actually massively behind by most or all demonstrated metrics, and it has a lot of ground to cover from such undemonstrated gains before it can even match Maxwell's all-around efficiency, let alone demonstrate any superiority.
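To put rough numbers on "merely get close", here's a back-of-the-envelope check using the figures from this post plus a hypothetical 25% async uplift; these are the thread's rough estimates, not measured data:

```cpp
// Back-of-the-envelope perf/transistor comparison using the rough
// figures cited above (not measured data; the async uplift is purely
// hypothetical).
#include <cstdio>

int main()
{
    // Relative to a cut-down GM200 (980 Ti) baseline of 1.0:
    const double fijiTransistors = 1.175; // ~15-20% more functional transistors
    const double fijiPerf        = 0.80;  // ~20-25% behind OC-to-OC
    const double asyncUplift     = 1.25;  // hypothetical 25% async gain

    const double perfPerTransistorNow   = fijiPerf / fijiTransistors;
    const double perfPerTransistorAsync = (fijiPerf * asyncUplift) / fijiTransistors;

    std::printf("Fiji perf/transistor vs GM200: %.2f now, %.2f with async\n",
                perfPerTransistorNow, perfPerTransistorAsync);
    // Prints ~0.68 now and ~0.85 with the uplift: the gap closes
    // substantially, but Fiji still doesn't overtake GM200 on this metric.
    return 0;
}
```

Even treating that 25% uplift as a given, Fiji would only reach rough parity in raw performance while still spending more transistors to get there.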
Given its contemporary inferiority and lack of suitability to existing software, GCN's design decisions are much more arguably inferior to Maxwell's. The supposed benefit of async compute takes up transistors that could have been spent elsewhere and hasn't even remotely demonstrated its value yet. Even without asynchronous compute, and regardless of API, Maxwell has demonstrated excellent transistor efficiency for its target market. Maybe it's an all-around superior design as a result; that's certainly the narrative actually being shown thus far.
Don't jump to baseless conclusions. Literally nothing so far supports any such conclusion, after all, remember?
There's no point downplaying async just because you're on team green; it's going to be a huge deal, and it will benefit both vendors in the future. AMD might just benefit more in the short term.
Correction: there's no logical point in hyping up a feature that doesn't have a single positive demonstration just because of speculation, the source of which (AOTS) has been discredited due to minimal async usage, thereby rendering the whole demonstration a false positive. Judging things according to actual, empirical facts, with reservations about the future, is an objective standpoint. I'm "downplaying" hype and adamant opinions based on nothing (except AOTS's false positive), not making an opposing adamant claim about asynchronous compute.
Basically, I'm saying wait and see. So far, there's nothing to see. This is a fact, so how does recognizing it make anyone "team green"? Adamantly insisting async compute is worthless based on nothing would be biased. So too would adamantly insisting async compute makes GCN superior based on nothing. Recognizing reality, with the understanding that the future may change to an unknown and unproven degree, is just being objective. Being skeptical and critical until actually shown a shred of evidence otherwise is also objective.