http://www.eteknix.com/amd-accused-of-disabling-async-compute-on-older-gpus/
Since the release of the Radeon Crimson Software Driver 16.9.2, it seems that AMD may have disabled DirectX 12 Asynchronous Compute on graphics cards based on the first-generation Graphics Core Next (GCN) architecture, despite the hardware supposedly being capable of Async Compute.
While AMD has blamed developers for disabling Async Compute on GCN 1.0 hardware - GCN 1.1 and above retain Async Compute support - in titles like Total War: Warhammer and Rise of the Tomb Raider, users of the Beyond3D forum - and this angry redditor - have determined that the problem stems from AMD's very own drivers, starting with 16.9.2.
GCN 1.0 is in the past anyway. Asynchronous shaders need 2048+ shaders before they make a noticeable difference on their own.
If it were Nvidia, you would be supporting G-Sync: compatibility is exclusive, but the standard is uniform. Same here. Fewer GPUs are supported, but there is less variance between support levels. I would prefer AMD to highlight it better. There is a clear benefit for weak laptop CPUs and high-end desktop GPUs, but far less distinguishable benefit for everything in between.
In that case I can report Rise of the Tomb Raider as well, since it avoids the complicated asynchronous shader stuff too.
As in AotS, the Glacier engine uses DirectX 12 to reduce API overhead and Async Compute to raise performance when GPU-limited. Unfortunately, the feature cannot be switched off separately in Hitman.
Please compare that with the 380. Async still isn't doing much - that is my point.
Post #4. Yes.
Originally Posted by budgetgamer120
You are wrong when you say it does nothing.
http://m.hardocp.com/article/2016/04/19/dx11_vs_dx12_intel_cpu_scaling_gaming_framerate/3#.WEhPNdITHqA
Honestly, I think AMD's GPUs are not calibrated well enough by the GPU vendor. Nvidia has its cache acting as a memory buffer that tempers memory latency, yet I gained 10% performance just by increasing VRAM voltage at the same clocks. OEMs should cover such intricacies of GDDR5. There is so much one can learn by trial and error that vendors could easily pinpoint, unless they are artificially stymieing performance for more profit.
That is because it raises the utilization rate. Performance is a dimensionless scale, not a feature.
Originally Posted by Asmodian
I think it is only a PR issue, not a real one. By that I mean that it sounds bad but doesn't really hurt anyone's performance.
Async Compute is overrated; it only doubles performance in test tools that run the graphics and compute pipelines "single threaded". Normally the graphics pipeline uses most of the GPU's resources by itself, so even if you can run compute in parallel there aren't many idle cores for it to use. Async Compute is simply a way to optimize automatically, and if it breaks things, disabling it is a low-cost fix.
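For anyone wondering what "running compute in parallel" looks like at the API level, here is a minimal sketch (my own simplified example, not from any engine or driver): the application just creates a second command queue of type COMPUTE next to the usual DIRECT queue, and whether the two actually overlap on the GPU is entirely up to the driver and hardware.

Code:
// Minimal sketch: "async compute" in D3D12 means submitting work to a
// second command queue of type COMPUTE alongside the usual DIRECT queue.
// Error handling omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // Whether work on computeQueue actually overlaps with the graphics
    // queue is up to the driver and hardware; the API makes no promise,
    // which is why a driver update can quietly serialize it on some GPUs.
}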
That said, anything that enables automatic optimization is a good thing; realistically, optimizing everything perfectly for every GPU available is impossible, so it is better to have async compute functional and working well.
As an aside, I find it very odd that async compute is not part of any DX12 feature level; maybe that is because of corner cases like these?
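To illustrate that point, here is roughly what DX12 does let you query (again a sketch of my own, not shipping code): CheckFeatureSupport exposes tiers for resource binding, tiled resources, conservative rasterization and so on, plus the maximum supported feature level, but there is no capability bit an application could read to find out whether graphics and compute queues actually execute concurrently.

Code:
// Sketch: what CheckFeatureSupport can tell you. There is no field
// anywhere that says "compute queues run concurrently with graphics".
#include <d3d12.h>

void ReportCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));
    // options exposes ResourceBindingTier, TiledResourcesTier,
    // ConservativeRasterizationTier, ROVsSupported, ... but nothing
    // about async compute.

    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0 };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = (UINT)(sizeof(requested) / sizeof(requested[0]));
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                &levels, sizeof(levels));
    // levels.MaxSupportedFeatureLevel is the highest supported level;
    // none of 11_0 through 12_1 require or advertise async compute.
}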