Quote:
Originally Posted by philhalo66 View Post

I admit i don't know anything about AMD cards anymore but it seems like a dirty move to disable features.
If it were Nvidia, you would be supporting G-Sync: compatibility is exclusive, but the standard is uniform. Same here. Fewer GPUs are supported, but there is less variance between support levels. I would prefer AMD to highlight it better. There is a clear benefit for weak laptop CPUs paired with powerful desktop GPUs, but far less distinguishable benefit for everything in between.
PS: For instance, in this test a triple-core laptop processor @ 2.0 GHz could potentially be faster than a quad-core desktop processor @ 4.4 GHz! Yet again, the tool is in the wrong hands. AMD isn't really a contender in the performance laptop segment, nor in the enthusiast GPU segment.
 
Quote:
Originally Posted by budgetgamer120 View Post

These test are done with the drivers that disable asynch.
In that case I can report Rise of the Tomb Raider as well, since it avoids the complicated asynchronous shader stuff too.

Kidding:
Quote:
As in AotS, the Glacier engine uses DirectX 12 to reduce API overhead and Async Compute to speed up GPU-limited scenarios. Unfortunately, in Hitman the feature cannot be switched off separately.
 
Quote:
Originally Posted by budgetgamer120 View Post

Did you read the post in the news?

I'm not sure what you are trying to say. Tomb Raider doesn't have async, as you said, so why post benchmarks of it to prove async does nothing for the 7970?
Please compare that with the 380. Async still isn't doing much; that is my point.
 
Quote:
Originally Posted by kwee View Post

I know... but it was sold as a new product, and a new product should still get support.
Honestly, I think AMD's GPUs are not calibrated well enough by the board vendors. Nvidia has its cache acting as a memory buffer that masks memory latency, yet I just gained 10% performance simply by increasing VRAM voltage at the same clocks. OEMs should cover such intricacies of GDDR5. There is so much one can learn by trial and error that vendors could easily pinpoint themselves, unless they are artificially stymying performance for more profit.
 
Quote:
Originally Posted by Asmodian View Post

I think it is only a PR issue, not a real one. By that I mean that it sounds bad but doesn't really hurt anyone's performance.

Async Compute is overrated; it only doubles performance in test tools that run "single threaded" graphics and compute pipelines. Normally the graphics pipeline uses most of the GPU's resources by itself, so even if you can run compute in parallel there aren't many idle cores for it to use. Async Compute is simply a way to optimize automatically; if it breaks things, disabling it is a low-cost fix.

That said, anything that enables auto-optimization is a good thing. Realistically, optimizing everything perfectly for every available GPU is impossible, so it is better to have async compute functional and working well.

As an aside, I find it very odd that async compute is not part of any DX12 feature level; maybe because of corner cases like these?
That is because it raises the utilization rate. Performance is a continuous scale, not a feature.
It is like tessellation, but it uses the compute shader instead of the various forms of geometry shader. One of the futuristic uses of tessellation is antialiasing, but the penalty on old hardware is that the rasterizer is not capable enough to handle the geometry load. Same here: you need a substantial shader count before streamlining the flush cycles amounts to anything.
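To make that concrete: in D3D12, "async compute" simply means the application creates a second command queue of type COMPUTE alongside the usual direct (graphics) queue and lets the driver decide whether to overlap the two. Rough sketch below; this is my own illustration with made-up names, not code pulled from any of the games or drivers being discussed:

Code:
// Create a graphics queue and a separate compute queue on an existing
// ID3D12Device. Whether work on the compute queue actually runs in
// parallel with graphics is entirely up to the driver and hardware,
// which is also why there is no feature-level bit to query for it.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

HRESULT CreateQueues(ID3D12Device* device,
                     ComPtr<ID3D12CommandQueue>& graphicsQueue,
                     ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Direct queue: accepts draw, compute, and copy command lists.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    HRESULT hr = device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));
    if (FAILED(hr)) return hr;

    // Compute queue: compute and copy work only. Submitting here is what
    // engines call "async compute"; a driver that serializes it behind the
    // direct queue is still conformant, it just gains nothing from it.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    return device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
}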
 