Is Ashes of the Singularity biased?
Ashes of the Singularity is the first DX12 game on the market, and the performance delta between AMD and Nvidia is going to court controversy from fans of both companies. We won’t know whether its results are typical until we see more DX12 games on the market. But is the game intrinsically biased to favor AMD? I think not — for multiple interlocking reasons.
First, there’s the fact that Oxide shares its engine source code with both AMD and Nvidia, and has invited both companies to review and suggest changes for most of the time Ashes has been in development. The company’s Reviewer’s Guide includes the following:
[W]e have created a special branch where not only can vendors see our source code, but they can even submit proposed changes. That is, if they want to suggest a change our branch gives them permission to do so…
This branch is synchronized directly from our main branch so it’s usually less than a week from our very latest internal main software development branch. IHVs are free to make their own builds, or test the intermediate drops that we give our QA.
Oxide also directly addresses the question of whether it optimizes for specific vendors or graphics architectures:
Oxide primarily optimizes at an algorithmic level, not for any specific hardware. We also take care to avoid the proverbial known “glass jaws” which every hardware has. However, we do not write our code or tune for any specific GPU in mind. We find this is simply too time consuming, and we must run on a wide variety of GPUs. We believe our code is very typical of a reasonably optimized PC game.
We reached out to Dan Baker of Oxide regarding the decision to turn asynchronous compute on by default for both companies and were told the following:
“Async compute is enabled by default for all GPUs. We do not want to influence testing results by having different default setting by IHV, we recommend testing both ways, with and without async compute enabled. Oxide will choose the fastest method to default based on what is available to the public at ship time.”
Second, we know that asynchronous compute takes advantage of hardware capabilities AMD has been building into its GPUs for a very long time. The HD 7970, which launched in 2012, was AMD’s first card with an asynchronous compute engine. You could even argue that devoting die space and engineering effort to a feature that wouldn’t be useful for four years was a bad idea, not a good one. AMD has consistently said that some of the benefits of its older cards would appear in DX12, and that appears to be what’s happening.
Asynchronous compute is not itself part of the DX12 specification, but it’s one method of implementing DirectX 12’s multi-engine capability, and multiple engines are explicitly part of the DX12 specification. How these engines are implemented may well affect relative performance between AMD and Nvidia, but they’re one of the advantages of using DX12 as compared with previous APIs.
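To make the distinction concrete, here is a minimal sketch (my own illustration, not Oxide’s code) of what DX12’s multi-engine model looks like at the API level. The application creates separate queues for the “3D” (direct) engine and the compute engine; whether work submitted to the compute queue actually executes concurrently with graphics work is left to the driver and hardware, which is exactly where AMD’s and Nvidia’s behavior diverges:

```cpp
// Minimal DX12 multi-engine sketch: create a direct (3D) queue and a
// separate compute queue on the same device. Error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateEngineQueues(ID3D12Device* device,
                        ComPtr<ID3D12CommandQueue>& directQueue,
                        ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // The "3D" engine queue: accepts graphics, compute, and copy work.
    D3D12_COMMAND_QUEUE_DESC directDesc = {};
    directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&directDesc, IID_PPV_ARGS(&directQueue));

    // The compute engine queue: accepts compute and copy work only.
    // Whether submissions here overlap with the 3D queue's work is up
    // to the driver and GPU -- the API only guarantees the queues are
    // independent submission streams, synchronized via fences.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```

Nothing in this code forces concurrent execution; a driver is free to serialize the two queues, which is why identical DX12 code can behave very differently on different hardware.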
Third, every bit of independent research on this topic has confirmed that AMD and Nvidia have profoundly different asynchronous compute capabilities. Nvidia’s own slides illustrate this as well. Nvidia cards cannot handle asynchronous workloads the way that AMD’s can, and the differences between how the two cards function when presented with these tasks can’t be bridged with a few quick driver optimizations or code tweaks. Beyond3D forum member and GPU programmer Ext3h has written a guide to the differences between the two platforms — it’s a work-in-progress, but it contains a significant amount of useful information.
Fourth, Nvidia PR has been silent on this topic. Questions about Maxwell and asynchronous compute have been bubbling for months; we’ve requested additional information on several occasions. Nvidia is historically quick to respond to either incorrect information or misunderstandings, often by making highly placed engineers or company personnel available for interview. The company has a well-deserved reputation for being proactive in these matters, but we’ve heard nothing through official channels.
Fifth and finally, we know that AMD GPUs have always had enormous compute capabilities. Those capabilities haven’t always been displayed to their best advantage for a variety of reasons, but they’ve always existed, waiting to be tapped. When Nvidia designed Maxwell, it prioritized rendering performance — there’s a reason the company’s highest-end Tesla SKUs are still based on Kepler (the architecture behind the GTX 780 Ti and Titan Black).
It’s fair to say that the Nitrous Engine’s design runs better on AMD hardware — but there’s no proof that the engine was designed to disadvantage Nvidia hardware, or to prevent Nvidia cards from executing workloads effectively.