
[Various] Ashes of the Singularity DX12 Benchmarks

#1 ·
A relevant blog post from the game's creators, Oxide: The birth of a new API

PCPerspective

ExtremeTech

EuroGamer

Legit Reviews

Computerbase.de


Thanks to @Mahigan for the insights and legwork!

An Oxide rep responds to address the various discrepancies seen in the benchmark:
Quote:
Originally Posted by Kollock View Post

Wow, there are lots of posts here, so I'll only respond to the last one. The interest in this subject is higher than we thought. The primary purpose of the benchmark is our own internal testing, so it's pretty important that it be representative of the gameplay. To keep things clean, I'm not going to make very many comments on the concept of bias and fairness, as it can completely go down a rat hole.

Certainly I could see how one might think that we are working more closely with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from Nvidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;( ). Nvidia was actually a far more active collaborator over the summer than AMD was. If you judged from email traffic and code check-ins, you'd draw the conclusion we were working more closely with Nvidia than with AMD. ;)
As you've pointed out, there does exist a marketing agreement between Stardock (our publisher) and AMD for Ashes. But this is typical of almost every major PC game I've ever worked on (Civ 5 had a marketing agreement with Nvidia, for example). Without getting into the specifics, I believe the primary goal of AMD is to promote D3D12 titles, as they have also lined up a few other D3D12 games.

If you use this metric, however, given Nvidia's promotions with Unreal (and integration with GameWorks), you'd have to say that every Unreal game is biased, not to mention virtually every game that's commonly used as a benchmark, since most of them have a promotion agreement with someone. Certainly, one might argue that Unreal being an engine with many titles should give it particular weight, and I wouldn't disagree. However, Ashes is not the only game being developed with Nitrous. It is also being used in several additional titles right now, the only announced one being the Star Control reboot (which I am super excited about, but that's a completely different topic ;) ).

Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only vendor-specific code is for Nvidia, where we had to shut down async compute. By vendor-specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature as functional, but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware. As far as I know, Maxwell doesn't really have async compute, so I don't know why their driver was trying to expose it. The only other thing that is different between them is that Nvidia falls into Tier 2 class binding hardware instead of Tier 3 like AMD, which requires a little more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor-specific path, as it's responding to capabilities the driver reports.

From our perspective, one of the surprising things about the results is just how good Nvidia's DX11 perf is. But that's a very recent development, with huge CPU perf improvements over the last month. Still, DX12 CPU overhead is far, far better on Nvidia, and we haven't even tuned it as much as DX11. The other surprise is the min frame times, with the 290X beating out the 980 Ti (as reported on Ars Technica). Unlike DX11, minimum frame times are mostly an application-controlled feature, so I was expecting them to be close to identical. This would appear to be GPU-side variance rather than software variance. We'll have to dig into this one.

I suspect that one thing helping AMD on GPU performance is that D3D12 exposes async compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic, where we just took a few compute tasks we were already doing and made them asynchronous; Ashes really isn't a poster child for advanced GCN features.

Our use of async compute, however, pales in comparison to some of the things the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using async compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these GCN-built and GCN-optimized engines start coming to the PC. I don't think Unreal titles will show this very much, though, so likely we'll have to wait and see. Has anyone profiled Ark yet?

In the end, I think everyone has to give AMD a lot of credit for not objecting to our collaborative effort with Nvidia, even though the game had a marketing deal with AMD. They never once complained about it, and it certainly would have been within their rights to do so. (Complain, anyway; we would have still done it. ;) )

--
P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion arose because Nvidia PR was putting pressure on us to disable certain settings in the benchmark; when we refused, I think they took it a little too personally.
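
For anyone curious, here's roughly what the vendor-ID check and the capability query Kollock describes might look like in D3D12. This is an illustrative sketch, not Oxide's actual code: the QueryAdapterCaps helper and gDisableAsyncCompute flag are invented names, while the DXGI/D3D12 calls and PCI vendor IDs are the real ones.

```cpp
// Illustrative sketch only -- not Oxide's actual code.
#include <d3d12.h>
#include <dxgi.h>

static bool gDisableAsyncCompute = false;

void QueryAdapterCaps(ID3D12Device* device, IDXGIAdapter1* adapter)
{
    // PCI vendor IDs: 0x10DE = Nvidia, 0x1002 = AMD, 0x8086 = Intel.
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    if (desc.VendorId == 0x10DE)
    {
        // The driver reports async compute as available, but (per the
        // post above) using it on Maxwell hurt performance and
        // conformance, so it gets shut off for this vendor only.
        gDisableAsyncCompute = true;
    }

    // The capability-based (NOT vendor-specific) path: ask the driver
    // which resource binding tier it supports. Tier 2 needs a bit more
    // CPU-side descriptor bookkeeping in D3D12 than Tier 3.
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));
    if (options.ResourceBindingTier < D3D12_RESOURCE_BINDING_TIER_3)
    {
        // e.g. pick a more conservative descriptor-management strategy
    }
}
```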
 
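And a minimal sketch of the "opportunistic" async compute he mentions: a compute task that could have run on the graphics queue is instead submitted to a dedicated compute queue so it can overlap with rendering, with a fence guarding the dependency. Again illustrative only (AsyncComputeContext and the function names are invented); the D3D12 queue and fence APIs are real.

```cpp
// Illustrative sketch of opportunistic async compute in D3D12.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

struct AsyncComputeContext
{
    ComPtr<ID3D12CommandQueue> computeQueue;
    ComPtr<ID3D12Fence>        fence;
    UINT64                     fenceValue = 0;
};

void InitAsyncCompute(ID3D12Device* device, AsyncComputeContext& ctx)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute-only queue
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&ctx.computeQueue));
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&ctx.fence));
}

void SubmitAsyncCompute(AsyncComputeContext& ctx,
                        ID3D12CommandQueue* graphicsQueue,
                        ID3D12CommandList* computeWork)
{
    // Kick the compute work; it overlaps with whatever the graphics
    // queue is doing, which is where the GPU-side win comes from.
    ctx.computeQueue->ExecuteCommandLists(1, &computeWork);
    ctx.computeQueue->Signal(ctx.fence.Get(), ++ctx.fenceValue);

    // Make the graphics queue wait (on the GPU timeline) before it
    // consumes the compute results.
    graphicsQueue->Wait(ctx.fence.Get(), ctx.fenceValue);
}
```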
#5 ·
Is there a stand-alone benchmark, or do you have to buy the game?

edit: thanks JacopF
 
#6 ·
Up to more than 2x faster compared to DX11 on the 390X. Very nice.
 
#10 ·
I am almost suspicious of the gains for AMD. They bring up more questions than they answer, at least for me.

Why are they so massive? Is AMD just that bad at writing drivers for DX11? Is it intentional? Or do lower-level APIs just happen to suit AMD's particular architecture design? Are the results even valid?
 
#11 ·
Quote:
Originally Posted by PostalTwinkie View Post

I am almost suspicious of the gains for AMD. They bring up more questions than they answer, at least for me.

Why are they so massive? Is AMD just that bad at writing drivers for DX11? Is it intentional? Or do lower-level APIs just happen to suit AMD's particular architecture design? Are the results even valid?
Well, they're all pretty similar across the benchmarks. ExtremeTech had the best write-up. They attribute it to AMD having a lot of driver overhead in DX11. I think it may also have to do in part with AMD being a little more integrated into DX12 than Nvidia, via its Mantle code.
 
#12 ·
Quote:
Quite frankly, the notion of DX12 running slower than DX11 in some scenarios isn't what we expected to see. Whether it's a game-specific issue, or a driver-related one, you can be sure that Nvidia's engineering team are digging deep into this benchmark now in an effort to figure out what's going on. We'll update with any news.
I would bet money Nvidia's problems are related to the 0.5 GB of slow RAM on the GTX 970.

It would serve them right for gambling on this idea, but we won't know until someone does a test with a 960 and a 980.
 
#13 ·
Quote:
Originally Posted by Robenger View Post

Well, they're all pretty similar across the benchmarks. ExtremeTech had the best write-up. They attribute it to AMD having a lot of driver overhead in DX11. I think it may also have to do in part with AMD being a little more integrated into DX12 than Nvidia, via its Mantle code.
Well, I hope this is a poor example of DX12 then, because otherwise it seems pretty underwhelming.

Net result is Nvidia gains little over DX11. AMD gains a lot over DX11, but basically only comes up to Nvidia's DX11/DX12 level. So great if you have AMD hardware, but still not the knock-your-socks-off overall improvement that was hinted at with DX12.

edit: Basically, it'd have been nice to see both neck and neck on DX11, with appreciable gains for both on DX12.
 
#14 ·
Quote:
Originally Posted by DampMonkey View Post

Those gains from the 390x are insane
Those 390X DX11 results aren't representative of modern DX11 games. When have you seen a 390X run 30 fps slower than a 980 in a modern game? Never. The DX12 results show how both GPUs currently perform in DX11 games. I don't know what's up with their AMD DX11 results. We didn't get much more information from this benchmark than what we already knew: the 390X and 980 are on par. Did DX12 help AMD CPUs in any worthwhile way? Nope. Better thread usage was probably the #1 concern here, and it doesn't appear to have happened in this case.

We did get an fps boost in places, but decreases as well. So everything isn't fine and dandy.
 
#15 ·
What I gather from this: most of AMD's performance issues have been software related. They could not, for whatever reason, get their drivers and DX to work well together. But DX12 fixes that and gives them massive gains, while Nvidia sees smaller gains.

Or is Nvidia getting smaller gains because of poor DX12 drivers?

Either way, they are pretty much neck and neck now. Might see some price wars coming.
 
#17 ·
And today we have some pudding for everyone to play in. Over here we have the red pudding, and over there we have the green pudding. Everyone is allowed to play but please don't throw pudding across the table at the other players.
 
#19 ·
Quote:
Originally Posted by mutantmagnet View Post

I would bet money Nvidia's problems are related to the 0.5 GB of slow RAM on the GTX 970.

It would serve them right for gambling on this idea, but we won't know until someone does a test with a 960 and a 980.
Based on what? That "slow" 0.5 GB didn't cause performance issues in DX11 below 5K resolution.

EDIT:

Also, your theory doesn't explain the 980 Ti seeing marginal gains while AMD still sees massive gains. This is software, not hardware, I am betting. For all we know, AMD has been sandbagging DX11 support for a while now, to show "massive" gains on DX12 and the other APIs that are coming.

Well, maybe "sandbagging" is a bit harsh. Maybe this is the culmination of AMD focusing on newer APIs and less on the "outgoing" (it really isn't) DX11. This could be what dumping money into one thing, and not the other, looks like.
Quote:
Originally Posted by Ganf View Post

And today we have some pudding for everyone to play in. Over here we have the red pudding, and over there we have the green pudding. Everyone is allowed to play but please don't throw pudding across the table at the other players.
Too late! Shots fired!
 
#20 ·
Quote:
Consider Nvidia. One of the fundamental differences between Nvidia and AMD is that Nvidia has a far more hands-on approach to game development. Nvidia often dedicates engineering resources and personnel to improving performance in specific titles. In many cases, this includes embedding engineers on-site, where they work with the developer directly for weeks or months. Features like multi-GPU support, for instance, require specific support from the IHV (Independent Hardware Vendor). Because DirectX 11 is a high-level API that doesn't map cleanly to any single GPU architecture, there's a great deal that Nvidia can do to optimize its performance from within their own drivers. That's even before we get to GameWorks, which licenses GeForce-optimized libraries for direct integration as middleware (GameWorks, as a program, will continue and expand under DirectX 12).

DirectX 12, in contrast, gives the developer far more control over how resources are used and allocated. It offers vastly superior tools for monitoring CPU and GPU workloads, and allows for fine-tuning in ways that were simply impossible under DX11. It also puts Nvidia at a relative disadvantage. For a decade or more, Nvidia has done enormous amounts of work to improve performance in-driver. DirectX 12 makes much of that work obsolete. That doesn't mean Nvidia won't work with developers to improve performance or that the company can't optimize its drivers for DX12, but the very nature of DirectX 12 precludes certain kinds of optimization and requires different techniques.
- From the last page of the ExtremeTech article.
 
#21 ·
Quote:
Originally Posted by CasualCat View Post

Well, I hope this is a poor example of DX12 then, because otherwise it seems pretty underwhelming.

Net result is Nvidia gains little over DX11. AMD gains a lot over DX11, but basically only comes up to Nvidia's DX11/DX12 level. So great if you have AMD hardware, but still not the knock-your-socks-off overall improvement that was hinted at with DX12.

edit: Basically, it'd have been nice to see both neck and neck on DX11, with appreciable gains for both on DX12.
Well, that's really the point of DX12. It reduces CPU overhead and distributes the CPU workload across multiple threads. At the most basic level it's designed to alleviate CPU bottlenecks, not to improve GPU performance.

I'm sure once we start to see renderers implementing async compute, we will see GPU-side performance improvements as well.
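
To make the multithreading point concrete, here's a rough sketch of the standard D3D12 pattern: each worker thread records its slice of the frame into its own command list and allocator, and the main thread submits everything in one call. This is generic D3D12 practice, not any particular engine's code; RecordChunk and RenderFrame are invented names.

```cpp
// Illustrative sketch of multithreaded command recording in D3D12.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordChunk(ID3D12GraphicsCommandList* cl)
{
    // Record state changes and draw calls for this thread's slice of
    // the scene here. Under DX11 this work was serialized on one thread
    // inside the driver; here the app spreads it across cores itself.
    cl->Close();
}

void RenderFrame(ID3D12Device* device, ID3D12CommandQueue* queue,
                 unsigned numThreads)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocs(numThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(numThreads);
    std::vector<ID3D12CommandList*>                submit(numThreads);
    std::vector<std::thread>                       workers;

    for (unsigned i = 0; i < numThreads; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        submit[i] = lists[i].Get();
        workers.emplace_back(RecordChunk, lists[i].Get());
    }
    for (auto& w : workers) w.join();

    // One submission; the expensive recording work happened in parallel.
    queue->ExecuteCommandLists(numThreads, submit.data());
}
```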
 
#25 ·
Quote:
Originally Posted by Robenger View Post

- From the last page of the ExtremeTech article.
Basically, AMD sucks at developing drivers within the DX11 environment, whether through lack of ability or lack of finances to fund it, whereas Nvidia is able, for whatever reason, to provide the resources to really maximize DX11.

EDIT:

Not sure anyone should be surprised by this.
 