[Various] Ashes of the Singularity DX12 Benchmarks - Page 122 - Overclock.net - An Overclocking Community

post #1211 of 2682 (permalink) Old 08-29-2015, 04:41 PM
New to Overclock.net
 
 
Join Date: Apr 2012
Posts: 1,263
Rep: 88 (Unique: 76)
Quote:
Originally Posted by Kollock

Wow, there are lots of posts here, so I'll only respond to the last one. The interest in this subject is higher than we thought. The benchmark evolves primarily for our own internal testing, so it's pretty important that it be representative of the gameplay. To keep things clean, I'm not going to make very many comments on the concept of bias and fairness, as it can completely go down a rat hole.

Certainly I could see how one might think that we are working more closely with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;( ). Nvidia was actually a far more active collaborator over the summer than AMD was; if you judged from email traffic and code check-ins, you'd draw the conclusion we were working more closely with Nvidia rather than AMD. As you've pointed out, there does exist a marketing agreement for Ashes between Stardock (our publisher) and AMD. But this is typical of almost every major PC game I've ever worked on (Civ 5 had a marketing agreement with NVidia, for example). Without getting into the specifics, I believe the primary goal of AMD is to promote D3D12 titles, as they have also lined up a few other D3D12 games.

If you use this metric, however, given Nvidia's promotions with Unreal (and integration with GameWorks), you'd have to say that every Unreal game is biased, not to mention virtually every game that's commonly used as a benchmark, since most of them have a promotion agreement with someone. Certainly, one might argue that Unreal being an engine with many titles should give it particular weight, and I wouldn't disagree. However, Ashes is not the only game being developed with Nitrous. It is also being used in several additional titles right now, the only announced one being the Star Control reboot. (Which I am super excited about! But that's a completely different topic.)

Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only vendor-specific code is for Nvidia, where we had to shut down async compute. By vendor-specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature as functional, but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute, so I don't know why their driver was trying to expose it. The only other difference between them is that Nvidia falls into Tier 2 class binding hardware instead of Tier 3 like AMD, which requires a little more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor-specific path, as it's responding to capabilities the driver reports.

From our perspective, one of the surprising things about the results is just how good Nvidia's DX11 perf is. But that's a very recent development, with huge CPU perf improvements over the last month. Still, DX12 CPU overhead is far, far better on Nvidia, and we haven't even tuned it as much as DX11. The other surprise is the minimum frame times, with the 290X beating out the 980 Ti (as reported by Ars Technica). Unlike DX11, minimum frame times are mostly an application-controlled feature, so I was expecting them to be close to identical. This would appear to be GPU-side variance rather than software variance. We'll have to dig into this one.

I suspect that one thing helping AMD on GPU performance is that D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic: we just took a few compute tasks we were already doing and made them asynchronous. Ashes really isn't a poster child for advanced GCN features.

Our use of Async Compute, however, pales in comparison to some of the things the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers gaining 30% GPU performance by using Async Compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these GCN-built-and-optimized engines start coming to the PC. I don't think Unreal titles will show this very much, though, so likely we'll have to wait and see. Has anyone profiled Ark yet?

In the end, I think everyone has to give AMD a lot of credit for not objecting to our collaborative effort with Nvidia even though the game had a marketing deal with them. They never once complained about it, and it certainly would have been within their rights to do so. (Complain, anyway; we would have still done it.)

--
P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion arose because Nvidia PR was putting pressure on us to disable certain settings in the benchmark; when we refused, I think they took it a little too personally.
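
A minimal sketch of the two checks described above, assuming a plain D3D12/DXGI setup; the function and constant names here are illustrative, not Oxide's actual code. The Vendor ID branch is the vendor-specific path, while the binding-tier query is the capability-driven one:

Code:
#include <d3d12.h>
#include <dxgi1_4.h>

// Well-known PCI vendor ID for Nvidia.
constexpr UINT kVendorNvidia = 0x10DE;

// Hypothetical engine-side capability query.
bool QueryGpuCaps(IDXGIAdapter1* adapter, ID3D12Device* device,
                  bool& allowAsyncCompute,
                  D3D12_RESOURCE_BINDING_TIER& bindingTier)
{
    DXGI_ADAPTER_DESC1 desc = {};
    if (FAILED(adapter->GetDesc1(&desc)))
        return false;

    // Vendor-specific path: the driver may report async compute as usable,
    // but performance/conformance problems force it off on this vendor.
    allowAsyncCompute = (desc.VendorId != kVendorNvidia);

    // Capability-driven path: respond to the resource binding tier the
    // driver reports (Tier 2 needs more CPU-side descriptor management
    // than Tier 3).
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false;
    bindingTier = options.ResourceBindingTier;
    return true;
}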

I hate to double post, but I wanted to thank you for posting this. It is probably the single most illuminating thing I've read on the subject to date. For the record, I don't believe a single engine is ever a good measure of anything. Yes, UE4 and its prior iterations are among the most popular engines, but like you said, they definitely have had a bias, and I wouldn't call a single benchmark from a single engine enough to draw any conclusions, regardless of whose engine it is. Your agreement with AMD is what keeps me questioning; like I said in my last post, I don't mark it as a sign of foul play, merely something that raises questions. What I'd honestly love is a post from an NVIDIA representative talking about these performance numbers from their perspective, as well as more input from them as other DX12-enabled games launch (such as Ark), to better explain the choices they made at a hardware level and what their impact is.

PhantomTaco is offline  
post #1212 of 2682 (permalink) Old 08-29-2015, 04:46 PM
new to OCN?
 
 
Join Date: Aug 2011
Location: Venezuela
Posts: 26,368
Rep: 1536 (Unique: 924)
Quote:
Originally Posted by Kollock

[Kollock's reply, quoted in full above.]
What about Ashes of the Singularity's CPU performance? It is supposed to take advantage of more cores/threads (since Star Swarm showed a good gain with 6 cores), yet here the FX-83xx CPUs are as weak as they are on DirectX 11. Does this game use something that is weak on the AMD architecture/platform, or does it take advantage of fewer than 6 cores? And what about the results from PCPer showing that a 6700K is faster on DX12?
PontiacGTX is offline  
post #1213 of 2682 (permalink) Old 08-29-2015, 04:50 PM
New to Overclock.net
 
 
Join Date: Aug 2015
Posts: 39
Rep: 85 (Unique: 39)
Quote:
Originally Posted by Forceman

Wait, so all this analysis and conclusions about how async compute is going to make AMD's architecture the better one in the future, and this benchmark doesn't even use async compute on the Nvidia side?

AFAIK, Maxwell doesn't support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than not to.

Whether or not Async Compute is better is subjective, but it definitely does buy some performance on AMD's hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to its scheduler, is hard to say.
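
For reference, "making a compute task asynchronous" in D3D12 roughly means submitting it on a separate compute queue and synchronizing with a fence. A minimal sketch under assumed names (device, graphicsQueue, computeCmdList, fence, and fenceValue are presumed to exist; this is not Oxide's code):

Code:
// Create a dedicated compute queue alongside the graphics (direct) queue.
D3D12_COMMAND_QUEUE_DESC qdesc = {};
qdesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
ID3D12CommandQueue* computeQueue = nullptr;
device->CreateCommandQueue(&qdesc, IID_PPV_ARGS(&computeQueue));

// Kick off the compute work. On hardware with working async compute it
// can overlap work on the graphics queue; otherwise the driver may
// effectively serialize the two queues.
ID3D12CommandList* lists[] = { computeCmdList };
computeQueue->ExecuteCommandLists(1, lists);
computeQueue->Signal(fence, ++fenceValue);

// GPU-side wait: the graphics queue stalls only at the point where it
// actually consumes the compute results; the CPU is never blocked.
graphicsQueue->Wait(fence, fenceValue);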
Kollock is offline  
post #1214 of 2682 (permalink) Old 08-29-2015, 04:58 PM
New to Overclock.net
 
 
Join Date: Feb 2013
Location: Finland
Posts: 4,593
Rep: 218 (Unique: 113)
So basically Mahigan was spot on?

Kuivamaa is offline  
post #1215 of 2682 (permalink) Old 08-29-2015, 04:58 PM
You broke it!
 
 
Join Date: Apr 2012
Location: U.S.A
Posts: 14,194
Rep: 1090 (Unique: 561)
Quote:
Originally Posted by HalGameGuru

AMD does not have nVidia's track record of making their tech inaccessible or inefficient for competitors in a premeditated fashion. When TressFX came out and nVidia had trouble with it, AMD released the code to let nVidia optimize for it. Most AMD tech is made available for the industry as a whole to use and optimize: TressFX, FreeSync, Mantle, etc. nVidia could have made use of Mantle if they wished.

Adaptive-Sync is the open standard that is "free". FreeSync is AMD's closed, proprietary implementation, with its own private validation process. FreeSync itself is anything but open and free; there are still costs associated with it. AMD just doesn't charge a licensing fee to display the "FreeSync" logo, and they claim "free" because they ignore all the costs the manufacturer incurs to get the "FreeSync Approved" stamp.

Your statements really don't mean much.

EDIT:

On topic: It sounds like there is going to be huge potential for games to really, and I mean really, favor one side or the other. If Nvidia takes one approach in their architectures and AMD takes another, you will have publishers able to show an extreme bias toward one side.

At least that is a scary thought/possibility: further segmentation of the hardware market and games.
PostalTwinkie is offline  
post #1216 of 2682 (permalink) Old 08-29-2015, 05:15 PM
Linux Lobbyist
 
 
Join Date: Oct 2013
Posts: 451
Rep: 47 (Unique: 24)
Quote:
Originally Posted by PostalTwinkie

[PostalTwinkie's reply, quoted in full above.]

What methodology do devs need to use if they want to make use of Nvidia's "advanced" architecture? And I mean something really useful, not over-tessellation. And what segmentation do you mean? Don't you think Nvidia will implement more async compute in their next architecture to boost perf? Because I think I know what direction you are heading...
caswow is offline  
post #1217 of 2682 (permalink) Old 08-29-2015, 05:17 PM
New to Overclock.net
 
 
Join Date: May 2011
Posts: 764
Rep: 96 (Unique: 71)
Yeah, but Nvidia actually LOSES performance in DX12; that makes no sense at all.

I look at it this way -

Serial (DX11)
Nvidia - Has 1 guy who has to carry a large rock 100 yards. Trains a lot for this.
AMD - Has 1 guy who has to carry a large rock 100 yards. Doesn't train as much as Nvidia's guy.

Result: Nvidia's guy is faster.

Parallel (DX12)
Nvidia - Still has 1 guy who has to carry a large rock 100 yards.
AMD - Now has 2 guys who carry a large rock 100 yards.

Result: AMD's team is now on par with Nvidia's guy.

That just doesn't make sense from an Nvidia point of view. It's great that AMD's arch works better with DX12; that's good for all of us. But Nvidia shouldn't LOSE performance just because DX12 is more efficient. If anything they should GAIN, or at least stay the same.

Even if their current architecture causes DX12 to have to wait on Nvidia's GPU while the AMD guys are passing them, it's still not going to be SLOWER than DX11. Their single guy is still going to carry that rock just as fast. So they don't have two guys to carry the rock; so what? But if DX12 is slower, or even just the same, that would mean it's a complete failure on Nvidia cards, which again doesn't make sense. It's supposed to be faster for everyone.

I realize this is an extremely simple scenario, but hopefully you get the point I'm trying to make. A toy model of it follows below.
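
Putting the analogy in back-of-envelope numbers (an editor's toy model with made-up figures, not benchmark data): if graphics work costs G ms per frame and compute work costs C ms, a serial pipeline spends roughly G + C while an overlapped one spends roughly max(G, C), so overlap should only help or break even.

Code:
#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical per-frame costs in milliseconds (made-up numbers).
    const double G = 12.0; // graphics work
    const double C = 4.0;  // compute work

    std::printf("serial (one guy):      %.1f ms\n", G + C);           // 16.0
    std::printf("overlapped (two guys): %.1f ms\n", std::max(G, C));  // 12.0

    // Overlap can only match or beat serial execution, which is why a
    // DX12 loss suggests overhead elsewhere (scheduling, driver, CPU
    // submission paths) rather than a cost of parallelism itself.
    return 0;
}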

black96ws6 is offline  
post #1218 of 2682 (permalink) Old 08-29-2015, 05:21 PM
Linux Lobbyist
 
 
Join Date: Aug 2015
Location: Ottawa, Canada
Posts: 1,748
Rep: 874 (Unique: 233)
Quote:
Originally Posted by Kuivamaa

So basically Mahigan was spot on?

Oops!

Basically, Anandtech misled everyone when they stated that Maxwell supported Async Compute. That's why Razor1 and I had such a hard time on the Hardforums trying to figure out the difference between AMD's ACEs and Maxwell's AWSs.

AWSs are not independent; they don't function asynchronously. This is why I marked them as not having the capability of working "out of order" with error checking. They stall on pipeline dependencies, which is why Oxide disabled the code on nVIDIA's hardware at their request.

I feel vindicated after all the hate mail I've received.

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)
Mahigan is offline  
post #1219 of 2682 (permalink) Old 08-29-2015, 05:26 PM
New to Overclock.net
 
 
Join Date: Aug 2015
Posts: 61
Rep: 4 (Unique: 3)
Quote:
Originally Posted by Kuivamaa

So basically Mahigan was spot on?

Kinda sorta right.
dogen1 is offline  
post #1220 of 2682 (permalink) Old 08-29-2015, 05:51 PM
Linux Lobbyist
 
 
Join Date: Jul 2013
Location: Florida/Jamaica
Posts: 536
Rep: 32 (Unique: 20)
Quote:
Originally Posted by Kollock

[Kollock's reply to Forceman, quoted in full above.]
Quote:
Originally Posted by Kollock

[Kollock's longer reply, quoted in full above.]

Thanks for stopping by.

Quote:
I suspect that one thing helping AMD on GPU performance is that D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic: we just took a few compute tasks we were already doing and made them asynchronous. Ashes really isn't a poster child for advanced GCN features.

Figured as much. Other games, especially console ports, might use much more. Honestly, I want a cookie for this. Not the browser kind, either. It feels good when wild, ignorant speculation turns out to be correct.
semitope is offline  