[Various] Ashes of the Singularity DX12 Benchmarks
post #1205 - 08-29-2015, 03:46 PM
CrazyElf
Quote:
Originally Posted by PhantomTaco

By all means correct me if I'm wrong, but there are a few things I don't understand. For starters, are these theories based on the single Ashes of the Singularity benchmark? IIRC the game was developed with AMD helping the dev out. Would it be crazy to assume there were choices made that specifically improved performance for AMD? I'm not saying they necessarily made choices that hampered NVIDIA intentionally, or even directly, but if true I'd assume some choices would specifically benefit AMD while not helping, or potentially hurting, NVIDIA hardware. Assuming this is all still based on Ashes alone, that's a single engine. There are at least half a dozen other engines out there that either have DX12 support or have it coming, and they are not necessarily going to behave the same way, so doesn't it seem a bit too early to draw any conclusions from a sample size of 1?

As the other poster has indicated, the issue is that Nvidia has had access to the DX12 code for over a year.
http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/
Quote:
Our code has been reviewed by Nvidia, Microsoft, AMD and Intel. It has passed the very thorough D3D12 validation system provided by Microsoft, specifically designed to validate against incorrect usages. All IHVs have had access to our source code for over a year, and we can confirm that both Nvidia and AMD compile our very latest changes on a daily basis and have been running our application in their labs for months. Fundamentally, the MSAA path is essentially unchanged in DX11 and DX12. Any statement which says there is a bug in the application should be disregarded as inaccurate information.

...

Often we get asked about fairness, that is, usually in regards to treating Nvidia and AMD equally. Are we working closer with one vendor than another? The answer is that we have an open access policy. Our goal is to make our game run as fast as possible on everyone's machine, regardless of what hardware our players have.

To this end, we have made our source code available to Microsoft, Nvidia, AMD and Intel for over a year. We have received a huge amount of feedback. For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.

We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn’t move the engine architecture backward (that is, we are not jeopardizing the future for the present).


This would suggest to me that Nvidia knew what was coming, that there hasn't been excessive favoritism here, and that Nvidia even had the opportunity to contribute optimizations for their own hardware.



What Mahigan is saying is that, historically, Nvidia has relied heavily on driver-based optimizations. That has paid handsome dividends for DX11 performance. However, the way they have designed their architecture - serial-heavy - means that it will not do as well under DX12, which is much more parallel-intensive; a rough sketch of that difference follows.
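To make the serial-vs-parallel point concrete, here is a minimal, hypothetical C++ sketch (plain std::thread, not the actual D3D12 API; DrawCmd, record_serial and record_parallel are made-up names for illustration) of the difference between DX11-style submission, where one thread feeds the driver, and DX12-style submission, where several threads record command lists that are then handed to the GPU queue together.

Code:
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for a recorded draw command (hypothetical, illustration only).
struct DrawCmd { int mesh; int material; };

// DX11-style: a single thread records every draw call into one immediate context.
std::vector<DrawCmd> record_serial(int draws) {
    std::vector<DrawCmd> ctx;
    for (int i = 0; i < draws; ++i)
        ctx.push_back({i, i % 8});            // one core does all the work
    return ctx;
}

// DX12-style: each worker thread records its own command list; the lists are
// submitted together, so the CPU cost of recording scales across cores.
std::vector<std::vector<DrawCmd>> record_parallel(int draws, int workers) {
    std::vector<std::vector<DrawCmd>> lists(workers);
    std::vector<std::thread> pool;
    for (int w = 0; w < workers; ++w)
        pool.emplace_back([&, w] {
            for (int i = w; i < draws; i += workers)
                lists[w].push_back({i, i % 8});   // recorded concurrently
        });
    for (auto& t : pool) t.join();
    return lists;                                  // a real engine would submit these together
}

int main() {
    const int draws = 100000;
    auto serial   = record_serial(draws);
    auto parallel = record_parallel(draws, 4);
    std::printf("serial list: %zu draws, parallel lists: %zu\n",
                serial.size(), parallel.size());
}

The point is only that DX12 lets the engine spread command recording across cores; how well the GPU then consumes that work in parallel is where the architectural difference Mahigan describes would show up.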

The other point, of course, is the close relationship between Mantle, DX12 and Vulkan. AMD must have planned for this and built their architecture around it, even sacrificing DX11 performance (less money spent on DX11 drivers). In other words, if Mahigan's hypothesis is right, they played the long game.


Quote:
Originally Posted by PhantomTaco

This, though, does say something. I'm interested to see the UE4-based Ark launch its DX12 patch next week, to get some more data points to add. While it is nice to know that they did open the source code up, it doesn't entirely mean it is unbiased. As I recall, Oxide Games was one of the first to work with AMD on Mantle, meaning they had a past track record of working with AMD on developing their engine. In that respect it makes me wonder whether or not choices that specifically benefited AMD back with Mantle were repeated with Ashes. It also means (in theory at least) that AMD has had more than the past year working with Oxide on this title, whereas Intel and Nvidia have had a year. I'm not calling foul play, but I am still questioning the data until more titles are launched based on different engines.

Same here. I would like a bigger sample size to draw a definitive conclusion. See my response to Provost below for my full thoughts - I think that Mahigan's hypothesis is probable, but there are some mysteries.

Quote:
Originally Posted by Mahigan

Take Battlefield 4, a DX11 title that is heavy on draw calls (for a DX11 game).

PCIe 2.0 x8 is saturated already (8 GB/s). Now imagine all those CPU cores, now available in DX12, making draw calls on top of the textures etc. travelling over the bus. For an AMD system, this is further compounded by the slow HT 3.1 link (12.8 GB/s), and that's in the best-case scenario (990FX chipset). If you're using a 970 chipset, you're knocked down to HT 3.0, or 10.4 GB/s. The 3DMark API Overhead test isn't sending textures either (or any other heavy commands), it's only sending draw calls, so it really wouldn't show up on that test.

Again... just a theory.


The full review on TPU
https://www.techpowerup.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/

I suppose there's a process of elimination to work through. What is the Bulldozer/Steamroller architecture very weak at? There's raw single-threaded performance, and the module design isn't good at floating point, but there has got to be something specific.

The question is: what communicates between the GPU and CPU? That may be a good place to start. Another may be: what has Intel done decisively better?
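As a back-of-the-envelope check on the bus argument quoted above, here is a small, self-contained C++ sketch that computes the usual per-direction bandwidth figures for PCIe 2.0 and HyperTransport and compares them against a purely hypothetical draw-call command stream (the bytes-per-call, calls-per-frame and fps numbers are assumptions for illustration, not measurements).

Code:
#include <cstdio>

int main() {
    // PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> 500 MB/s per lane, per direction.
    const double pcie2_lane = 0.5;                    // GB/s
    const double pcie2_x8   = 8  * pcie2_lane;        // 4 GB/s
    const double pcie2_x16  = 16 * pcie2_lane;        // 8 GB/s

    // HyperTransport, 16-bit link, double data rate: 2 bytes * 2 transfers * clock.
    const double ht30 = 2.0 * 2.0 * 2.6;              // ~10.4 GB/s per direction (970 chipset)
    const double ht31 = 2.0 * 2.0 * 3.2;              // ~12.8 GB/s per direction (990FX chipset)

    // Hypothetical DX12 command traffic: assumed figures, for scale only.
    const double draws_per_frame = 50000.0;
    const double bytes_per_draw  = 64.0;
    const double fps             = 60.0;
    const double cmd_stream = draws_per_frame * bytes_per_draw * fps / 1e9;  // GB/s

    std::printf("PCIe 2.0 x8 : %.1f GB/s   x16   : %.1f GB/s\n", pcie2_x8, pcie2_x16);
    std::printf("HT 3.0      : %.1f GB/s   HT 3.1: %.1f GB/s\n", ht30, ht31);
    std::printf("Assumed draw-call stream: %.3f GB/s\n", cmd_stream);
}

Even with generous assumptions, the raw draw-call stream itself is tiny; the quoted concern is really about that extra traffic contending with texture and buffer uploads on an already narrower AMD platform link.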


Quote:
Originally Posted by provost

Plus rep, as you have hit the gist of the counter-argument on the head.
Mahigan's theory appeals to me because he has gone to great lengths to research and share his opinions as to why AMD's architecture works better than Nvidia's when developers properly utilize the benefits of DX12 to reduce overhead. All I have seen by way of counter-argument is why his theory won't hold due to yet-to-be-seen optimizations for Nvidia, which I interpret as follows:

a) until Nvidia catches up with the Pascal architecture, or
b) until developers have been incentivized enough to code away from consumer-friendly DX12 and put PC gamers back in the same position they were in with DX11, i.e. there ain't no such thing as free (lunch) performance; if you want more performance you've got to pay for it.

But no one has proposed a detailed alternative theory that demystifies the DX12 performance riddle of the GPU makers.

+Rep

This is basically where we are at:
  • We know that something is causing the DX12 leap on AMD's architecture. We don't know exactly what, but Mahigan's hypothesis is that it is the design of AMD's architecture itself, which they optimized for DX12, perhaps at the expense of DX11.
  • At the moment, AMD is at a disadvantage and needs that market share and mindshare. Combined with the GCN-based consoles, this may have narrowed the gap in their ability to drive future game development.
  • The opportunity for driver-based optimizations is far more limited in DX12, due to its "close to the metal" nature (see the sketch after this list).
  • Nvidia can and will catch up. They have the money and mindshare to do so. The question is when: Pascal? Or, if the fix is very compute-centric, they may have to wait for Volta.
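On the driver-optimization bullet above, here is a toy, fully self-contained C++ sketch - not real D3D code - of why the room for per-game driver fixes shrinks: under a DX11-style model the driver tracks resource state behind the application's back and can reorder or patch things up per title, while under a DX12-style model the engine records explicit barriers and the driver mostly executes what it is handed.

Code:
#include <cstdio>
#include <string>

// Toy model of the two APIs' division of labour (illustrative only, not real D3D).
enum class State { RenderTarget, ShaderResource };

struct Resource { std::string name; State state; };

// "DX11 style": the application just binds and draws; the driver tracks resource
// hazards and silently inserts whatever transitions/optimizations it likes.
void dx11_draw(Resource& tex) {
    if (tex.state != State::ShaderResource) {
        std::printf("[driver] auto-transition %s -> shader resource\n", tex.name.c_str());
        tex.state = State::ShaderResource;   // hidden from the app, free to be reordered
    }
    std::printf("[driver] draw using %s\n", tex.name.c_str());
}

// "DX12 style": the application records the barrier explicitly. The driver mostly
// translates what it is given, so there is far less room for per-game driver magic.
void dx12_record_barrier(Resource& tex, State after) {
    std::printf("[app]    explicit barrier on %s\n", tex.name.c_str());
    tex.state = after;
}

void dx12_record_draw(const Resource& tex) {
    if (tex.state != State::ShaderResource)
        std::printf("[app]    BUG: missing barrier on %s (undefined on real hardware)\n",
                    tex.name.c_str());
    std::printf("[app]    draw recorded using %s\n", tex.name.c_str());
}

int main() {
    Resource a{"shadow_map", State::RenderTarget};
    dx11_draw(a);                                   // driver fixes it up for you

    Resource b{"shadow_map", State::RenderTarget};
    dx12_record_barrier(b, State::ShaderResource);  // the engine's job now
    dx12_record_draw(b);
}

In the real API, the DX12 side corresponds to the application recording explicit resource-state transitions (D3D12 resource barriers) on its command lists, which is exactly the scheduling work that used to live inside the DX11 driver.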

I would agree that there hasn't been any well-researched, well-thought-out alternative hypothesis. That is not to say Mahigan's ideas are infallible - they are not, as we still do not have a conclusive explanation for why the Fury X does not scale very well (and apparently a second mystery now: the poor performance of AMD's CPUs). Left unresolved, those may require a substantial modification to any hypothesis. Personally, I accept that it is the most probable explanation right now.

I think that in the short term this may help stem the tide for AMD, perhaps for a generation or maybe two. But in the long run they are still at a disadvantage. They have been cutting R&D money for GPUs and focusing mostly on Zen, for example. AMD simply does not have the kind of money to spend; Nvidia is outspending them. In the long run, I fear there will be a reversal if they cannot come up with something competitive.

For AMD, though, it's very important that they figure out what the problem is, because they need to know where the transistor budget should go for the next generation (although admittedly, if the rumors are true, it has already taped out - it's important to keep in mind that GPUs are designed years in advance).




Remember everyone - it's best to have two GPU vendors that are very competitive with each other. That's when the consumer wins: we get the best performance at a competitive price. For that reason, I'm hoping that AMD actually wins the next GPU round - and that Zen is a success (IMO, an Intel monopoly is also bad for us). A monopoly is a loss for us.