Originally Posted by CrazyElf
As the other poster has indicated.
The issue is that Nvidia had access to the DX12 code for over a year: http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/
This would suggest to me that Nvidia knew what was coming and that there hasn't been excessive favoritism here; Nvidia even had the opportunity to contribute improvements for their own hardware.
What Mahigan is saying is that historically, Nvidia has relied heavily on driver-based optimizations. That has paid handsome dividends for DX11 performance. However, the way they have designed their architecture (serial-heavy) means it will not do as well on DX12, which is more parallel-intensive.
The other point, of course, is that Mantle is closely related to DX12 and Vulkan. AMD must have planned this and built their architecture around it, even sacrificing DX11 performance (less money spent on DX11 drivers). In other words, if Mahigan's hypothesis is right, they played the long game.
Same here. I would like a bigger sample size to draw a definitive conclusion. See my response to Provost below for my full thoughts - I think that Mahigan's hypothesis is probable, but there are some mysteries.
The full review on TPU: https://www.techpowerup.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/
I suppose there's a process of elimination. What is the Bulldozer/Steamroller architecture very weak at? Well, there's raw single-threaded performance, and the module design isn't good at floating point, but there's got to be something specific.
The question is, what communicates between the GPU and CPU? That may be a good place to start. Another may be, what has Intel done decisively better?
This is basically where we are at:
- We know that something is causing the DX12 leap on AMD's architecture. We don't know what, but Mahigan's hypothesis is that it's the design of AMD's architecture, which they optimized for DX12, perhaps at the expense of DX11.
- At the moment, AMD is at a disadvantage and needs that market/mind-share. Combined with GCN consoles, they may have narrowed the gap in their ability to drive future game development.
- The opportunity for driver-based optimizations is far more limited in DX12, due to its "close to metal" nature.
- Nvidia can and will catch up. They have the money and mindshare to do so. The question is when: Pascal? Or, if the issue is very compute-centric, the fix may have to wait for Volta.
I would agree that there hasn't been any well-researched, well-thought-out alternative hypothesis. That is not to say Mahigan's ideas are infallible; they are not, as we still do not have a conclusive explanation for why the Fury X does not scale very well (and apparently a second mystery now: the poor performance on AMD CPUs). Left unresolved, those may require a substantial modification to any hypothesis. Personally, I accept that it's the most probable explanation right now.
I think that in the short term this may help stem the tide for AMD, perhaps for a generation or maybe two. But in the long run they are still at a disadvantage. They have been cutting R&D money for GPUs and focusing mostly on Zen, for example. AMD simply does not have that kind of money to spend; Nvidia is outspending them. In the long run, I fear there will be a reversal if they cannot come up with something competitive.
For AMD, though, it's very important that they figure out what the problem is, because they need to know where the transistor budget should go for the next generation (although, admittedly, if the rumors are true, it has already taped out; it's important to keep in mind that GPUs are designed years in advance).
Remember, everyone: it's best to have two GPU vendors that are very competitive with each other. That's when the consumer wins. We want the best performance at a competitive price.
For that reason, I'm hoping that AMD actually wins the next GPU round, and that Zen is a success (IMO, an Intel monopoly is also bad for us). A monopoly is a loss for us.
Thank you for a very illuminating post. It's better than the other responses I got, which threw something out there with little background info or logic/support; rep for that alone.

Having access for the past year, like I said in my post, is great, but AMD has been working with Oxide for far longer than that. We know this, and it isn't new information by any means. Let me preface what I'm going to say next: this is all pure conjecture and speculation, but it's one reason why I still have my doubts. Let me also preface that I am not remotely an expert in APIs; I'm merely following the logic.

When you design something, anything really, you have a groundwork, a basis upon which you build everything else. If you started this groundwork in collaboration with someone, there is every reason to believe that both parties worked together to set up and establish the framework: how it runs at a very basic level, what is chosen to execute which types of commands, how they are executed, etc. Now, if you are doing so with someone who ALSO happens to be one of the people who will be making use of this going forward, profiting off of it (indirectly) and working as a company to make it as good as possible for themselves, there is going to be opportunity. Opportunity to really show off what your product is capable of, opportunity to shape the way things are done going forward. There's also opportunity to make decisions that will SPECIFICALLY benefit YOUR PRODUCTS at a fundamental level.

Am I saying it was done with ill intention? Not remotely. You want it to work well, and you want people who buy your product to feel like they made the right decision in choosing you. No harm, no foul on that whatsoever. There is also, potentially, a darker side: you could ACTIVELY choose to lay in a framework that INTENTIONALLY benefits your products OVER your competitor's.
And if it's something that was laid in at the onset, before anyone else had access, then by the time it's opened up to others it may be something that can no longer be changed without tearing down the entire foundation. Am I saying this is the case? No, not at all. Am I saying it is possible? Maybe. Intentional? I don't know, and whether it was intentional or not doesn't really prove anything either way. But this illustrates my point, because AMD has been tied in with Oxide since at least 2013, according to a Google News search. That means (in theory) they had at minimum a year's worth of time working with Oxide before NVIDIA or Intel got access to their new engine.
Yes, NVIDIA was given access, and Oxide does nicely point out a specific example of where NVIDIA made changes to improve performance on their hardware. That still doesn't account for the year (at minimum) AMD spent helping develop and lay the groundwork with Oxide. And yes, before anyone points it out, I see where it says their requirements include not being a loss for other hardware implementations and not driving the engine architecture backward. As I've understood it, DX12 and Mantle are both reinventing the way games are developed and rendered, giving developers options and power that were previously untapped and unconsidered, the full extent of which is going to take a good while to comb through and make use of. Who's to say that at the onset the choices made were obvious? That it was obvious that going with option A instead of option B would end up costing competitors potential performance opportunities? Who's to say that Oxide would have been able to tell, or even AMD, had it not been intended as such?
Provost's post is very thought-provoking and well laid out, but not without potential flaws/questions. Why does he interpret optimizations as meaning only new hardware or incentivizing developers? It may be something far simpler, it may be something far more complex; there is most certainly always another alternative. Most games are based on other makers' engines (UE4, Frostbite, etc.). It comes down to those engine developers making active decisions about the future of their engines based on the market. The market right now is very heavily NVIDIA-saturated, for better or worse (I'd argue for worse). So, if you're a developer making a new engine and you want it to run seamlessly on the majority of the market, which vendor would you choose to work with to ensure that? Probably the one with the most market share, so that your products run fantastically on their customers' hardware. So when a game comes out on an engine that happens to run better on the less established platform, it raises questions (not skepticism, just questions). Is it because that platform happens to be better suited to this new engine and API? If so, fantastic; that's a boon for everyone, because it brings that competitor back into the spotlight and lets them try to reclaim market share. But if it's because you've been working with them extensively for several years prior to even launching the game or the engine, then more questions arise.
As you've said, I've said, and I'm sure countless dozens of others have said, we are basing all of this on a single pre-release game and a single engine. It's the first foray into DX12, and already we're trying to draw conclusions, claim victories, and stem losses. I'm not in favor of any of this garbage. I'm most certainly with you in that I want AMD to succeed; I was on the verge of buying a Fury X and holding off on a 980 Ti until it came out, but it personally didn't appeal to me enough. That being said, I still see AMD as a very viable option, and I will recommend AMD at basically every level but the flagship, purely on price/performance. But all we have is a single data point, a single source to draw conclusions from, and that in and of itself invalidates any conclusions we can make; it's barely a stone's throw away from conjecture. All I'm adding to it is more conjecture, saying let's hold off, let's wait for more data to come out, and then we can start drawing conclusions.