Originally Posted by Majin SSJ Eric
Seems to me that this is all just a failure of DX12 itself to adhere to one standard regardless of vendor. Features like Async should just be part of the API whether or not certain hardware can take advantage of it. It shouldn't be left to vendors or developers to pick and choose which feature set they want to implement; it should all be standard. Now, I know very little about APIs and how they work, so maybe this comes off as sounding stupid to the guys who really understand this stuff, but it seems to me (an admitted layman) that DX12's implementation is far more fractured and sloppy than DX11's was, which is a shame considering I thought the point of it was to be low level and simple to utilize.
It just doesn't make sense to offer all kinds of differing features that may or may not ever be supported and just leave it up to everyone else to pick and choose which ones they will utilize a la carte.
TL;DR - There should just be one unified DX12 feature set that includes everything, and hardware will either be compatible or it won't. It makes much more sense now why Nvidia didn't even bother to include certain features at the hardware level, since they knew they could simply muscle everybody into coding for them first anyway...
Low level and simple can't co-exist across distinct architectures. AMD and Nvidia use different circuitry (different architectures) to handle the same type of workload, i.e. graphics/gaming, so the techniques that extract maximum efficiency from an AMD GPU differ from the techniques that extract maximum efficiency from an Nvidia GPU, and vice versa.
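In practice, an engine that ships vendor-specific optimizations usually branches on the adapter's PCI vendor ID at startup. Here is a minimal C++/DXGI sketch of that idea; the RenderPath enum and ChooseRenderPath helper are my own illustrative names, not from any real engine:

```cpp
#include <dxgi.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

enum class RenderPath { Generic, AmdOptimized, NvidiaOptimized };

// Pick a code path based on which vendor's GPU is installed.
RenderPath ChooseRenderPath()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return RenderPath::Generic;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))   // adapter 0 = primary GPU
        return RenderPath::Generic;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    switch (desc.VendorId) {                           // PCI vendor IDs
        case 0x1002: return RenderPath::AmdOptimized;     // AMD
        case 0x10DE: return RenderPath::NvidiaOptimized;  // Nvidia
        default:     return RenderPath::Generic;
    }
}
```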
If a rough analogy helps: say the goal/workload is to remove screws as fast as possible. The most efficient way is the matching screwdriver (a + driver for a + screw, a - driver for a - screw, and so on). But what if we don't have the right screwdriver at that moment? We can improvise: use a spoon, a flat sharp piece of metal, or shape some scrap into something screwdriver-like that fits the screw head well enough to turn it. I think you and I both agree the proper screwdriver would have been the best way to achieve the goal (remember, unscrewing as fast as possible), both in handling and in speed, and it doesn't demand the extra energy/resources we spent turning a spoon or scrap metal into a makeshift driver.
AMD and Nvidia use different architectures (screws of different shape/size/design), but their goal is the same (here, getting the game to run as fast as possible). Now comes the point about low level, which I think is the only thing you are misunderstanding when you say "low level and simple". Low level for AMD means using the right methods (the matching screwdriver) to get the job done (unscrewing = gaming, as fast as possible in fps and of course in smoothness too), and likewise for Nvidia. All this means is that the low-level methods optimal for AMD GPUs will not be optimal for Nvidia GPUs, and vice versa. You can still create a hack/workaround (remember the spoon) to do the job; this is exactly what Nvidia is doing in all these AMD async-oriented games, but it is slower in terms of energy/resources spent per fps. Obviously a 980 is faster than a 280X, but you know what I mean.
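For context, the D3D12 mechanics behind "async compute" are simple to express: the application creates a second command queue of type COMPUTE alongside the graphics queue, and the hardware/driver decides whether the two actually execute in parallel. That last part is exactly where the vendors' architectures diverge. A minimal sketch, with error handling omitted:

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create one graphics (DIRECT) queue and one dedicated COMPUTE queue.
// Whether work submitted to the two queues overlaps on the GPU is up to
// the hardware and driver -- the API only expresses the opportunity.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute + copy only
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));
}
```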
You could say DX9/10/11 were the awkwardly shaped screwdrivers: heavy in the hand, slightly off-shape, needing constant adjustment while you work, or whatever other annoyance gets on your nerves. In other words: more coding, more resource consumption, limits on your own creativity, various restrictions imposed by Microsoft, with AMD and Nvidia left to optimize through drivers around whatever architecture they had. Some game developers didn't like the Microsoft screwdriver; they wanted more control over how graphics are rendered (to unscrew with smooth fluidity and at a steady rate, or to turn several screws at once, which the old Microsoft screwdriver wouldn't let them do).
So new techniques were designed instead, and this is where DX12 comes in. But there are two GPU architecture makers, AMD and Nvidia, and now that you understand the low-level idea, you can see why "low level and simple" just can't co-exist across different architectures.
AMD implemented path type A in DX12 (with the help of low-level game developers, of course) so that AMD GPUs can be optimally utilized, and those developers are happy too, since they helped design the API and have full control down to the hardware level. (The raw GFLOPS of a given GPU is a separate matter, of course; I can't ride my bicycle faster than a motorbike even with full control of my bicycle, lol.)
Nvidia implemented path type B in DX12 (again with the help of low-level game developers) so that Nvidia GPUs can be optimally utilized. I am not being biased here at all: both vendors' GPUs work optimally on their own level of DX12.
The thing fueling this war between fanboys is that many developers think AMD's hardware is simply better at this kind of fine-grained control, and better overall at multitasking, multithreading with minimal overhead, and parallelizing different types of loads that are independent of each other (asynchronous compute). The result is more work done per unit of resource, which means more graphics detail, more of everything. And as you have seen, most DX12 games released so far were designed by developers with AMD's technology in mind; this is what we mean when we say they used AMD's render path.
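To make "parallelizing independent loads" concrete, here is a hedged sketch of how a D3D12 title might overlap compute work (say, a particle simulation) with unrelated graphics work, then synchronize with a fence; the function and parameter names are illustrative, not from any shipping engine:

```cpp
#include <d3d12.h>

// Submit independent work to two queues so the GPU has the chance to run
// them concurrently, then make the graphics queue wait on the compute result.
void SubmitOverlappingWork(ID3D12CommandQueue* graphicsQueue,
                           ID3D12CommandQueue* computeQueue,
                           ID3D12CommandList*  gfxList,
                           ID3D12CommandList*  computeList,
                           ID3D12Fence*        fence,
                           UINT64              fenceValue)
{
    // Kick off compute work (e.g. a particle simulation) on its own queue...
    computeQueue->ExecuteCommandLists(1, &computeList);
    computeQueue->Signal(fence, fenceValue);

    // ...while graphics work that does not depend on it runs in parallel.
    graphicsQueue->ExecuteCommandLists(1, &gfxList);

    // Only work submitted after this Wait() depends on the compute result.
    graphicsQueue->Wait(fence, fenceValue);
}
```

How much of that overlap actually happens on silicon is the whole AMD-vs-Nvidia async debate: the API call sequence is identical on both.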
The issue here is that 3DMark did not use AMD's render path. I 100% accept, and it is perfectly fine, that they used Nvidia's render path, since Nvidia is using the right screwdriver for its own screws. But why did they not even provide an option to select the path matching the installed GPU and let us compare the scores, especially when the leading AAA studios have been using AMD's render path? That, I think, is extremely debatable, and it is why some of us are skeptical about whether 3DMark is giving PC gamers an unbiased benchmarking tool.
In my opinion, the hottest question of 2016 is whose render path developers will build around:
Nvidia's, backed by its huge market share of ~70%+?
Or AMD's, with the load-handling techniques/architecture that developers secretly want?