That's how it's been for a very long time. When Fermi was new, it was massive amounts of tessellation (NV) vs. more restrained use of the feature (AMD). In the DX10 era it was DX10.1 vs. 10.0, and debates over image quality and whether 10.1 was ever actually going to be supported. It's been GDDR4 vs. GDDR3, and brilinear vs. trilinear filtering (NV lost that one pretty hard, since brilinear filtering was a hack to boost GeForce FX performance). It's been Shader Model 2.0b vs. Shader Model 3.0. Way back in the day, it was 32-bit vs. 16-bit color -- NV cards had much better 16-bit color than ATI, which tended to have real problems with 16-bit, while ATI's 32-bit color was superior but slower. Heck, I remember when people talked about 2D image quality, because it was a differentiation point between companies like Matrox, NV, and ATI.
The reason you're seeing such a knock-down, drag-out fight on various issues of DX12 is that a lot of people don't understand what is and isn't part of the DX12 specification. This leads to allegations of cheating and claims that X or Y doesn't support (or is lying about) the ability to perform various types of computation. The degree to which such allegations are true often hinges on exactly what's being claimed and how the larger picture is represented. I still see people claiming that supporting DirectX 12 at Feature Level 12_0 is a damning problem (implying that everyone needs at least FL 12_1), or that GPUs that only implement FL 11_0 are somehow not "really" supported by DX12.
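To make the feature-level point concrete, here's a rough sketch (plain C++ against the public D3D12 API, not pulled from any particular game or vendor sample) of what "supports DX12" actually means at the API level: D3D12CreateDevice will happily create a device on hardware that only offers D3D_FEATURE_LEVEL_11_0, because that's the minimum feature level the DX12 runtime accepts. FL 12_0 and 12_1 layer optional hardware features on top; they don't decide whether a card is a DX12 GPU.

```cpp
// Rough sketch: a GPU that only implements Feature Level 11_0 is still a
// legitimate DX12 device -- device creation at FL 11_0 is all the runtime
// requires. (Assumes d3d12.lib is linked; adapter selection is skipped.)
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

bool IsDx12Capable()
{
    ComPtr<ID3D12Device> device;
    // nullptr = default adapter; FL 11_0 is the minimum DX12 will accept.
    HRESULT hr = D3D12CreateDevice(nullptr,
                                   D3D_FEATURE_LEVEL_11_0,
                                   IID_PPV_ARGS(&device));
    return SUCCEEDED(hr);
}
```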
Of course, this is when someone usually pops in and says: "Well, actually, those differences could be important in future titles." And they could be! But people are usually more interested in scoring rhetorical points than in understanding the underlying technological details -- and there's a big difference between "We might see long-term differences in how games perform based on DX12 feature level support" and "X cheats on Y because it isn't a 'real' DX12 GPU." All too often people treat the latter as a stand-in for the former, then justify an over-the-top emotional response based on a flawed understanding of the scope of the problem.
Asynchronous compute is kind of a poster child for this. It's been treated as a make-or-break DX12 capability, as though it and it alone determines whether or not a game is "actually" DX12 compatible. In reality, asynchronous compute is just one way DX12 can schedule compute work. It isn't mandated by the spec, and the hardware-level differences between AMD and NV basically guarantee that a developer who wants to use the feature on both Pascal and GCN will have to write two different implementations to do so. NV is apparently locking off async compute on Maxwell, presumably to safeguard the end-user experience (async compute code and Maxwell have not generally gotten along very well).
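For anyone who hasn't looked at the API side of this, here's roughly what "using async compute" looks like to a developer -- just a sketch, assuming a device already exists, not anyone's production code. You create a second command queue of type D3D12_COMMAND_LIST_TYPE_COMPUTE and submit compute work there while graphics work runs on the direct queue. Nothing in the spec obliges the driver or hardware to actually overlap the two queues, which is a big part of why the "is it real DX12" argument gets so muddled.

```cpp
// Rough sketch: "async compute" in D3D12 is expressed as a separate
// compute-only queue alongside the normal direct (graphics) queue.
// Whether the two actually execute concurrently is up to the driver
// and hardware -- the API only defines the submission model.
// (Assumes `device` is an already-created ID3D12Device*.)
D3D12_COMMAND_QUEUE_DESC computeDesc = {};
computeDesc.Type  = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue
computeDesc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

Microsoft::WRL::ComPtr<ID3D12CommandQueue> computeQueue;
device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

// Graphics command lists go to the direct queue, compute command lists to
// computeQueue; synchronization between the two is done with ID3D12Fence.
```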
People can argue about the extent to which locking the feature off constitutes deceptive marketing or a reversal of NV's early messaging on DX12, but from the end-user's perspective, what matters is ensuring high performance and a good experience. If AMD continues to demonstrate strong gains under DX12, it may pick up some market share on the strength of those improvements -- but either way, Team Red and Team Green are playing to their respective strengths.