
3dmark Nvidiadvantage

Probably as a result of having too much free time after my recent high school graduation, I've had the strange desire to write about something. Writing isn't something I've been particularly interested in doing in the past, but every once in a while a problem or issue comes along that just makes me want to get my opinion out there, and that's exactly what happened while reading about the Nvidia-PhysX-Vantage "scandal", so I'm offering my take on the situation...

I do want to start by saying that I don't consider myself a fanboy. I'll admit that I lean towards the ATI side, but I don't think it's enough to be considered fanboyism; it's more the result of ATI providing a much better driver experience on Vista than Nvidia has with my 8600M.

That being said, the first thing I want to address is the accusation of Nvidia bias in the Vantage test, or rather what I see as the lack of it. I don't find the test to be particularly unfair. Futuremark typically builds each 3DMark release as a forward-looking benchmark, incorporating techniques and technologies they expect to see increasing use in game development. They obviously looked at physics acceleration, saw that it had potential, and decided to include it in their benchmark. I don't think it's any more complicated than that, and I can't find anything wrong with wanting to benchmark physics acceleration, since it was the "hot new thing" at the time. But obviously it's not the mere inclusion of a physics benchmark that's the issue; it's the fact that they used PhysX, which is now owned by Nvidia. Even so, I still don't think that's necessarily bias. The decision to include PhysX support was probably made before Nvidia announced the purchase. The only way PhysX support could really be called biased would be if, once a GPU Havok API is released by AMD, Futuremark refused to implement a Havok test without giving a reasonable explanation and chose to support PhysX exclusively. That isn't the situation today, and I fail to see any bias on the part of Futuremark (yet).

Looking at it from the opposite perspective, Nvidia has also been blamed for this "debacle" because their drivers boost the score significantly. I suppose this could be called "cheating" under Futuremark's approved-driver policy, but I don't think it is. From what I can tell, the driver isn't using a cheap or underhanded trick to inflate the scores. Nvidia bought PhysX, they have a right to the technology, they've implemented it on their GPUs, and all this test really does is show that off. I'm not the biggest fan of Nvidia and their tactics in the video game industry (read: TWIMTBP), but I can't honestly call this cheating, because I'm just not seeing it. There are plenty of other reasons Nvidia can be demonized for their actions, but this isn't one of them.

I've also seen comments that Futuremark should add a DX10.1 test to make up for ATI's lack of PhysX. Although I'm for a DX10.1 feature test (which would benefit Nvidia cards too, since they support some DX10.1 features, just not full compliance), there's no way it could somehow "replace" a physics test for ATI cards. They aren't even remotely the same thing. When you run PhysX acceleration alongside GPU rendering, both tasks compete for the limited resource of the GPU's processing power, whereas DX10.1 is just a different way of doing the rendering, regardless of whether physics processing is going on. The only thing Futuremark really needs to do to make it fairer for ATI is add Havok support when it's available, but until then I don't see a problem with including only PhysX. As I said earlier, I think they should also include a DX10.1 feature test, since Assassin's Creed showed us that it does bring performance benefits as well as slight image quality improvements.

Another thing: benchmarking physics and rendering individually gives you the result when physics has full priority over the GPU's resources and, separately, when rendering has full priority. This produces very inflated scores compared to the actual gaming performance you could expect, since in a real game PhysX and rendering would be competing for those same resources. The other way of looking at it is that this actually gives an additional metric by which video cards can be compared, because you now have both the GPU score and the physics score (which is labeled as the CPU score).

From what I can tell, a big portion of the outcry is about how much of an effect this has on the total score, which would effectively render Vantage useless for comparing two different systems unless both are running Nvidia. But it's not like you *have* to test the physics capability. At the very worst, all this does is make Vantage scores that include GPU PhysX as worthless for comparison as 3DMark06 numbers run at non-standard settings or resolutions. In my opinion, this whole thing has been blown quite incredibly out of proportion.
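To put some rough numbers on that total-score complaint, here's a minimal Python sketch. The weighted-harmonic-mean formula, the weights, and the sub-scores are all made up for illustration and aren't Futuremark's actual Vantage math; the point is just that inflating the physics/CPU sub-score pulls the total up even when rendering performance is identical.

```python
# Hypothetical example only: the combining formula, weights, and numbers
# below are assumptions for illustration, not Futuremark's real Vantage math.

def total_score(gpu_score, cpu_score, gpu_weight=0.85, cpu_weight=0.15):
    """Combine sub-scores with an assumed weighted harmonic mean."""
    return 1.0 / (gpu_weight / gpu_score + cpu_weight / cpu_score)

gpu = 6000                # same rendering performance in both cases
cpu_only_physics = 4000   # physics test run on the CPU
gpu_physx = 16000         # same test with GPU-accelerated PhysX

print(f"CPU physics: total = {total_score(gpu, cpu_only_physics):.0f}")
print(f"GPU PhysX:   total = {total_score(gpu, gpu_physx):.0f}")
# The second total comes out roughly 15-20% higher purely because of the
# physics sub-score, which is why cross-vendor comparisons break down.
```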
