
[Various] Futuremark Releases 3DMark Time Spy DirectX 12 Benchmark - Page 54

post #531 of 772
Quote:
Originally Posted by Remij View Post

Actually, didn't you say that, if anything, Oxide was biased towards Nvidia with the Ashes benchmark because they made an IHV-specific code path for Nvidia?

I was quoting Kollock from Oxide... he is the one who said that.
Quote:
Nvidia was actually a far more active collaborator over the summer than AMD was. If you judged from email traffic and code check-ins, you'd draw the conclusion we were working more closely with Nvidia than with AMD.
Quote:
Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only 'vendor'-specific code is for Nvidia, where we had to shut down async compute. By vendor-specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature as functional, but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware.
Source http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1200#post_24356995

Full post
Quote:
Wow, there are lots of posts here, so I'll only respond to the last one. The interest in this subject is higher than we thought. The primary use of the benchmark is for our own internal testing, so it's pretty important that it be representative of the gameplay. To keep things clean, I'm not going to make very many comments on the concept of bias and fairness, as it can completely go down a rat hole.

Certainly I could see how one might think we are working more closely with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;( ). Nvidia was actually a far more active collaborator over the summer than AMD was; if you judged from email traffic and code check-ins, you'd draw the conclusion we were working more closely with Nvidia than with AMD. ;) As you've pointed out, there does exist a marketing agreement for Ashes between Stardock (our publisher) and AMD. But this is typical of almost every major PC game I've ever worked on (Civ 5 had a marketing agreement with NVidia, for example). Without getting into the specifics, I believe the primary goal of AMD is to promote D3D12 titles, as they have also lined up a few other D3D12 games.

If you use this metric, however, given Nvidia's promotions with Unreal (and integration with GameWorks), you'd have to say that every Unreal game is biased, not to mention virtually every game that's commonly used as a benchmark, since most of them have a promotion agreement with someone. Certainly, one might argue that Unreal, being an engine with many titles, should be given particular weight, and I wouldn't disagree. However, Ashes is not the only game being developed with Nitrous. It is also being used in several additional titles right now, the only announced one being the Star Control reboot. (Which I am super excited about! But that's a completely different topic. ;) )

Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only vendor-specific code is for Nvidia, where we had to shut down async compute. By vendor-specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature as functional, but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware. As far as I know, Maxwell doesn't really have async compute, so I don't know why their driver was trying to expose it. The only other difference between them is that Nvidia falls into Tier 2 class binding hardware instead of Tier 3 like AMD, which requires a little more CPU overhead in D3D12, but I don't think it ended up being very significant. That isn't a vendor-specific path, as it's responding to capabilities the driver reports.

From our perspective, one of the surprising things about the results is just how good Nvidia's DX11 performance is. But that's a very recent development, with huge CPU performance improvements over the last month. Even so, DX12 CPU overhead is still far, far better on Nvidia, and we haven't even tuned it as much as DX11. The other surprise is the minimum frame times, with the 290X beating out the 980 Ti (as reported by Ars Technica). Unlike DX11, minimum frame times are mostly an application-controlled feature, so I was expecting them to be close to identical. This would appear to be GPU-side variance rather than software variance. We'll have to dig into this one.

I suspect that one thing helping AMD on GPU performance is that D3D12 exposes async compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable performance improvement. It was mostly opportunistic: we just took a few compute tasks we were already doing and made them asynchronous. Ashes really isn't a poster child for advanced GCN features.

Our use of async compute, however, pales in comparison to some of the things the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting a 30% GPU performance gain by using async compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these engines built and optimized for GCN start coming to the PC. I don't think Unreal titles will show this very much, though, so we'll likely have to wait and see. Has anyone profiled Ark yet?

In the end, I think everyone has to give AMD a lot of credit for not objecting to our collaborative effort with Nvidia, even though the game had a marketing deal with AMD. They never once complained about it, and it certainly would have been within their rights to do so. (Complain, anyway; we would have still done it. ;) )

--
P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR; that is, they are not disputing anything in our blog. I believe the initial confusion arose because Nvidia PR was putting pressure on us to disable certain settings in the benchmark; when we refused, I think they took it a little too personally.
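
For illustration, the two mechanisms Kollock distinguishes above, a hard vendor-ID check versus responding to capabilities the driver reports, might look roughly like this in D3D12. This is a hypothetical sketch, not Oxide's actual code; RenderConfig and kVendorNvidia are invented for the example.
Code:
// Hypothetical sketch, not Oxide's code: a hard vendor-ID check (the kind of
// path Kollock calls "vendor specific") next to a capability query (the kind
// he says is not). RenderConfig and kVendorNvidia are invented for clarity.
#include <d3d12.h>
#include <dxgi.h>

constexpr UINT kVendorNvidia = 0x10DE;  // PCI vendor ID for NVIDIA

struct RenderConfig {
    bool useAsyncCompute = true;
    bool expectExtraBindingOverhead = false;
};

RenderConfig ConfigureForAdapter(IDXGIAdapter1* adapter, ID3D12Device* device)
{
    RenderConfig cfg;

    // (1) Vendor-specific path: the driver may claim async compute works,
    // but we override based on who made the hardware.
    DXGI_ADAPTER_DESC1 desc = {};
    if (SUCCEEDED(adapter->GetDesc1(&desc)) && desc.VendorId == kVendorNvidia)
        cfg.useAsyncCompute = false;  // "we had to shut down async compute"

    // (2) Capability-driven path: not vendor-specific, just responding to
    // what the driver reports (Tier 2 vs Tier 3 resource binding).
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts))))
        cfg.expectExtraBindingOverhead =
            opts.ResourceBindingTier < D3D12_RESOURCE_BINDING_TIER_3;

    return cfg;
}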

Edited by Mahigan - 7/18/16 at 5:34pm
post #532 of 772
Quote:
Originally Posted by Mahigan View Post

I was quoting Kollock from Oxide... he is the one who said that.

Source http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1200#post_24356995


My mistake. Sorry about that. :o
post #533 of 772
Quote:
3DMark is not a game and does not represent games at all.

3DMark does not represent the performance of any particular game, no. If 3DMark's performance told us nothing about GPU performance in games, however, no one would use it (except for the low-level feature tests, as already covered). I do not expect 3DMark to be the be-all, end-all of game performance, any more than I expect to capture the full range of game performance with any other single benchmark. I don't use 3DMark's high-level gaming tests all that often, as I've already said, though I do appreciate its feature tests and sometimes its early support for emerging technology.

Quote:
It's difficult to build an unbiased benchmark with interactivity that actually represents real-world performance

I agree.
Quote:
Also, it is really difficult to build a custom engine and custom hardware paths for each architecture with limited money

I agree.
Quote:
Also, this product is a free upgrade for those who already had 3DMark, and I doubt that many will spend 5 bucks on the upgrade.

I don't really have a position on this, pro or con.
Quote:
Still, I answered all your questions. Still, you confuse whether 3DMark can represent real-world situations. Instead you explain objectivity to me.

Two things:

First, I really *don't* confuse whether or not 3DMark represents real-world situations, or the degree to which it does so. I've been around the block a time or two, long enough to remember when 3DMark 99 was the new hotness and the controversy over 3DMark 2000's software T&L engine as compared to NV's hardware T&L performance.

Second, the *point* I was making about objectivity was in direct response to your hand-waved ideas about what a "neutral" benchmark path would look like. Creating a "neutral" benchmark that plays to the strengths of everyone's architectures, while simultaneously ensuring identical visual output and keeping everyone honest in the process, is not trivial. In fact, it's incredibly difficult, for the reasons I've already described in previous posts.
post #534 of 772
Quote:
Originally Posted by DigiHound View Post

...long enough to remember when 3DMark 99 was the new hotness and the controversy over 3DMark 2000's software T&L engine as compared to NV's hardware T&L performance.

Funny you should mention that.

MadOnion's implementation was "only fair" to the lowest common denominator: 3dfx and its VSA-100 architecture, which lacked hardware T&L. That is why I've stated elsewhere that 3DMark hasn't been relevant for 15 years.
post #535 of 772
Which is why, to be a true benchmark, it must showcase performance both with and without these options, and how performance changes based on how much each option is used.

3DMark is officially junk status until they start doing this.
post #536 of 772
Quote:
Originally Posted by STEvil View Post

Which is why, to be a true benchmark, it must showcase performance both with and without these options, and how performance changes based on how much each option is used.

3DMark is officially junk status until they start doing this.

Uh, we've actually asked the graphics card vendors about the idea of making vendor-specific code paths in 3DMark, and they have been against it. Such optimizations almost inevitably require subtly altering the rendering output to suit the strengths of each architecture. That's doable in a game, where 5 fps more is a fair trade for very minute differences in rendering quality, but it's not a great idea for a fair, impartial benchmark.

3DMark sends the exact same command lists to every card/driver and expects the exact same rendering output. How else can you fairly assess what each piece of hardware is capable of?
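
One way to enforce "expects the exact same rendering output" would be to read the final frame back and compare it byte for byte. A minimal sketch, assuming the frame has already been copied from the GPU into CPU-visible memory (the D3D12 readback plumbing is omitted) and using FNV-1a as an arbitrary choice of hash:
Code:
// Minimal sketch: check that two drivers produced bit-identical output for
// identical command lists by hashing the readback bytes of the final frame.
// Assumes the frame was already copied to CPU memory; FNV-1a is arbitrary.
#include <cstdint>
#include <vector>

uint64_t HashFrame(const std::vector<uint8_t>& pixels)
{
    uint64_t h = 1469598103934665603ull;   // FNV-1a 64-bit offset basis
    for (uint8_t b : pixels) {
        h ^= b;
        h *= 1099511628211ull;             // FNV-1a 64-bit prime
    }
    return h;
}

bool SameOutput(const std::vector<uint8_t>& frameA,
                const std::vector<uint8_t>& frameB)
{
    return HashFrame(frameA) == HashFrame(frameB);
}

Worth noting: bit-exactness is stricter than D3D12 actually guarantees across vendors (floating-point and filtering precision can differ within spec), so a real comparison would more likely use a per-pixel tolerance than a single hash.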
post #537 of 772
By weight testing, of course, just like the command queue test is done for DX11/12/Mantle.

Leave Time Spy as it is, but use some smaller tests like in the old days of 99 Max/2000/01, which showed how render speed changed based on scene complexity or code changes. It's less practical to do this now that the bench is released, but 3DMark has been getting less practical and less useful for comparing hardware between manufacturers with each generation anyway.

Tessellation should be done this way as well...
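
The "weight testing" idea could be as simple as sweeping a single workload knob and recording how frame time scales at each step. A sketch of the sweep structure only; RenderFrame() here is a hypothetical stand-in for the benchmark's actual draw loop:
Code:
// Sketch of "weight testing": sweep one workload knob (here, the
// tessellation factor, which D3D caps at 64) and record ms/frame at each
// step. RenderFrame() is a hypothetical stand-in for the real draw loop.
#include <chrono>
#include <cstdio>

static void RenderFrame(float tessFactor)
{
    // Stand-in: a real implementation would issue the scene's draws with
    // hull/domain shaders set to the given tessellation factor.
    volatile float sink = tessFactor;
    (void)sink;
}

int main()
{
    const int kFramesPerStep = 500;
    for (float factor = 1.0f; factor <= 64.0f; factor *= 2.0f) {
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < kFramesPerStep; ++i)
            RenderFrame(factor);
        std::chrono::duration<double, std::milli> elapsed =
            std::chrono::steady_clock::now() - start;
        std::printf("tess factor %5.1f: %.3f ms/frame\n",
                    factor, elapsed.count() / kFramesPerStep);
    }
    return 0;
}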
post #538 of 772
So we should soon expect a lot of games to be using Nvidia's way of doing "async" then...
post #539 of 772
Also, did anyone see this?

http://forums.anandtech.com/showthread.php?t=2480259&page=5
Quote:
You cannot make a fair benchmark if you start bolting on vendor-specific, or even generation-specific, architecture-centered optimizations.

DX12 is a standard. We made a benchmark according to the spec; it's up to the graphics card vendors how their products implement the spec (if they don't follow it, MS won't certify the drivers, so they do follow it).

Beyond that, we will be publishing an official clarification on this issue, probably later today or tomorrow. I fear it won't placate all the people who are going nuts over this with their claims, but we'll do our best.
Quote:
The question is: do you allow IHVs to cheat on their score? When the benchmark asks the GPU to run parallel queues (or concurrent tasks), the driver should do exactly what the benchmark says, not merge them into a single queue. Those Maxwell scores are invalid. I have no problem with GCN and Pascal, but Maxwell I can't accept.

If you can't do something, your score should simply be lower than the one who can do it.
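
For context, "parallel queues" at the API level just means the application submits work on separate ID3D12CommandQueues and only joins them with a fence. Whether the GPU actually overlaps the two workloads (GCN, Pascal) or the driver serializes them internally (as alleged for Maxwell here) is invisible at this layer. A minimal sketch, with command-list recording omitted:
Code:
// Minimal sketch of submitting graphics and compute work on independent
// D3D12 queues, joined only by a fence. The API expresses no ordering
// between the two submissions; concurrency is up to the driver/hardware.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitOverlapped(ID3D12Device* device,
                      ID3D12GraphicsCommandList* gfxList,      // DIRECT-type list
                      ID3D12GraphicsCommandList* computeList)  // COMPUTE-type list
{
    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;

    D3D12_COMMAND_QUEUE_DESC qd = {};
    qd.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&qd, IID_PPV_ARGS(&gfxQueue));
    qd.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&qd, IID_PPV_ARGS(&computeQueue));

    // Kick off both workloads with no ordering constraint between them.
    ID3D12CommandList* g[] = { gfxList };
    ID3D12CommandList* c[] = { computeList };
    gfxQueue->ExecuteCommandLists(1, g);
    computeQueue->ExecuteCommandLists(1, c);

    // The only sync the app asks for: graphics may not consume the compute
    // results until the compute queue has passed fence value 1.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    computeQueue->Signal(fence.Get(), 1);
    gfxQueue->Wait(fence.Get(), 1);
}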
post #540 of 772
Quote:
Originally Posted by FMJarnis View Post

Uh, we've actually asked the graphics card vendors about the idea of making vendor-specific code paths in 3DMark, and they have been against it. Such optimizations almost inevitably require subtly altering the rendering output to suit the strengths of each architecture. That's doable in a game, where 5 fps more is a fair trade for very minute differences in rendering quality, but it's not a great idea for a fair, impartial benchmark.

3DMark sends the exact same command lists to every card/driver and expects the exact same rendering output. How else can you fairly assess what each piece of hardware is capable of?

But but but... THEY ARE NOT THE SAME ARCHITECTURE, IT'S ALREADY NOT FAIR. How about including a test in the future with the different paths plus one single path, i.e. 3 complete paths, to show each architecture's strengths?

Because when you go generic, AMD hardware is underused. :) How is that fair?
Edited by Catscratch - 7/22/16 at 5:49pm