[Various] Futuremark's Time Spy DirectX 12 "Benchmark" Compromised. Less Compute/Parallelism than Doom/Aots. Also... - Page 8

post #71 of 253
Quote:
Originally Posted by Remij View Post

Anyway, where the heck were you guys back in AMD's DX11 days? Why weren't you making the same noise back then? Why weren't you complaining to AMD themselves the way you are about developers now? Did you just accept that AMD's architecture wasn't being utilized to its full potential through AMD's own fault?

I myself was on the green side.
post #72 of 253
Quote:
Originally Posted by Remij View Post

I agree fully... but it's never been that way, and I'm not expecting it to change overnight just because DX12/Vulkan are here now.

AMD has lots of work yet to do with developers.

Futuremark's site says they developed the benchmark with input from AMD, Nvidia, Intel, and Microsoft, among others. Now, I'm asking a question here because I really don't know: what kind of certification does Futuremark have to go through with AMD and Nvidia? Why would AMD not show them the best way to do things and program for it?

Also, I would love to hear AMD speak on their thoughts about the benchmark.

You may want to look at Futuremark's whole practice.

We all know AMD is bad at tessellation. And if you modify tess, your 3DMark/Fire Strike score is invalid; fair enough.

Now we know Nvidia is bad with async compute on, and they disable async in their driver. Their Time Spy score, however, is still valid, even for the async test. So what is happening here?

Let's look more closely at the code path for the "async" in this bench, which only exercises parallel compute. It is different from the async shaders used in DX12/Vulkan games, which focus more on running compute in parallel with graphics. This so-called "async" in Time Spy is tailored to Nvidia's software scheduler, to make sure that even Maxwell can look good in this bench. It bears almost no relation to their performance in DX12/Vulkan games, period.
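For context, a minimal sketch of what "async compute" means at the D3D12 API level (my illustration, not Futuremark's code): work only becomes asynchronous when command lists are submitted to a separate compute queue alongside the graphics queue, and whether the two actually overlap is left entirely to the driver and hardware.

```cpp
// Minimal sketch, assuming 'device' is a valid ID3D12Device*.
// "Async compute" in D3D12 just means feeding a COMPUTE-type queue
// in parallel with the DIRECT (graphics) queue; GCN overlaps the two
// with its hardware schedulers, while Maxwell relies on the driver.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    // DIRECT queue: accepts graphics, compute, and copy command lists.
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    // COMPUTE queue: compute/copy only. Submitting work here merely
    // *permits* overlap with the graphics queue; it does not force it.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}
```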

So is it still fair? I say NO.
Edited by blue1512 - 7/18/16 at 4:38pm
post #73 of 253
Quote:
Originally Posted by kaosstar View Post

It makes sense for a benchmark to use technologies which benefit both cards equally. That's objectivity. Sure, they could implement an AMD-specific technology like parallel async compute, but that would explicitly benefit AMD cards. If they do that, they might as well massively tessellate every surface in the benchmark and default to 64x tessellation, to benefit Nvidia.

It's difficult (impossible?) to make a benchmark that's fair to everyone, but you need to try to use technologies that are "standard" across most hardware and don't clearly favor one vendor over another.

It's funny you mention tessellation, because that's exactly what this reminds me of. Why shouldn't the benchmark have a selectable level of tessellation (e.g. 2x to 64x) and a selectable level of async compute? Why is async capped at a lower limit, with the upside never explored?
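For illustration, a purely hypothetical settings block (Time Spy exposes no such knobs; the names are mine) showing what "selectable on both axes" would amount to:

```cpp
#include <algorithm>

// Hypothetical benchmark options; illustration only.
struct BenchmarkSettings {
    int  tessellationFactor = 16;   // user-selectable, e.g. 2..64
    bool asyncCompute       = true; // false = single-queue submission

    void Clamp() {
        // D3D12 hardware tessellation tops out at a factor of 64,
        // which is why 64x is the natural upper end of the slider.
        tessellationFactor = std::clamp(tessellationFactor, 2, 64);
    }
};
```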
post #74 of 253
I believe this image contains all the info needed to confirm Time Spy is NOT a valid DX12 benchmark.

post #75 of 253
http://forums.anandtech.com/showpost.php?p=38362194&postcount=46

"Intel, AMD and NVIDIA are all part of Benchmark Development Program. They have source code read access and they can suggest changes and give feedback (with the feedback public within BDP, so any changes they suggest have to be accepted by the other vendors as well while Futuremark retains final say as to what goes into the benchmark)."
post #76 of 253
It's a benchmark, and it is almost irrelevant to the gaming community. Just relax.

Also, the idea that Nvidia paid for the benchmark to be this way is a big joke.
post #77 of 253
Made an account specifically to give my two cents on the situation; since everyone else is, I may as well too.

Firstly, a benchmark is supposed to determine (roughly) how a certain GPU compares to everything else out there. People seem to forget the number one rule of synthetic benchmarks like Futuremark's Time Spy: the performance you get is not necessarily representative of the performance you get in real-world gaming scenarios. I use 3DMark benchmarks for the sole purpose of overclocking stability testing, and that's about it; I usually ignore the scores anyway.

Secondly, we must ask the question: is a "benchmark" technically "compromised/gimped/inaccurate" if it under-utilizes a certain GPU/architecture/brand while another is fully utilized? You can view this a few different ways. In one view, it is compromised, as it makes AMD hardware perform worse than it could or should, while Nvidia's hardware doesn't suffer or get seen as inferior. But another view (one most people miss, or whose importance they understate) is the developers'. A developer must take the time and resources needed to implement async compute capabilities in their game engine. It's not something that comes by default with a simple port to DX12 (think of when DX11 first came out: most game engines were ported from DX10 and left most DX11 capabilities unused, e.g. Battlefield: Bad Company 2). Is using async compute worth the time and energy? It could be an easy thing to implement; I don't know, I'm not a hardware/software genius and it is definitely beyond me. But what I do know is that not every developer will choose to use this feature, whether because of time, or because why use something that gives one side of your player base a performance increase while the other gets little to no boost, or potentially even worse performance?

My thinking: Futuremark made a decision to take async (mostly) out of the equation due to inconsistency in async implementations. If you're going to use it as a "benchmark", why have it use something that may or may not be in every DX12 game? So much for a benchmark when it shows higher fps on an AMD GPU while a DX12 game shows higher fps on an Nvidia GPU; that doesn't make a whole lot of sense. I'm not saying what they did was right, but I don't mind, since I don't actually use it as a benchmark and never did; I use the games I play as benchmarks, not a synthetic one.

Also, as a side note, Nvidia's implementation of async seems to be less effective than AMD's implementation, so that's on Nvidia's engineers, not on the developers or Microsoft.

Conclusion: use the games you play as benchmarks instead of this, as async effectiveness will vary between DX12 games, and that is a fact.
post #78 of 253
Quote:
Originally Posted by Remij View Post

But you guys thinking that every developer will spend the time and resources making two completely different code paths to take full advantage of each architecture are simply KIDDING YOURSELVES. If a developer can design a single code path that works well enough on both architectures... they're going to use it.

Both AMD and Nvidia agree that if you aren't going to write a path for each vendor, you should just use DX11, because otherwise you aren't optimizing, and they can't optimize for you either, since in DX12 the driver no longer controls the rendering path to the same degree.
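For what it's worth, a minimal sketch (my illustration, not anyone's shipping engine code) of what "a path per vendor" looks like in practice: query the adapter's PCI vendor ID through DXGI and branch to a tuned renderer.

```cpp
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical render-path names; the vendor IDs are real PCI IDs.
enum class RenderPath { GenericDX11, AmdDX12, NvidiaDX12 };

RenderPath PickRenderPath()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter); // primary adapter

    DXGI_ADAPTER_DESC1 desc;
    adapter->GetDesc1(&desc);

    switch (desc.VendorId) {
        case 0x1002: return RenderPath::AmdDX12;    // AMD
        case 0x10DE: return RenderPath::NvidiaDX12; // NVIDIA
        default:     return RenderPath::GenericDX11;
    }
}
```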

post #79 of 253
Quote:
Originally Posted by TheMaxXHD View Post

Made an account specifically to give my two cents on the situation; since everyone else is, I may as well too.

[...]

Conclusion: use the games you play as benchmarks instead of this, as async effectiveness will vary between DX12 games, and that is a fact.

Then it should be open source. It's not useful to anyone like this; there are already GPU-heavy games to test stability with. 3DMark could be so much more.
post #80 of 253
Quote:
Originally Posted by blue1512 View Post

You may want to look at Futuremark's whole practice.

We all know AMD is bad at tessellation. And if you modify tess, your 3DMark/Fire Strike score is invalid; fair enough.

Actually, you are allowed to tweak/modify tess, and it is still a valid score (on a site where the scores actually mean something):

*Under Allowed optimisations

http://hwbot.org/news/9039_application_52_rules/
http://hwbot.org/news/9664_application_58_rules/
http://hwbot.org/news/11440_application_138_rules/