
[Various] Futuremark's Time Spy DirectX 12 "Benchmark" Compromised. Less Compute/Parallelism than Doom/Aots. Also... - Page 7

post #61 of 253
Quote:
Originally Posted by infranoia View Post

You are not describing a benchmark.

Sure, games will go either way depending on who's in bed with whom. But a benchmark is supposed to be above all that, and yes, it is supposed to inform a customer's purchase. It's a benchmark. Either it's completely fair, or it should be removed from consideration.

You know, it's weird to me. I agree with you, except that what I see is a fairly well-balanced benchmark. AMD cards see gains with async compute, and Nvidia's drivers disable async on Maxwell GPUs and run the same work serially instead, so that they aren't losing performance.

They're doing the same amount of work to render the scene; Maxwell just has to do it serially and thus doesn't see any gains. But that's not what people care about. You want Maxwell cards to be gimped, to show how they stall when trying to run async, and you want that represented in the score, right?
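
For anyone wondering what "the same work, serially vs. in parallel" looks like at the API level, here is a minimal D3D12 sketch. It is not Futuremark's code; the queue and command-list names and the useAsyncCompute flag are hypothetical stand-ins for whatever the application or driver decides.

Code:
#include <d3d12.h>

// Assumed: graphicsQueue is a DIRECT queue, computeQueue is a COMPUTE queue, and the
// per-frame work has been recorded into the two command lists. When the serial path is
// taken, the same Dispatch() calls are assumed to have been recorded at the tail of the
// DIRECT list instead (D3D12 requires a command list's type to match its queue).
void SubmitFrame(ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12GraphicsCommandList* graphicsList,
                 ID3D12GraphicsCommandList* computeList,
                 bool useAsyncCompute)
{
    if (useAsyncCompute)
    {
        // Parallel path: graphics and compute execute on separate queues and may
        // overlap, which is where GCN's async compute engines find extra throughput.
        ID3D12CommandList* g[] = { graphicsList };
        ID3D12CommandList* c[] = { computeList };
        graphicsQueue->ExecuteCommandLists(1, g);
        computeQueue->ExecuteCommandLists(1, c);
    }
    else
    {
        // Serial path: identical total work, issued back-to-back on the graphics queue,
        // so nothing overlaps. No gain, but also no stall; roughly what the post
        // describes the driver doing for Maxwell.
        (void)computeList;  // unused on this path
        ID3D12CommandList* g[] = { graphicsList };
        graphicsQueue->ExecuteCommandLists(1, g);
    }
}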

However, in my opinion, that would be just as unfair a representation of how those GPUs can run that scene as disabling async on AMD GPUs would be. Bias would be not allowing async compute to be used at all, on either vendor's cards.

Again, this goes back to people thinking DX12 = async compute. I agree that AMD's GPUs should be taken full advantage of as much as possible, but it's not as simple as async = on or async = off.

My whole point was that even when parts of the rendering use async compute and it is being utilized to some degree, people won't be satisfied until they see proof that AMD's architecture is pumping through as much as possible at all times.

It's gonna take time for devs to adopt it fully. Stop the witch-hunting... be happy when things implement it and you see improvements where you can. Take pride in the games AMD really gets involved in, where you see huge gains and what can be done. Stop focusing on the negative when other developers can't or don't show the same amount of improvement.

And finally... people SERIOUSLY have to stop comparing gains from one game or engine to what they think they should be seeing in another. I'm sure there's no need to explain why this is completely ridiculous. tongue.gif
post #62 of 253
Quote:
Originally Posted by xboxshqip View Post

LOL, but here's the funny part: why should AMD emulate something it has hardware support for?
As the OP said, AMD can do both, parallel and switch; Nvidia can only switch.

So who does this benchmark favor?

It makes sense in a benchmark to use technologies that benefit both cards equally. That's objectivity. Sure, they could implement an AMD-specific technology like parallel async compute, but that would explicitly benefit AMD cards. If they did that, they might as well massively tessellate every surface in the benchmark and default to 64x tessellation to benefit Nvidia.

It's difficult (impossible?) to make a benchmark that's fair to everyone, but you need to try to use technologies that are "standard" across most of the hardware and that don't clearly favor one vendor over another.
post #63 of 253
Quote:
Originally Posted by criminal View Post

So what's the solution? Relabel the benchmark as DX11? Pull it and re-release it with more async? I don't care either way, as it is just for fun anyway. Might be kinda cool to see a true DX12 benchmark even if it ran like a dog on Nvidia cards. biggrin.gif
Don't start. They should edit their page, which claims that it uses asynchronous compute; that is clearly misleading:

http://www.futuremark.com/pressreleases/introducing-3dmark-time-spy-directx-12-benchmark-test

Clearly this can be used as a "Nvidia can do asynchronous compute" marketing tool.
Quote:
Originally Posted by HMBR View Post

The problem, I think, is that 3DMark should not really try to go either way; it should just follow what games are trying to do... time will tell.
Make a different path for each vendor to keep it unbiased?
Edited by PontiacGTX - 7/18/16 at 4:02pm
post #64 of 253
Quote:
Originally Posted by Remij View Post

You know, it's weird to me. I agree with you, except that what I see is a fairly well-balanced benchmark. AMD cards see gains with async compute, and Nvidia's drivers disable async on Maxwell GPUs and run the same work serially instead, so that they aren't losing performance.

They're doing the same amount of work to render the scene; Maxwell just has to do it serially and thus doesn't see any gains. But that's not what people care about. You want Maxwell cards to be gimped, to show how they stall when trying to run async, and you want that represented in the score, right?

However, in my opinion, that would be just as unfair a representation of how those GPUs can run that scene as disabling async on AMD GPUs would be. Bias would be not allowing async compute to be used at all, on either vendor's cards.

Again, this goes back to people thinking DX12 = async compute. I agree that AMD's GPUs should be taken full advantage of as much as possible, but it's not as simple as async = on or async = off.

My whole point was that even when parts of the rendering use async compute and it is being utilized to some degree, people won't be satisfied until they see proof that AMD's architecture is pumping through as much as possible at all times.

It's gonna take time for devs to adopt it fully. Stop the witch-hunting... be happy when things implement it and you see improvements where you can. Take pride in the games AMD really gets involved in, where you see huge gains and what can be done. Stop focusing on the negative when other developers can't or don't show the same amount of improvement.

And finally... people SERIOUSLY have to stop comparing gains from one game or engine to what they think they should be seeing in another. I'm sure there's no need to explain why this is completely ridiculous. tongue.gif

No, no-- I want NOTHING to be gimped! Nobody gets gimped! I'm an engineer, and I'm probably just a bit OCD. I want any piece of silicon, regardless of vendor, to be used at its maximum capacity. That's not what 3dmark is doing though, is it?

Every single GPU needs its own render path. That's how DX12 is intended to work, by Microsoft's own direction and guidance on the issue.

At the same time, both Microsoft and the IHVs state that if you don't want to do architecture-specific optimized render paths, then you probably shouldn't be using DX12 at all (and that includes Futuremark!).
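
To make "its own render path" concrete, here is a bare-bones sketch of vendor dispatch via DXGI. The RenderPath enum, the function name, and what each path would actually change (queue usage, barriers, shader variants) are illustrative assumptions, not anything Futuremark or Microsoft prescribe; only the PCI vendor IDs are real.

Code:
#include <dxgi.h>

// Hypothetical per-vendor render paths an engine or benchmark could select between.
enum class RenderPath { Generic, NvidiaTuned, AmdTuned, IntelTuned };

RenderPath PickRenderPath(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    // PCI vendor IDs: 0x10DE NVIDIA, 0x1002 AMD, 0x8086 Intel.
    switch (desc.VendorId)
    {
        case 0x10DE: return RenderPath::NvidiaTuned;  // e.g. keep work on one queue
        case 0x1002: return RenderPath::AmdTuned;     // e.g. lean on async compute
        case 0x8086: return RenderPath::IntelTuned;
        default:     return RenderPath::Generic;
    }
}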
Edited by infranoia - 7/18/16 at 4:11pm
post #65 of 253
Quote:
Originally Posted by xboxshqip View Post

Here Quote "Does DOOM support asynchronous compute when running on the Vulkan API?

Asynchronous compute is a feature that provides additional performance gains on top of the baseline id Tech 6 Vulkan feature set.

Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run. We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon."
https://community.bethesda.net/thread/54585?tstart=0
Let's hope so. More power for everyone.
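
For context, "enabling asynchronous compute" on the application side in Vulkan usually starts with checking whether the device exposes a compute-capable queue family separate from the graphics family. The sketch below is generic, not id Software's code, and the function name is illustrative.

Code:
#include <vulkan/vulkan.h>
#include <vector>

bool HasDedicatedComputeQueue(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);

    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (const VkQueueFamilyProperties& f : families)
    {
        // A family that supports compute but not graphics can run work that
        // overlaps with the graphics queue.
        if ((f.queueFlags & VK_QUEUE_COMPUTE_BIT) &&
            !(f.queueFlags & VK_QUEUE_GRAPHICS_BIT))
        {
            return true;
        }
    }
    return false;  // Fall back to issuing compute on the graphics queue.
}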
post #66 of 253
Quote:
Originally Posted by xboxshqip View Post

LOL, but here's the funny part: why should AMD emulate something it has hardware support for?
As the OP said, AMD can do both, parallel and switch; Nvidia can only switch.

So who does this benchmark favor?

I wasn't particularly talking about the benchmark lol, but I would say this: if it were programmed around AMD's method, it would have been accused of favoring AMD's GPUs.
It is a difficult matter, because with non-vendor-specific code AMD will suffer against Nvidia most of the time, since AMD's architecture requires a higher amount of optimization to increase its utilization.
The way DX12 is designed, it is almost impossible to make a benchmark without vendor-specific optimization that isn't unfair to one GPU vendor or the other, because each needs its own code path. I think this 3DMark is just a start. We might soon need a separate benchmark suite for each GPU, and the only comparison would be with games.
post #67 of 253
Actually, it's the NV hardware that requires special, illogical handling, but since it's been pushed on everyone so hard through DX9-11, devs are used to those optimizations and ways of building engines: NV-supported AAA games, NV personnel helping them out, etc.
On the other hand, those developing for consoles deal with different architectures and don't have to jump through those NV hoops to get decent performance.
post #68 of 253
Quote:
Originally Posted by Kpjoslee View Post

I wasn't particularly talking about the benchmark lol, but I would say this: if it were programmed around AMD's method, it would have been accused of favoring AMD's GPUs.
It is a difficult matter, because with non-vendor-specific code AMD will suffer against Nvidia most of the time, since AMD's architecture requires a higher amount of optimization to increase its utilization.
The way DX12 is designed, it is almost impossible to make a benchmark without vendor-specific optimization that isn't unfair to one GPU vendor or the other, because each needs its own code path. I think this 3DMark is just a start. We might soon need a separate benchmark suite for each GPU, and the only comparison would be with games.

It's gonna be a better place when software vendors acknowledge the different manufacturers. 3DMark is in a special position: if they wanted to, they have THE TIME, and they could have a path for each architecture, because it's just a few minutes of rendering work. They could show developers what each architecture can do in its own way, and they could still have custom options for cross-platform features like heavy tessellation or async shaders. They could really, really be useful, but they chose to be just generic.
post #69 of 253
Quote:
Originally Posted by Remij View Post

Anyway, where the heck were you guys back in AMD's DX11 days? Why weren't you making the same noise back then? Why weren't you complaining to AMD themselves the same way you are about developers now? Did you just accept that AMD's architecture wasn't being utilized to its full potential through AMD's own fault?

Admiring Mantle and hoping that mainstream serially-scheduled APIs would go in that direction. Brute force is well and good, and Nvidia absolutely mastered it, but massive, generational graphics advancement requires massively parallel systems and an API that can feed them.

/1,000 biggrin.gif
post #70 of 253
Quote:
Originally Posted by infranoia View Post

No, no-- I want NOTHING to be gimped! Nobody gets gimped! I'm an engineer, and I'm probably just a bit OCD. I want any piece of silicon, regardless of vendor, to be used at its maximum capacity. That's not what 3dmark is doing though, is it?

Every single GPU needs its own render path. That's how DX12 is intended to work, by Microsoft's own direction and guidance on the issue.

I agree fully... but it's never been that way, and I'm not expecting it to change overnight just because DX12/Vulkan are here now.

AMD has lots of work yet to do with developers.

Futuremark's site says they developed the benchmark with input from AMD, Nvidia, Intel and Microsoft, among others. Now, I'm asking a question here because I really don't know: what kind of certification does Futuremark have to go through with AMD and Nvidia? Why would AMD not show them the best way to do things and program for it?

Also, I would love to hear AMD's own thoughts on the benchmark.