
[Various] Futuremark Releases 3DMark Time Spy DirectX 12 Benchmark - Page 69

post #681 of 772
Quote:
Originally Posted by Mahigan View Post

Because AMD and NVIDIA (as well as Microsoft) called for games to have multiple execution paths (IHV-specific paths) at the last GDC.

If 3DMark does not incorporate IHV-specific paths, then what is the use of 3DMark? If 3DMark does not mirror what games are doing, then what is it good for? What can it tell us about the hardware and how it will behave in DX12 titles?

You are siding with the corporations... and while many gamers may be red or green team... at the end of the day what they care about is getting to play their games without being shafted due to corporate agreements between some big game studio and a specific IHV.

If you look at the damage GameWorks has done to consumers (Project CARS comes to mind), then I guess you can see why many folks are reluctant to accept this sort of behavior.

As for me *knowing better*... are we not the consumers? Are we not the ones purchasing these products? Are we not the ones who were shafted by the GTX 970 being sold as a 4GB card when only 3.5GB of its memory ran at full speed, or the ones who bought an RX 480 thinking it was a 150W card only to find that it consumed considerably more?

Who buys 3DMark? AMD and nVIDIA? or Gamers?

I honestly don't see the problem, as both AMD and Nvidia don't seem to have any problem with it. It probably means they don't see a problem with how their GPUs are being represented by those scores. If AMD or Nvidia don't have any problem with it, why should it be a problem for us? They are the ones who are going to be either gaining or losing from this, not us.
post #682 of 772
From a PR standpoint I understand why they made one code path and why AMD and nVidia OK'd it. But a benchmark with different code paths would be more useful. Now it's not that different from other games. It's biased in its own way and won't tell us the best possible performance you can get out of each GPU.
post #683 of 772
Quote:
Originally Posted by Potatolisk View Post

From a PR standpoint I understand why they made one code path and why AMD and nVidia OK'd it. But a benchmark with different code paths would be more useful. Now it's not that different from other games. It's biased in its own way and won't tell us the best possible performance you can get out of each GPU.
But isn't the benchmark, and more importantly a benchmark tailored to DX12, supposed to showcase the strengths of DX12 instead of the cards?
What is the point of making certain cards look better instead of actually pushing the boundaries?
post #684 of 772
Quote:
Originally Posted by Exilon View Post

The drama comes from a camp of people very upset that FM didn't use intrinsic shaders for AMD, or that they didn't just run AOTS in a wrapper. There's also another group that's confusing Doom's gains from async compute with gains from intrinsic shaders, but that's a whole 'nother story.

Wrong
post #685 of 772
Quote:
Originally Posted by FMJarnis View Post

So why do both AMD and NVIDIA specifically ask us (Futuremark) to *not* do vendor-specific paths, as it would make 3DMark less useful to them?

They don't seem to be calling for a synthetic benchmark that has multiple execution paths, but I assume you know better.
They asked you to do that with DX12, an API they ask game developers to use separate code paths for? Or are you reading from the DX11 agreement?
post #686 of 772
Quote:
Originally Posted by PlugSeven View Post

They asked you to do that with DX12, an API they ask game developers to use separate code paths for?

Yes. This was discussed during Time Spy development.

I guess at this point you just outright refuse to believe our detailed and fully open statements on how Time Spy works and how it was developed with the Benchmark Development Program members. That is, I guess, your right. At this point I'm not sure if there is anything I can do to convince you. Maybe you should ask the vendors themselves?
post #687 of 772
Quote:
Originally Posted by Kpjoslee View Post

I honestly don't see the problem, as both AMD and Nvidia don't seem to have any problem with it. It probably means they don't see a problem with how their GPUs are being represented by those scores. If AMD or Nvidia don't have any problem with it, why should it be a problem for us? They are the ones who are going to be either gaining or losing from this, not us.
Nvidia certainly won't mind this at all, and AMD usually doesn't air this kind of laundry, or just doesn't give a hoot.
post #688 of 772
Who gives a darn about Futuremark's useless DX11-wannabe DX12 implementation...
And who cares about fair this or fair that if the PC gaming industry is being held back because someone's hardware isn't capable? If it's a hardware issue, I will gladly throw out my hardware if it means we can finally move on to a better PC gaming environment where we are not tied to DX11 limitations and relying on the same old proprietary APIs that, quite frankly, took the fun out of PC gaming for me anyway... lol
post #689 of 772
Quote:
In all Futuremark benchmarks we aim for neutrality by ensuring that all hardware is treated equally. Every device runs the same workload using the same code path. This is the only way to produce results that are fair and comparable.
Fail right there, and not just with low-level APIs.

Sure, it allows you to compare how different HW runs one single code path to render a fixed scene.
But it would be better if the benchmark did this: compare how effectively/fast different HW can render a fixed scene. That means differing code paths for differing hardware, pushing each to its limit and taking advantage of its capabilities to achieve an equal visual result. It also means avoiding each architecture's disadvantages, the same way devs do all the time with Nvidia HW.
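For illustration only: picking a per-vendor path is usually just a branch on the DXGI adapter's PCI vendor ID at startup. A minimal sketch, where the PickRenderPath() helper and the path names are hypothetical, not anything Futuremark or the engines actually ship:

Code:
#include <dxgi1_4.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

enum class RenderPath { Generic, AmdOptimized, NvidiaOptimized };

// Hypothetical helper: choose a code path from the primary adapter's
// PCI vendor ID (0x1002 = AMD, 0x10DE = NVIDIA). Link against dxgi.lib.
RenderPath PickRenderPath()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return RenderPath::Generic;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))
        return RenderPath::Generic;

    DXGI_ADAPTER_DESC1 desc;
    adapter->GetDesc1(&desc);

    switch (desc.VendorId) {
        case 0x1002: return RenderPath::AmdOptimized;    // e.g. vendor intrinsics
        case 0x10DE: return RenderPath::NvidiaOptimized; // e.g. NV-tuned shaders
        default:     return RenderPath::Generic;
    }
}

Futuremark's counter-argument, quoted below, is that once you branch like this the paths no longer run an identical workload, so the scores stop being directly comparable.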
Quote:
In the past, we have discussed the option of vendor-specific code paths with our development partners, but they are invariably against it. In many cases, an aggressive optimization path would also require altering the work being done, which means the test would no longer provide a common reference point. And with separate paths for each architecture, not only would the outputs not be comparable, but the paths would be obsolete with every new architecture launch.
Yadadada. And how often is there an architecture change? Every 5+ years? Nvidia is just adding tweaks to what they have, and so is AMD with GCN now. There is a common reference point: it's called a fixed scene, where the visuals on screen are as equal as possible. It doesn't matter how some on-screen effect is achieved as long as it is there and it is of the same visual quality.
Quote:
Ultimately, 3DMark aims to predict the performance of games in general. To accomplish this, it needs to be able to predict games that are heavily optimized for one vendor, both vendors, and games that are fairly agnostic. 3DMark is not intended to be a measure of the absolute theoretical maximum performance of hardware.
OK, I get it: they have a different business target for their benchmark than what I would personally like to see. They are trying to emulate game benchmarks, which is moot, since you can just batch-run the benchmarks of current games instead, on the real games you will PLAY.
Whereas consumers often use benchmarks to compare hardware and see which has better capabilities, and that is something 3DMark never shows properly since it doesn't push any HW to its max.
Quote:
In Time Spy, asynchronous compute is used heavily to overlap rendering passes to maximize GPU utilization. The asynchronous compute workload per frame varies between 10-20%.
To observe the benefit on your own hardware, you can optionally choose to disable async compute using the Custom run settings in 3DMark Advanced and Professional Editions.
In their benchmark info they say 10-20% async work, which seems low to me. Why have a GPU that is built to do work in parallel when barely anything is done in parallel?

And with async disabled they submit everything on one queue, which with DX12 suddenly puts the pressure on the driver to decide whether to do work in parallel. I guess it could be forced at the driver level by splitting and reordering the work that gets submitted... kind of a "hey developer, you messed up, let me fix that for ya", the same way drivers had to in DX11.
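For context, a rough sketch of what "async compute on" means at the API level in D3D12: the application creates a separate compute queue next to the direct (graphics) queue and submits compute work there, and whether the GPU actually overlaps the two is then up to the hardware and driver. This is a minimal illustration, not Time Spy's actual code:

Code:
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create a direct (graphics) queue plus a separate compute queue.
// Command lists submitted to computeQueue may run concurrently with
// rendering on directQueue; with "async off" the same compute work
// would simply be recorded into the single direct queue instead.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& directQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC directDesc = {};
    directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&directDesc, IID_PPV_ARGS(&directQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}

How much of that second queue's work actually runs concurrently is exactly what the 10-20% figure above is describing.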


Yeah, well, their target is making money, not pushing each piece of hardware to its max. Kind of a pointless benchmark as always, unfortunately.
Stunning visual demo, cool to look at, and that's about it. But then you can just run demoscene demos.
Edited by JackCY - 7/20/16 at 5:33am
post #690 of 772
Quote:
Originally Posted by Kpjoslee View Post

I honestly don't see the problem, as both AMD and Nvidia don't seem to have any problem with it. It probably means they don't see a problem with how their GPUs are being represented by those scores. If AMD or Nvidia don't have any problem with it, why should it be a problem for us? They are the ones who are going to be either gaining or losing from this, not us.

The cards that benefit most from async on AMD's side are the ones with the least optimized pipeline, that is, the Fury cards. Problem is, AMD does not really care much about the Fury cards at this point; they are not selling many of them and they are not a priority. With the RX 480 I believe the card is already being fully utilized and has less need for the ACE units. Unless Nvidia brings async themselves, AMD is going to step it down with Vega.