[Various] Futuremark Releases 3DMark Time Spy DirectX 12 Benchmark - Page 70

post #691 of 772
Quote:
Originally Posted by ZealotKi11er View Post

The cards that benefit most from async on AMD's side are the ones with the least optimized pipeline, i.e. the Fury cards. The problem is that AMD does not really care much about the Fury cards at this point; they are not selling many of them and they are not a priority. With the RX 480 I believe the card is being fully utilized and has less need for the ACE units. Unless Nvidia brings async into play themselves, AMD is going to scale it back with Vega.

The point is whether this useless benchmark is showcasing advanced DX12 features that push the GPUs, not clinging to the past. I didn't have an AMD card (in use, anyway) when AotS came out, but I wasn't upset about it, as long as AotS was providing a real benefit by using the GPU's full capabilities, including compute, and not making up demanding proprietary driver-level API features for the fun of it. Heck, my current Nvidia cards still outnumber my AMD cards 8:2 (including mobile GPUs), but if there is a way to utilize more of the GPU's idling resources, that's where the industry should be headed, not the other way around... lol
post #692 of 772
Quote:
Originally Posted by provost View Post

The point is whether this useless benchmark is showcasing advanced DX12 features that push the GPUs, not clinging to the past. I didn't have an AMD card (in use, anyway) when AotS came out, but I wasn't upset about it, as long as AotS was providing a real benefit by using the GPU's full capabilities, including compute, and not making up demanding proprietary driver-level API features for the fun of it. Heck, my current Nvidia cards still outnumber my AMD cards 8:2 (including mobile GPUs), but if there is a way to utilize more of the GPU's idling resources, that's where the industry should be headed, not the other way around... lol

That's not practical. If you wanted the benchmark to run the most advanced DX12 features, you would want it running at DX12 feature level 12_1, which at this time means not a single released AMD card could run it (they are 12_0), nor could Nvidia's Kepler cards and below. Every currently released DX12 game runs at feature level 11_0, and I would say that makes this a much fairer benchmark, since it matches every other released DX12 game in that regard.
post #693 of 772
All this bickering over Time Spy, what it is vs. what people think it should be, is moot.

What it is... a synthetic that simulates the demands of theoretical DX12 game code.

What it is not... an unbiased tool for measuring maximum hardware capability under DX12 on a vendor-by-vendor level.

The reality is that nearly every game will have some level of preferential code bias. I'm OK with Time Spy being what it is: a simulation of a DX12 game. It does, in the truest sense of the word, use all the required DX12 code to simulate one. Is it unbiased? Well, it uses all the required DX12 standards, leaving none out that would hinder either brand. Is it optimized better for one set of hardware? It looks that way, but almost all code written with a single path will be optimized for only one path, with vendor-specific paths possibly supplemented later.

You can't have wildly different approaches to architecture without requiring different paths. That's a fact, one we have seen with DX11's huge performance gap that essentially disappears in DX12.

AMD's issue was that DX11 was not capable of utilizing its tech to the fullest, and it showed in benchmarks time and again. DX12 is capable of doing so, and it shows when code optimized for AMD performs essentially equal. The burden is now on developers. If they choose to hamstring their code for AMD, or for NV for that matter (as is possible too), consumers should point their ire at them.

As for Time Spy, it's clearly not optimized to leverage the absolute most out of AMD or NV hardware. NV had the lead going in; they see slight gains and generally keep the lead. AMD was working at a huge deficit; they see huge gains and nearly catch up. That makes sense to me, and it should to the rest of informed consumers too. Could you make Time Spy run better on both vendors' hardware with fully optimized paths? Absolutely. That's what many people want, but that is not what this claims to be... yet. More DX12 benches are coming. Wait and see.
Edited by gapottberg - 7/20/16 at 8:51am
post #694 of 772
Quote:
Originally Posted by JackCY View Post

In their benchmark info they say 10-20% async work, which seems low to me. Why have a GPU that is built to do work in parallel when barely anything is done in parallel? ;)

So running calls asynchronously now counts as doing work in parallel? Jeez, the cargo cult around DX12 is getting ridiculous.
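
To be clear about what the API actually promises: in D3D12 you create separate command queues, and it's the *submission* that is asynchronous. Whether the GPU executes the graphics and compute work concurrently is entirely up to the hardware scheduler. A rough sketch of the two-queue setup (illustrative only, error handling omitted; this is not Futuremark's code):

Code:
// Graphics ("direct") queue plus a dedicated compute queue in D3D12.
// Build on Windows with d3d12.lib linked.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Direct queue: accepts draw, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Compute queue: accepts compute and copy work only.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // ExecuteCommandLists() on either queue returns immediately -- that is the
    // "asynchronous" part. Overlapping execution (filling otherwise idle shaders
    // with compute while graphics is bottlenecked elsewhere) is a scheduling
    // decision made by the GPU/driver, not something the API can guarantee.
    return 0;
}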
post #695 of 772
Quote:
Originally Posted by spiderxtreme View Post

That's not practical, if you wanted the benchmark to run the most advanced dx12 features you would want it to be running dx12 feature level 12_1 which at this time means not a single released AMD card could run since its 12_0, as well as Nvidia Kepler and below. Every currently released dx 12 game is running at feature level 11, and I would say making this a much for fair benchmark since it matches every other released dx 12 game in that regard.
OK, let's cut the nonsense right now, because the whole 12_1 thing is getting on my nerves. It makes it seem like nVidia's hardware support for DX12 features is superior, but this is FALSE.

The FL12_1 support on nVidia is not really significant. There is no reason for the two features that make up FL12_1 not to be part of the standard FL12_0 spec, other than giving nVidia bragging rights about how their cards would be superior to AMD's under DX12. Some people even started to argue that nVidia supports "DX12.1" compared to AMD's DX12, even though there is no DX12.1. It was a marketing scheme and nothing more. I'm not saying AMD supports those two features; they don't. But FL12_1 makes it look like the nVidia cards are superior for DX12 when they really aren't, since a lot of the features in the standard FL12_0 are supported on nVidia, but at a lower tier than on AMD, actually making AMD's cards more future-proof.

Why am I saying it's a marketing scheme again? Because out of the whole list of features, combined with the different tiers, there are only two features, conservative rasterization and rasterizer-ordered views, that were required for that additional feature level. Yet AMD is a full tier higher in resource binding, a full tier higher in supporting UAVs across all stages, a full tier higher in resource heap, and supports stencil reference value from the pixel shader while nVidia's cards don't. In fact, when you look at Microsoft's WARP12 with 12_1 as the feature level, nVidia fails 5 of the 11 required features, while AMD fails only three of them. From that perspective, AMD is closer to FL12_1 than nVidia is. Moreover, conservative rasterization on nVidia hardware is actually tier 2, compared to the tier 3 required to fully comply with the FL12_1 spec, meaning they marketed FL12_1 support based on fully supporting only one item from the list: rasterizer-ordered views. Even worse, GCN 1.0 is closer to full FL12_1 support than Pascal: GCN 1.0 is missing 4 features for full FL12_1 support, while Pascal is missing 5.

And remember, AMD has supported FL11_1 since GCN 1.0, as in 2011. Do you know why Maxwell 1, released in 2014, was still FL11_0? Because it only had Tier 2 for the UAVs-across-all-stages feature. And guess what: Pascal STILL does not fully support FL11_1, and yet nVidia claims FL12_1 support. So yes, even GCN 1.0 can outshine Pascal due to the lower tier of nVidia's UAV support. nVidia is behind, and that's a fact.

That's without touching the new stuff specifically added with Polaris...
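
If anyone wants to see where their own card actually lands rather than going by marketing slides, D3D12 reports the supported feature level and the individual tiers through CheckFeatureSupport. A rough sketch (illustrative only, most error handling omitted):

Code:
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    // Highest feature level the adapter exposes (11_0, 11_1, 12_0 or 12_1).
    D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
                                   D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1 };
    D3D12_FEATURE_DATA_FEATURE_LEVELS fl = {};
    fl.NumFeatureLevels = sizeof(levels) / sizeof(levels[0]);
    fl.pFeatureLevelsRequested = levels;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &fl, sizeof(fl));

    // Individual capability tiers -- this is where the per-vendor differences
    // (binding tier, heap tier, ROVs, conservative raster, PS stencil ref) live.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    printf("Max feature level:          0x%X\n", (unsigned)fl.MaxSupportedFeatureLevel);
    printf("Resource binding tier:      %d\n", (int)opts.ResourceBindingTier);
    printf("Resource heap tier:         %d\n", (int)opts.ResourceHeapTier);
    printf("Conservative raster tier:   %d\n", (int)opts.ConservativeRasterizationTier);
    printf("Rasterizer-ordered views:   %d\n", (int)opts.ROVsSupported);
    printf("PS stencil ref supported:   %d\n", (int)opts.PSSpecifiedStencilRefSupported);
    return 0;
}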
post #696 of 772
Quote:
Originally Posted by NightAntilli View Post

OK, let's cut the nonsense right now, because the whole 12_1 thing is getting on my nerves. It makes it seem like nVidia's hardware support for DX12 features is superior, but this is FALSE.

I'm well aware of this, and that wasn't the point of my post. I was commenting on how people are upset that Time Spy is running with FL 11_0 and saying it is holding back progress with the benchmark, when in my opinion it makes perfect sense, since every DX12 game released thus far is running at that level as well.
post #697 of 772
Quote:
Originally Posted by ZealotKi11er View Post

The cards that benefit most from async on AMD's side are the ones with the least optimized pipeline, i.e. the Fury cards. The problem is that AMD does not really care much about the Fury cards at this point; they are not selling many of them and they are not a priority. With the RX 480 I believe the card is being fully utilized and has less need for the ACE units. Unless Nvidia brings async into play themselves, AMD is going to scale it back with Vega.

You keep bringing up async, but I think it is a false assumption that "proper" async will automatically give AMD a huge boost. The performance increase from turning on async for AMD GPUs in Time Spy pretty much falls in line with AotS and Doom, even though AotS and Doom are "supposedly" running a higher percentage of async compute.
post #698 of 772
Quote:
Originally Posted by Kpjoslee View Post

You keep bringing up async, but I think it is a false assumption that "proper" async will automatically give AMD a huge boost. The performance increase from turning on async for AMD GPUs in Time Spy pretty much falls in line with AotS and Doom, even though AotS and Doom are "supposedly" running a higher percentage of async compute.

In all fairness, there was an easy way to eliminate the need for that assumption, whether it be right or wrong, and show exactly what GCN is capable of.

I think there is ample evidence to suggest that it is not being used to its fullest potential. You can say that it's close enough not to make that big a difference, but that in itself is merely an assumption on your part.

The evidence will be in the data. There will be a benchmark in the future, whether Time Spy or otherwise, that shows GCN being used as optimally as possible. When that happens we will have real data to add to the very small amount we have now.
Edited by gapottberg - 7/20/16 at 1:59pm
post #699 of 772
Despite AMD receiving such huge boosts in Vulkan, I do wonder how optimized Doom really is for Vulkan. They made it for OpenGL first, after all... Async here nets great gains, but could they be even bigger if a game were built solely on Vulkan?

As for Time Spy, using FL11_0 is meh. It's using the DX12 API to re-use DX11 features with tacked-on low-level async. So... meh.
post #700 of 772
I'd imagine the ACE units are key if you want GPUs to be all-purpose compute adapters, if they do indeed balance the graphics and compute load across the available shaders.

Let's see if NV joins in on this strategy of making GPUs compute engines, or if they would rather continue to relegate the task to low-core-count CPUs (sub-10 cores seems to be the trend for mainstream CPUs) with comparatively low FLOPS performance.

This benchmark is a good example of all the different things that people want DX12 to be, without a clear vision of what it could be. At the end of the day we'll have to wait and see what actual game devs do with the general-purpose compute concept found in DX12 (edit: also found in HSA/AMD APUs), and I vaguely expect that NV will at least provide some rudimentary hardware support for such load balancing (of requests sent from the compute and graphics queues) in Volta. But who knows!

edit: And yeah, this isn't really about some boost for AMD at all. It's about whether there is a reason to let game devs use the GPU for all things compute, utilizing potentially a large share of the GPU's shaders instead of putting that work on the CPU, and to build hardware around it. If it's a good idea, then you'd probably want something like AMD's ACEs, so game devs just need to load up the graphics and compute queues to tap into the compute capabilities of the card. The card needs to know by itself how to balance the graphics and compute load across its shaders, preferably without having to rely on a driver running on the CPU. But yeah, maybe NV doesn't bite, in favor of its proprietary CUDA business. I'm pretty sure, though, that if NV does end up offering actual hardware support for async compute, it would change a fair bit with regard to how games are written. Do they want that? Can they stop it? Would they mind if it happened even if they don't exactly want it? Nobody knows.
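
For reference, at the API level the "load up the graphics and compute queues" part really is just two queues and a fence; everything below that is the hardware's problem. A rough sketch of the synchronization (illustrative only; the function name and structure are made up for the example):

Code:
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Make the graphics queue wait -- on the GPU, not the CPU -- until the async
// compute work submitted to the compute queue has finished. How the two streams
// of work share the shader cores in the meantime is up to the GPU.
void SyncGraphicsToCompute(ID3D12Device* device,
                           ID3D12CommandQueue* gfxQueue,
                           ID3D12CommandQueue* computeQueue,
                           UINT64 fenceValue)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // ... computeQueue->ExecuteCommandLists(...) for the async compute pass ...
    computeQueue->Signal(fence.Get(), fenceValue);  // GPU signals when compute is done

    // Earlier graphics work keeps running; only commands submitted after this
    // Wait are held back until the fence reaches fenceValue.
    gfxQueue->Wait(fence.Get(), fenceValue);
    // ... gfxQueue->ExecuteCommandLists(...) for passes that read the compute output ...
}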
Edited by Tivan - 7/20/16 at 2:55pm