
[WCCF] HITMAN To Feature Best Implementation Of DX12 Async Compute Yet, Says AMD - Page 76

post #751 of 799
Quote:
Originally Posted by caswow View Post

In the Beta 2 build that was tested today you can turn async on and off.

Kepler and Maxwell just can't handle context switching. No async for Kepler/Maxwell.

If that's the case, why isn't there a performance penalty for Nvidia when it is on? The Anandtech test showed negative scaling but this one shows nothing.
post #752 of 799
Quote:
Originally Posted by Potatolisk View Post

I'm surprised at how little difference there is between DX11 and DX12 for the 980 Ti in the Ashes benchmark. But the game itself is a lot heavier on the CPU than the benchmark, and even NVIDIA cards would see a significant increase from the lower CPU overhead.

It's good to see the Fury X doing so well. That was expected once async compute was added, which is worth a significant gain (over 15%). But NVIDIA might get some boost too when they release their software emulation.

Asynchronous compute was turned off for NVIDIA hardware because NVIDIA hardware cannot run asynchronous compute + graphics concurrently, and the Ashes of the Singularity async path makes use of that feature. This can result in negative scaling.

NVIDIA have implemented their version of async compute in their latest drivers. The problem is that it is an NVIDIA-specific implementation which only benefits concurrent compute execution (no compute + graphics support).

Kollock mentioned that we'd have to talk to NVIDIA about their specific implementation.
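For anyone wondering what this looks like at the API level, here is a rough, hypothetical D3D12 sketch (my own illustration, not Oxide's engine code or anything from NVIDIA's driver): async compute in DX12 just means creating a compute queue alongside the direct (graphics) queue, and the argument above is about whether the hardware can actually overlap work from the two.
Code:
// Hypothetical D3D12 sketch: one direct (graphics) queue plus a separate
// compute queue. Whether work on the two actually overlaps is up to the
// hardware/driver, which is the crux of the debate above.
// (Error checking omitted for brevity.)
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    // The direct queue accepts graphics, compute and copy commands.
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    // The compute queue accepts compute and copy commands only; on hardware
    // that supports it, its work can run concurrently with the direct queue.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}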
Edited by Mahigan - 2/24/16 at 11:55am
post #753 of 799
If this translates over to Hitman:
Quote:
To take advantage of DirectX 12 native multi-GPU, you don't need any special drivers. Simply enable multi-GPU in the settings and the game will use the card with the monitor connected as primary and the next GPU to improve rendering performance.



Ashes of the Singularity DirectX 12 Mixed GPU Performance

Looks like I'm taking my 780 Ti back out of the closet.
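For reference, the "no special drivers" part of that quote is because DX12 explicit multi-adapter lets the game itself enumerate every capable GPU through DXGI and decide how to use them. A rough, hypothetical sketch of that enumeration step (not the actual Ashes or Hitman code):
Code:
// Hypothetical sketch of DX12 explicit multi-adapter discovery: list every
// adapter that can create a D3D12 device. The game would treat the adapter
// driving the monitor as primary and use the next one as a secondary renderer.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

std::vector<ComPtr<IDXGIAdapter1>> EnumerateDX12Adapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<IDXGIAdapter1>> adapters;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        // Passing nullptr just tests whether a D3D12 device could be created.
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        __uuidof(ID3D12Device), nullptr)))
            adapters.push_back(adapter);
    }
    return adapters;
}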
post #754 of 799
Quote:
Originally Posted by Forceman View Post

If that's the case, why isn't there a performance penalty for Nvidia when it is on? The Anandtech test showed negative scaling but this one shows nothing.

NVIDIA is probably sending it all through the graphics queue.
post #755 of 799
Isn't Ashes of the Singularity just a showcase for DX12 and its features? I actually haven't looked into it, so forgive me if I'm wrong, but it isn't a real game application, is it?
post #756 of 799
Quote:
Originally Posted by zealord View Post

Isn't Ashes of the Singularity just a showcase for DX12 and its features? I actually haven't looked into it, so forgive me if I'm wrong, but it isn't a real game application, is it?
It's a real game.
It's actually pretty fun too.
post #757 of 799
Quote:
Originally Posted by semitope View Post

NVIDIA is probably sending it all through the graphics queue.

They are, but within the graphics queue the order of execution is not defined (hence "asynchronous"). Compute commands can be executed concurrently there on NVIDIA hardware. This is because NVIDIA don't have dedicated engines for compute and for graphics; either type of work is processed within an SMM. In the SMM, both compute and graphics work share an L1 cache, and since they share that cache they're processed in sequence. It's the same reason a context switch hurts NVIDIA performance: a cache flush is required to switch between compute and graphics within an SMM.

AMD, on the other hand, has a higher level of hardware redundancy (and higher power usage as a result). AMD can split the work across three separate queues which can execute concurrently and in parallel, and fences can be used to enforce sync points between queues. The CUs are also separate from the render units, so work is not only executed concurrently and in parallel but processed that way too. A context switch therefore only takes one cycle on GCN.

As for Ashes, they're not using a poor man's async compute, so context switching is not at fault here. What is at fault is the fact that graphics and compute cannot be executed concurrently, so the NVIDIA driver has to rebundle work items, keeping compute work in one batch and graphics work in another.
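To make the fence part concrete, here's a minimal, hypothetical D3D12 sketch of one queue signalling a fence and another waiting on it (placeholder names, not taken from Ashes or any driver):
Code:
// Hypothetical sketch of a cross-queue sync point: the compute queue signals
// a fence when its work is submitted, and the graphics queue waits on that
// fence (on the GPU, not the CPU) before consuming the results.
#include <d3d12.h>

void SubmitWithSyncPoint(ID3D12CommandQueue* computeQueue,
                         ID3D12CommandQueue* graphicsQueue,
                         ID3D12CommandList*  computeList,
                         ID3D12CommandList*  graphicsList,
                         ID3D12Fence*        fence,
                         UINT64&             fenceValue)
{
    // Kick off the compute work; it may overlap whatever the graphics
    // queue is already doing.
    computeQueue->ExecuteCommandLists(1, &computeList);
    computeQueue->Signal(fence, ++fenceValue);

    // The graphics queue stalls only until the compute results are ready,
    // then runs the work that depends on them.
    graphicsQueue->Wait(fence, fenceValue);
    graphicsQueue->ExecuteCommandLists(1, &graphicsList);
}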
post #758 of 799
Quote:
Originally Posted by zealord View Post

Isn't Ashes of the Singularity just a showcase for DX12 and its features? I actually haven't looked into it, so forgive me if I'm wrong, but it isn't a real game application, is it?

The reverse is true. It's a game first, which uses DX12. Think Total Annihilation or Supreme Commander.
post #759 of 799
Quote:
Originally Posted by Kollock View Post

The reverse is true. It's a game first, which uses DX12. Think Total Annihilation or Supreme Commander.

Oh, I thought it was a game that was specifically designed to show how certain DX12 features perform.

But it's great that it is actually a real game. Makes me look forward to DX12 in games like Quantum Break, Hitman, Gears of War and the like.
post #760 of 799
Quote:
Originally Posted by Kollock View Post

The reverse is true. It's a game first, which uses DX12. Think Total Annihilation or Supreme Commander.
Wasn't Dan Baker also the first to use multi-threaded command lists and deferred rendering in CIV5?