[Gamers Nexus] Gears of War 4 PC Graphics Settings Detailed & Async Compute Support - Page 2

post #11 of 26
Thread Starter 
Quote:
Originally Posted by PontiacGTX View Post

Xbox port.
If the parallel work is done with pauses and concurrency, it's Nvidia.

Thanks for the clarification on that. It was my understanding that AMD does it sequentially.


Quote:
Originally Posted by Hl86 View Post

So Unreal Engine 4 gets multi-GPU support now?
Found this back in '14
Quote:
Hello,

Out of the box, if you wish to use all the features of UE4, you should forgo SLI.

The most common form of SLI is known as AFR (Alternate Frame Rendering), where each GPU handles a different frame. [This is what we did with Samaritan on UE3: 3 GPUs, each handling a different frame.]

The deferred rendering techniques used by UE4 rely on data from the previous frame to render the current frame and as a result are not SLI friendly. You could investigate which features are needed for SLI and potentially avoid them; however, since that is not a use case we have here at Epic, I'm not sure how well it will work as we keep extending UE4 with new functionality.
https://answers.unrealengine.com/questions/21746/does-the-ue4-engine-support-sli.html#
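The frame-to-frame dependency Epic describes is the crux: a deferred/temporal renderer reads buffers the previous frame wrote, but under AFR the previous frame was rendered on the other GPU. A minimal C++ sketch of the pattern, with hypothetical helper names (Texture2D, BindShaderResource, DrawScene, and CopyRenderTarget are illustrative stand-ins, not UE4 API):

Code:
#include <utility> // std::swap

// Hypothetical stubs standing in for a real renderer's API.
struct Texture2D {};
void BindShaderResource(int slot, Texture2D& tex);
void DrawScene();
void CopyRenderTarget(Texture2D& dst);

// Frame N consumes what frame N-1 produced. Under AFR, prevFrameData
// was written by the *other* GPU, so the driver must copy it across
// the bridge/PCIe every frame -- exactly the dependency that makes
// UE4 "not SLI friendly".
void RenderFrame(Texture2D& prevFrameData, Texture2D& currFrameData)
{
    BindShaderResource(0, prevFrameData);    // e.g. last frame's depth/velocity
    DrawScene();
    CopyRenderTarget(currFrameData);         // becomes next frame's input
    std::swap(prevFrameData, currFrameData); // ping-pong the two buffers
}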

They are using MDA (multi-display adapter)
Edited by EastCoast - 8/18/16 at 10:32am
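For context, MDA is DX12's explicit multi-adapter mode: the application enumerates every GPU itself and creates a D3D12 device per adapter, instead of relying on a driver-managed AFR link as with classic SLI/CrossFire. A minimal sketch of that setup using the real DXGI/D3D12 entry points (error handling trimmed; this illustrates the mechanism, not UE4's or The Coalition's actual code):

Code:
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
// link against: d3d12.lib dxgi.lib

using Microsoft::WRL::ComPtr;

// One D3D12 device per physical adapter: the app, not the driver,
// decides how to split work across the GPUs.
std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}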
post #12 of 26
Quote:
Originally Posted by EastCoast View Post

Thanks for the clarification on that. It was my understanding that AMD does it sequentially.
AMD does asynchronous compute+graphics in parallel.


Nvidia does it concurrently and has to switch contexts instead of doing it in parallel.
http://www.overclock.net/t/1605674/computerbase-de-doom-vulkan-benchmarked/220#post_25351958


Basically, AMD is doing asynchronous shaders in parallel and Nvidia is doing pre-emption.

Edited by PontiacGTX - 8/18/16 at 9:45am
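At the API level the distinction is invisible: D3D12 "async compute" just means the application submits work to a separate compute queue alongside the graphics queue, and whether the two streams actually execute in parallel (GCN's ACEs) or through context switches (Maxwell) is up to the hardware and driver. A minimal sketch of the submission side, reusing the includes and ComPtr alias from the snippet above and assuming a valid ID3D12Device* named device plus already-recorded command lists gfxLists and computeLists (all hypothetical names):

Code:
// Create one graphics (DIRECT) queue and one COMPUTE queue. The API
// only expresses the *opportunity* for overlap; it does not mandate it.
D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;

D3D12_COMMAND_QUEUE_DESC computeDesc = {};
computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

// GCN can run these two submissions side by side; Maxwell tends to
// time-slice between them via context switching instead.
gfxQueue->ExecuteCommandLists(1, gfxLists);         // e.g. G-buffer pass
computeQueue->ExecuteCommandLists(1, computeLists); // e.g. light culling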
post #13 of 26
Thread Starter 
Quote:
Originally Posted by PontiacGTX View Post

AMD does asynchronous compute+graphics in parallel.


Nvidia does it concurrently and has to switch contexts instead of doing it in parallel.
http://www.overclock.net/t/1605674/computerbase-de-doom-vulkan-benchmarked/220#post_25351958


Basically, AMD is doing asynchronous shaders in parallel and Nvidia is doing pre-emption.
Thanks for that. I was reading up on this and found the following:
Quote:
Attempting to execute graphics and compute concurrently on the GTX 980 Ti causes dips and spikes in performance and little in the way of gains. Right now, there are only a few thread counts where Nvidia matches ideal performance (latency, in this case) and many cases where it doesn’t. Further investigation has indicated that Nvidia’s asynch pipeline appears to lean on the CPU for some of its initial steps, whereas AMD’s GCN handles the job in hardware.

Right now, the best available evidence suggests that when AMD and Nvidia talk about asynchronous compute, they are talking about two very different capabilities. “Asynchronous compute,” in fact, isn’t necessarily the best name for what’s happening here. The question is whether or not Nvidia GPUs can run graphics and compute workloads concurrently. AMD can, courtesy of its ACE units.

It’s been suggested that AMD’s approach is more like Hyper-Threading, which allows the GPU to work on disparate compute and graphics workloads simultaneously without a loss of performance, whereas Nvidia may be leaning on the CPU for some of its initial setup steps and attempting to schedule simultaneous compute + graphics workload for ideal execution. Obviously that process isn’t working well yet. Since our initial article, Oxide has since stated the following:

“We actually just chatted with Nvidia about Async Compute, indeed the driver hasn’t fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute.”

Here’s what that likely means, given Nvidia’s own presentations at GDC and the various test benchmarks that have been assembled over the past week. Maxwell does not have a GCN-style configuration of asynchronous compute engines and it cannot switch between graphics and compute workloads as quickly as GCN. According to Beyond3D user Ext3h:

“There were claims originally that Nvidia GPUs wouldn’t even be able to execute async compute shaders in an async fashion at all; this myth was quickly debunked. What became clear, however, is that Nvidia GPUs preferred a much lighter load than AMD cards. At small loads, Nvidia GPUs would run circles around AMD cards. At high load, well, quite the opposite, up to the point where Nvidia GPUs took such a long time to process the workload that they triggered safeguards in Windows. Which caused Windows to pull the trigger and kill the driver, assuming that it got stuck.

Final result (for now): AMD GPUs are capable of handling a much higher load. About 10x what Nvidia GPUs can handle. But they also need about 4x the pressure applied before they get to play out their capabilities.”
http://www.extremetech.com/extreme/213519-asynchronous-shading-amd-nvidia-and-dx12-what-we-know-so-far

So if that's correct, both can do async concurrently; however, AMD can handle more load than Nvidia and do it in parallel, while Nvidia can handle only lighter loads or, as you say, with pauses, done only concurrently. I.e., preemption: switching between graphics and compute, which isn't true async compute from a multithreading POV.

So their saying that GOW4 will be "allowing scripts to execute in parallel, rather than sequentially" is pretty vague until we see benchmarks that show how the engine is really working, IMO.
Edited by EastCoast - 8/19/16 at 4:00pm
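Worth noting: the application-side code is identical on both vendors. All the app states is where the dependencies sit, using fences; between a Signal and the matching Wait, the hardware may overlap the two queues or serialize them. A sketch reusing the queues and command lists from the earlier snippet (simplified):

Code:
// Cross-queue synchronization with a fence.
ComPtr<ID3D12Fence> fence;
UINT64 fenceValue = 0;
device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

computeQueue->ExecuteCommandLists(1, computeLists); // async compute work
computeQueue->Signal(fence.Get(), ++fenceValue);    // mark completion

// The graphics queue only stalls here; everything submitted before the
// Wait is free to overlap the compute work. On GCN that overlap happens
// in hardware; on Maxwell the same code can degenerate into
// back-to-back context switches.
gfxQueue->Wait(fence.Get(), fenceValue);
gfxQueue->ExecuteCommandLists(1, gfxLists); // consumes compute results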
post #14 of 26
Quote:
Originally Posted by EastCoast View Post

Thanks for that. I was reading up on this and found the following:
http://www.extremetech.com/extreme/213519-asynchronous-shading-amd-nvidia-and-dx12-what-we-know-so-far

So if that's correct, both can do async concurrently; however, AMD can handle more load than Nvidia, while Nvidia can handle lighter loads or, as you say, with pauses. I.e., preemption: switching between graphics and compute, which isn't async at that point.

So their saying that GOW4 will be "allowing scripts to execute in parallel, rather than sequentially" is pretty vague until we see benchmarks that show how the engine is really working, IMO.
They can do it either way:
Two different paths, one per vendor: one optimized for AMD's asynchronous compute+graphics in parallel, and a second using asynchronous compute+graphics with concurrency for Nvidia.
Or one path for both vendors that can use asynchronous compute+graphics on both, like 3DMark; this doesn't exploit GCN as fully as the parallel asynchronous compute+graphics path.
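Engines commonly implement the two-path idea by branching on the adapter's PCI vendor ID at startup. A hedged sketch of such a selection, reusing the adapter from the enumeration snippet above (the AsyncPath enum and the tuning each path implies are hypothetical, not anything Gears of War 4 is confirmed to do):

Code:
// Pick an async-compute strategy per vendor.
// PCI vendor IDs: 0x1002 = AMD, 0x10DE = Nvidia.
enum class AsyncPath { ParallelQueues, ConcurrentLight, Disabled };

DXGI_ADAPTER_DESC1 desc;
adapter->GetDesc1(&desc);

AsyncPath path;
if (desc.VendorId == 0x1002)
    path = AsyncPath::ParallelQueues;  // GCN: load up the compute queue
else if (desc.VendorId == 0x10DE)
    path = AsyncPath::ConcurrentLight; // keep compute batches small
else
    path = AsyncPath::Disabled;        // unknown hardware: play it safe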
post #15 of 26
Thread Starter 
Quote:
Originally Posted by PontiacGTX View Post

They can do it either way:
Two different paths, one per vendor: one optimized for AMD's asynchronous compute+graphics in parallel, and a second using asynchronous compute+graphics with concurrency for Nvidia.
Or one path for both vendors that can use asynchronous compute+graphics on both, like 3DMark; this doesn't exploit GCN as fully as the parallel asynchronous compute+graphics path.

Which makes the real question whether GOW4's UE4 works truly concurrently or preemptively. The benchmarks, once revealed, should make that very clear.

If AMD wins, it means the game uses a more parallel workload: async compute running graphics+compute in parallel, which makes for a heavier workload. If Nvidia wins, it means the game uses a more concurrent workload that relies on preemption/context switching, pausing, etc., making it a lighter workload.
Edited by EastCoast - 8/19/16 at 4:21pm
post #16 of 26
Quote:
Originally Posted by EastCoast View Post

Which makes the real question whether GOW4's UE4 works truly concurrently or preemptively. The benchmarks, once revealed, should make that very clear.

If AMD wins, it's concurrency with a heavy workload. If Nvidia wins, it's preemption with a light, pausing workload.
If it's using Paxwell, they can win or tie AMD in a similar scenario; you can see how the GTX 1060 now matches the RX 480.
post #17 of 26
Thread Starter 
Quote:
Originally Posted by PontiacGTX View Post

If it's using Paxwell, they can win or tie AMD in a similar scenario; you can see how the GTX 1060 now matches the RX 480.

I think we all can benefit from a better understanding of this wild, wild west we call DX12 async compute: how it's being explained and how it's being marketed.
It's by no means a single approach, and I think that needs to be made very clear here. Also, DX12 does not mandate a particular implementation of it either.

So, to be clear, what we need to know before we pull the trigger on these DX12 games is whether async compute will deliver ideal performance on our hardware. Will it work in parallel, as AMD does it (normally a heavier workload), or concurrently, as Nvidia does it (light loads only), with pausing/preemption/context switching? I think benchmarks, more than ever, will highlight that, as IMO even with driver updates the overall performance might not change dramatically enough to make much of a difference.

I wonder if we can get the in-game benchmark as a separate download?

Edit:
Weren't concurrent workloads and preemption/context switching introduced in DX11/11.1? If so, they're now being carried over into DX12 and called async compute.
Edited by EastCoast - 8/19/16 at 4:07pm
post #18 of 26
Thread Starter 
Quote:
Originally Posted by FattysGoneWild View Post

Native DX12 from the ground up on PC? Not buying it for a second. In case I am wrong and they actually pull it off, I would be shocked. It should be the most jaw-dropping game to look at on PC EVER!

It uses UE4, so I have no idea why it wouldn't be compatible with Win7/Win8/Win8.1.
post #19 of 26
Thread Starter 
post #20 of 26
Quote:
Originally Posted by EastCoast View Post

It uses UE4, so I have no idea why it wouldn't be compatible with Win7/Win8/Win8.1.

DX12 is only on W10, unless they allow a DX11 version of the game, which I highly doubt. This game is going to be locked to W10. Plus, it's probably a UWP app too, which isn't available on the previous Windows OSes either.
     