Overclock.net › Forums › Industry News › Video Game News › [WCCF] HITMAN To Feature Best Implementation Of DX12 Async Compute Yet, Says AMD

[WCCF] HITMAN To Feature Best Implementation Of DX12 Async Compute Yet, Says AMD - Page 5

post #41 of 799
Quote:
Originally Posted by looniam View Post

^ sure.
I bet you 45984587 internet points that nvidia will not use AMD hardware as a corollary/SLI/async compute solution.
Lil' Roy Taylor
(11 items)
post #42 of 799
Quote:
Originally Posted by GorillaSceptre View Post

If Nvidia chooses to block it, then there's nothing AMD can do.

Actually, AMD has all the leverage at the moment in that respect, since it's AMD hardware in the consoles. Game devs can get lazy, in case you didn't notice, and may use a ton of async compute for their game because of the consoles, then just port it to PC. If most devs use async compute because of the consoles, there is nothing Nvidia can do but give in or fall way behind.
post #43 of 799
I find it interesting that people can spend hours decrying Nvidia and their GameWorks and TWIMTBP campaign, but when AMD does their version, everyone gets excited. They are both closed systems; if one is fine, so is the other.

Onto some comments.....

Quote:
Originally Posted by MoorishBrutha View Post

You do understand that Nvidia lied to their 980 Ti customers about fully supporting DX12, while knowing they lacked one of the main features of DX12: async compute?

Ashes of the Singularity wasn't a fluke; there was a reason AMD smoked Nvidia in those benchmarks.

Smoked? AMD caught up in AoTS, especially after a driver release from Nvidia.

Also, you might want to look at the different DX12 feature levels, what is being advertised, and what is actually required for each level before you start accusing someone of knowingly advertising something they didn't have, which would be extremely illegal.

Libel, like what you are spreading, doesn't help the conversation at all.

Quote:
Originally Posted by huzzug View Post

I'm sorry, but didn't Nvidia drop drivers that cleared the smoke AMD left them in, and aren't they now competing head to head?

Yes.

Although the amount of async compute that AotS uses is pretty damn small, and not much of an indicator.

Quote:
Originally Posted by MoorishBrutha View Post

I think some Nvidia shill said so, but a lot of benchmarks, not just one, saw bad performance from Nvidia due to the lack of Async Compute Engines in their GPUs. It's confirmed via white papers that none of Nvidia's architectures (Maxwell, Kepler, Fermi) have Async Compute Engines built into them.

And Nvidia is already pressuring developers not to use async compute at all. The developers of Ashes of the Singularity said they only used a modest amount of async compute, and that the consoles use the feature more heavily than they do.

For example, Infamous: Second Son on the PS4 already used async compute.

The million dollar question about Pascal will be: does it have Async Compute Engines inside it?

If it doesn't, then Nvidia will be doomed.

Source on the bolded part? Again, another claim of yours without anything to back it up.

The fact is, we don't know how big async compute is going to be, or who is even going to bother using it. A couple of console games and fractional use in an early beta are not something you can really build grand claims on.

Quote:
Originally Posted by DNMock View Post

Actually, AMD has all the leverage at the moment in that respect, since it's AMD hardware in the consoles. Game devs can get lazy, in case you didn't notice, and may use a ton of async compute for their game because of the consoles, then just port it to PC. If most devs use async compute because of the consoles, there is nothing Nvidia can do but give in or fall way behind.

Many developers wouldn't survive without the PC market to support their console business.

You need to consider that Nvidia still dominates the PC market with their products. If software developers suddenly abandoned that market, they would quickly starve and go out of business. PC gaming is a huge market for developers; they can't dump it.

Is it unfortunate that Nvidia has gotten that big? Yes, it is, as it makes it very difficult for AMD to gain ground. It is possible AMD will do it, but it is going to take time.
Edited by PostalTwinkie - 2/11/16 at 8:09am
    
CPU: Intel i7 5820K | Motherboard: AsRock Extreme6 X99 | Graphics: Gigabyte GTX 980 Ti Windforce OC | RAM: 16 GB Corsair Vengeance LPX
Hard Drive: Samsung 840 EVO 250GB (HDD Speed Edition) | Hard Drive: Samsung SM951 512 GB (I still hate Samsung!) | Cooling: Noctua NH-D14 | OS: Windows 10
Monitor: Achieva Shimian QH270-Lite | Monitor: Overlord Computer Tempest X27OC | Monitor: Acer Predator XB270HU | Keyboard: Filco Majestouch 2 Ninja
Power: Seasonic X-1250 | Case: Fractal Design R5 | Mouse: Razer Naga | Mouse Pad: Razer Goliathus Alpha
Audio: AKG K702 65th Anniversary Edition | Audio: Creative Sound Blaster Zx
    
post #44 of 799
Quote:
Originally Posted by sugarhell View Post

I don't understand what you are talking about. A dedicated async card?

The ACEs feed your shaders with either graphics work or compute work. Nvidia's system can only feed either graphics or compute, and the pipeline needs to switch every time. AMD can, for example, feed half the shaders with graphics work and half with compute work.

This happens inside the hardware; it's a hardware scheduler. A dedicated async card without the right shader configuration (the shaders on GCN can communicate with each other, unlike on Fermi) is useless.

Yeah, of course it couldn't be used that way. But if the game is built so that one card handles the effects and the other handles the rest, I don't see why it couldn't work. Similar to having a dedicated PhysX card.
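The queue behaviour sugarhell describes in that quote can be sketched with a toy timing model (Python; the durations and the switch penalty are invented for illustration, not measured from any real GPU): a device that must flush its pipeline to change between graphics and compute serializes everything, while hardware queues that feed the shaders concurrently finish in the time of the longer stream.

```python
# Toy model of the two scheduling styles described above.
# "serialized": one pipeline, pays a flush penalty whenever the work
# type changes (the switch-every-time behaviour described for Nvidia).
# "concurrent": graphics and compute queues feed shaders at the same
# time (the ACE-style behaviour described for GCN).
# All numbers are invented for illustration.

def serialized_ms(work, switch_ms=0.2):
    """Total time when graphics/compute must run back to back."""
    total, last_kind = 0.0, None
    for kind, ms in work:
        if last_kind is not None and kind != last_kind:
            total += switch_ms          # pipeline flush on type change
        total += ms
        last_kind = kind
    return total

def concurrent_ms(work):
    """Total time when the two queues overlap perfectly (best case)."""
    gfx = sum(ms for kind, ms in work if kind == "gfx")
    comp = sum(ms for kind, ms in work if kind == "compute")
    return max(gfx, comp)

frame = [("gfx", 4.0), ("compute", 1.5), ("gfx", 3.0), ("compute", 2.0)]
print(round(serialized_ms(frame), 2))  # 11.1 -- 10.5 ms of work + 3 switches
print(concurrent_ms(frame))            # 7.0 -- compute hides behind graphics
```

The concurrent case is a best-case bound: it assumes the compute work never contends with graphics for the same shaders, which is exactly the idle-capacity argument made later in the thread.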
post #45 of 799
Quote:
Originally Posted by DNMock View Post

Actually, AMD has all the leverage at the moment in that respect, since it's AMD hardware in the consoles. Game devs can get lazy, in case you didn't notice, and may use a ton of async compute for their game because of the consoles, then just port it to PC. If most devs use async compute because of the consoles, there is nothing Nvidia can do but give in or fall way behind.

Before the consoles released, people expected all multiplatform games to run better on AMD hardware, because both next-gen consoles use AMD hardware. Yeah... that didn't happen.
current rig (4 items)
CPU: 2500k | Graphics: R9 290X Tri-X | Graphics: GTX 680 | Keyboard: Filco Majestouch 2 Brown Ninja
post #46 of 799
Quote:
Originally Posted by PostalTwinkie View Post

I find it interesting that people can spend hours decrying Nvidia and their GameWorks and TWIMTBP campaign, but when AMD does their version, everyone gets excited. They are both closed systems; if one is fine, so is the other.

Onto some comments.....
Smoked? AMD caught up in AoTS, especially after a driver release from Nvidia.

This isn't AMD's version of anything. This is async compute; it just happens that Nvidia's current architecture can't handle it. How you can equate that to GameWorks is beyond me.

Again, AotS hardly uses async compute; it's a useless example.
post #47 of 799
Quote:
Originally Posted by GorillaSceptre View Post

Yeah, of course it couldn't be used that way. But if the game is built so that one card handles the effects and the other handles the rest, I don't see why it couldn't work. Similar to having a dedicated PhysX card.

Split the workload based on its type (compute, graphics rendering, etc.)? That is totally different.

They want to use async because you can use all the shaders. For example, when you run the graphics pipeline you don't use all your shaders, and when you run compute you don't use all your ROPs. With async you can feed both, so some shaders do graphics and use the ROPs while the rest do pure compute (effects). That way you stress your card's components to 100%.
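The utilization argument above is essentially a back-of-the-envelope occupancy calculation. A minimal sketch with invented numbers (a hypothetical 100-unit shader array, not any real GPU): if a graphics pass leaves 40% of the shaders idle, async compute can absorb compute work into that idle capacity instead of appending it to the frame.

```python
# Back-of-envelope occupancy model for the utilization argument above.
# Units are arbitrary; all numbers are illustrative, not measured.

def frame_ms(gfx_ms, gfx_units, compute_unit_ms, total_units, use_async):
    """Frame time for one graphics pass plus some compute work.

    gfx_ms          -- duration of the graphics pass
    gfx_units       -- shader units the graphics pass keeps busy
    compute_unit_ms -- compute work, in shader-unit-milliseconds
    total_units     -- shader units on the card
    """
    if not use_async:
        # Compute runs after graphics, with the whole card to itself.
        return gfx_ms + compute_unit_ms / total_units
    # Async: idle units soak up compute while graphics runs.
    idle_units = total_units - gfx_units
    absorbed = min(compute_unit_ms, idle_units * gfx_ms)
    leftover = compute_unit_ms - absorbed
    return gfx_ms + leftover / total_units

# 100-unit card, graphics keeps 60 units busy for 10 ms,
# plus 400 unit-ms of compute effects.
print(frame_ms(10.0, 60, 400.0, 100, use_async=False))  # 14.0
print(frame_ms(10.0, 60, 400.0, 100, use_async=True))   # 10.0
```

The model ignores memory bandwidth, ROP contention, and scheduling overhead; it only illustrates why filling idle shaders makes concurrent feeding attractive.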
Workstation (4 items)
CPU: Xeon E5-2690 | Motherboard: Supermicro 2011 | Graphics: Nvidia GP100 / Vega FE | Monitor: Dell UltraSharp 4K
post #48 of 799
Quote:
Originally Posted by GorillaSceptre View Post

There are many ways they could block it. You can be an optimist, but when it comes to Nvidia, I'm more of a believe-it-when-I-see-it person. They have proven time and time again that they don't like playing with others; what has changed?

many? like?

Quote:
Originally Posted by sugarhell View Post

I don't understand what you are talking about. A dedicated async card?

The ACEs feed your shaders with either graphics work or compute work. Nvidia's system can only feed either graphics or compute, and the pipeline needs to switch every time. AMD can, for example, feed half the shaders with graphics work and half with compute work.

This happens inside the hardware; it's a hardware scheduler. A dedicated async card without the right shader configuration (the shaders on GCN can communicate with each other, unlike on Fermi) is useless.
One of the "features" of DX12 is multi-GPU support, right? That pretty much throws out a lot of multi-GPU and memory-pooling issues. Since DX12 is the layer between the game and the hardware, why can't it handle the graphics/compute scheduling?
Quote:
Originally Posted by p4inkill3r View Post

I bet you 45984587 internet points that nvidia will not use AMD hardware as a corollary/SLI/async compute solution.

i like round figures, how about a straight 46,000,000
loon 3.2 (18 items)
CPU: i7-3770K | Motherboard: Asus P8Z77-V Pro | Graphics: EVGA 980 Ti SC+ | RAM: 16 GB PNY DDR3 1866
Hard Drive: PNY 1311 240 GB | Hard Drive: 1 TB Seagate | Hard Drive: 3 TB WD Blue | Optical Drive: DVD DVDRW+/-
Cooling: EKWB P280 kit | Cooling: EK-VGA Supremacy | OS: Win X | Monitor: LG 24MC57HQ-P
Keyboard: Ducky Zero [blues] | Power: EVGA SuperNova 750 G2 | Case: Stryker M [hammered and drilled] | Mouse: Corsair M65
Audio: SB Recon3D | Audio: Klipsch ProMedia 2.1
post #49 of 799
Look at that. When AMD works with developers instead of calling NVIDIA cheaters or whatever, their technology gets used in games and AMD users benefit.
First Build (10 items)
CPU: i3-4370 | Motherboard: ASRock H97M PRO4 | Graphics: 380X Nitro | RAM: Corsair 8GB
Hard Drive: Crucial M500 120GB | Hard Drive: Seagate 320GB | Hard Drive: Caviar Blue 1TB | OS: Windows 8.1 x86-64
Power: XFX 550W | Case: PS07B
post #50 of 799
Quote:
Originally Posted by GorillaSceptre View Post

This isn't AMD's version of anything. This is async compute; it just happens that Nvidia's current architecture can't handle it. How you can equate that to GameWorks is beyond me.

Again, AotS hardly uses async compute; it's a useless example.

You might want to read the opening of the announcement again....
Quote:
Originally Posted by OP 
As the newest member of the AMD Gaming Evolved program.....


AMD's Gaming Evolved program is the same thing as Nvidia's program; it is a massive double standard. Taking it a step further, and highlighting that massive (and stupid) double standard using the logic you just presented:

GameWorks: "This isn't Nvidia's version of anything; it just so happens that AMD's current architecture can't handle it."

It makes OCN look hilariously bad when, in one thread, people are e-raging at Nvidia and their developer program, then cheering on AMD and theirs. And for clarity, I am talking about AMD's program as a whole, not just async compute.

EDIT:

More bits for the eyeballs from the article...
Quote:
That was no accident. With on-staff game developers, source code and effects, the AMD Gaming Evolved program helps developers to bring the best out of a GPU.

Literally the same thing Nvidia does.
Edited by PostalTwinkie - 2/11/16 at 8:19am
    