[Various] Ashes of the Singularity DX12 Benchmarks

 
post #1221 of 2682 (permalink) Old 08-29-2015, 05:53 PM
Linux Lobbyist
 
Mahigan's Avatar
 
Join Date: Aug 2015
Location: Ottawa, Canada
Posts: 1,748
Rep: 874 (Unique: 233)
Quote:
Originally Posted by Kollock View Post

AFAIK, Maxwell doesn't support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than not to.

Whether or not Async Compute is better is subjective, but it definitely does buy some performance on AMD's hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to its scheduler, is hard to say.

Thank you very much for the clarifications. I look forward to playing your game. It's been a long time since a good RTS title was released. Keep up the good work. :)
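For readers unfamiliar with what "Async Compute" means in D3D12 terms, here is a minimal, hypothetical C++ sketch (not Oxide's code; the function and variable names are made up for illustration). The point is simply that DX12 lets an application submit compute work on a separate compute queue so it can overlap with graphics work, with a fence used to synchronize the two; whether the overlap actually runs concurrently is up to the hardware and driver, which is what the discussion above is about.

Code:
// Minimal sketch (C++/D3D12), illustrative only; not Oxide's code.
// "Async compute" = recording compute work on a separate COMPUTE queue
// so it can potentially overlap with the graphics (DIRECT) queue.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    // Graphics (DIRECT) queue: accepts draw, compute, and copy commands.
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    // Separate COMPUTE queue: compute/copy only. Work submitted here may
    // run concurrently with the graphics queue, but only if the hardware
    // and driver actually schedule it that way.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}

// Cross-queue synchronization uses a fence: the graphics queue waits on
// the GPU timeline until the compute queue signals that its work is done.
void WaitForComputeOnGraphics(ID3D12CommandQueue* computeQueue,
                              ID3D12CommandQueue* graphicsQueue,
                              ID3D12Fence* fence, UINT64 fenceValue)
{
    computeQueue->Signal(fence, fenceValue);
    graphicsQueue->Wait(fence, fenceValue);
}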

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." - Arthur Conan Doyle (Sherlock Holmes)
Mahigan is offline  
post #1222 of 2682 (permalink) Old 08-29-2015, 06:12 PM
Linux Lobbyist
 
semitope's Avatar
 
Join Date: Jul 2013
Location: Florida/Jamaica
Posts: 536
Rep: 32 (Unique: 20)
Quote:
Originally Posted by black96ws6 View Post

Yeah but Nvidia actually LOSES performance in DX12, that makes no sense at all.

I look at it this way -

Serial (DX11)
Nvidia - Has 1 guy who has to carry a large rock 100 yards. Trains a lot for this.
AMD - Has 1 guy who has to carry a large rock 100 yards. Doesn't train as much as Nvidia's guy.

Result: Nvidia's guy is faster.

Parallel (DX12)
Nvidia - Still has 1 guy who has to carry a large rock 100 yards.
AMD - Now has 2 guys who carry a large rock 100 yards.

Result: AMD's team is now on par with Nvidia's guy.

That just doesn't make sense from an Nvidia point of view. It's great that AMD's arch works better with DX12, that's great for all of us. But, Nvidia shouldn't LOSE performance just because DX12 is more efficient. That makes no sense. If anything they should GAIN, or at least stay the same.

Even if their current architecture causes DX12 to have to wait for Nvidia's GPU while the AMD guys are passing them, it's still not going to be SLOWER than DX11. Their single guy is still going to carry that rock just as fast. So they don't have 2 guys to carry the rock, so what? But if it's slower or even the same, that would mean DX12 is a complete failure for Nvidia cards, which again, doesn't make sense. It's supposed to be faster for everyone.

I realize this is an extremely simple scenario but hopefully you get the point I'm trying to make.

If you think about it in a simplistic way, this is supposed to be more efficient, so it should be faster.

A way to look at it could be to think of AMD and Nvidia as building amphibious cars. Previously these cars had to drive on a winding road, and Nvidia built theirs to take on that road really well. Then it turns out there is a faster, straight path across a body of water, and AMD, having built their car anticipating this, put much more effort into being able to take on water. They reach the water and Nvidia's super car is slower.

Basically it's a different challenge, and their current cards may just not be as good at it as at the previous one. The body of water may be a more direct and faster path IF the vehicle was built right. Maybe Nvidia's current cards won't be happy at all with DX12, and DX11 would be recommended for them till later on.
semitope is offline  
post #1223 of 2682 (permalink) Old 08-29-2015, 06:17 PM
mfw
 
ToTheSun!'s Avatar
 
Join Date: Jul 2011
Location: Terra
Posts: 6,141
Rep: 360 (Unique: 189)
Quote:
Originally Posted by Mahigan View Post

I feel vindicated from all the hate mail I've received.
Keep doing God's work!

CPU: Intel 6700K
Motherboard: Asus Z170i
GPU: MSI 2080 Sea Hawk X
RAM: G.Skill Trident Z 3200CL14 8+8
Hard Drive: Samsung 850 EVO 1TB
Hard Drive: Crucial M4 256GB
Power Supply: Corsair SF600
Cooling: Noctua NH-C14S
Case: Fractal Design Core 500
Operating System: Windows 10 Education
Monitor: ViewSonic XG2703-GS
Keyboard: Cooler Master Quickfire TK
Mouse: Corepadded Logitech G703
Mousepad: Cooler Master MP510
Audio: Fiio E17K v1.0 + Beyerdynamic DT 1990 PRO (B pads)
ToTheSun! is online now  
post #1224 of 2682 (permalink) Old 08-29-2015, 06:35 PM
New to Overclock.net
 
HalGameGuru's Avatar
 
Join Date: Aug 2015
Location: Houston, TX
Posts: 25
Rep: 12 (Unique: 4)
Quote:
Originally Posted by PostalTwinkie View Post


AdaptiveSync is the open standard that is "free". FreeSync is AMD's closed and proprietary implementation, with their own closed/private validation process. FreeSync itself is anything but open and free; there are still costs associated with it. AMD just doesn't charge themselves a licensing fee to display their "FreeSync" logo. They claim "free" because they simply ignore all the costs incurred by the manufacturer to get the "FreeSync Approved" stamp.

Your statements really don't mean much.

Any monitor manufacturer can make a FreeSync-supporting monitor: if it has Adaptive-Sync and an AMD GPU can make use of it, FreeSync will work. FreeSync is merely AMD's implementation of Adaptive-Sync, which they pushed to have included in the VESA spec rather than pushing for a bespoke piece of hardware. Anyone can make use of Adaptive-Sync; FreeSync is simply what Adaptive-Sync is called when an AMD GPU is making use of it. There is no added cost for a monitor manufacturer to put out a DP-spec-compliant monitor that will work with an AMD GPU under FreeSync, and Intel's future with Adaptive-Sync will only push the tech further and make it more ubiquitous and inexpensive.
https://techreport.com/news/28865/intel-plans-to-support-vesa-adaptive-sync-displays#metal

TressFX and Mantle stand on their own. Both have a public history regarding their accessibility and the avenues left open to other manufacturers for their implementation.

I'm seeing a lot of inductive reasoning going on, taken with quite a lot of acceptance, but AMD's history of putting out specs and tech that the industry as a whole can make use of is the bridge too far?

HalGameGuru is offline  
post #1225 of 2682 (permalink) Old 08-29-2015, 07:06 PM
New to Overclock.net
 
Noufel's Avatar
 
Join Date: Apr 2012
Location: Constantine, Algeria
Posts: 1,548
Rep: 54 (Unique: 40)
Quote:
Originally Posted by Kollock View Post

Quote:
Originally Posted by Forceman View Post

Wait, so all this analysis and conclusions about how async compute is going to make AMD's architecture the better one in the future, and this benchmark doesn't even use async compute on the Nvidia side?

AFAIK, Maxwell doesn't support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than not to.

Whether or not Async Compute is better is subjective, but it definitely does buy some performance on AMD's hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to its scheduler, is hard to say.
Thanks for posting here on OCN. :)
I have a question that bothers me: why aren't the gains on the Fury X the same as the ones on the 390X with DX12?
Noufel is offline  
post #1226 of 2682 (permalink) Old 08-29-2015, 07:20 PM
You broke it!
 
PostalTwinkie's Avatar
 
Join Date: Apr 2012
Location: U.S.A
Posts: 14,194
Rep: 1090 (Unique: 561)
Quote:
Originally Posted by caswow View Post

What methodology do devs need to use if they want to make use of Nvidia's "advanced" architecture? And I mean something really useful, not over-tessellation. And what segmentation do you mean? Don't you think Nvidia will implement more async compute in their next architecture to boost perf? Because I think I know what direction you are heading...

Well, that is the possible concern. As it appears now, Nvidia isn't invested heavily in this Async Compute situation. We don't know what Pascal will be yet, so we can't say what they are going to do. However, it has been argued that developers will have more control over the actual performance of the game, and less control is given to the actual GPU manufacturer. So if Nvidia decides they want to take another approach, whatever those options might be (who knows), we could have two very different philosophies. Maybe even more so than now...

Why that could all be a potential concern is that if AMD goes heavy on supporting Async Compute, and Nvidia does XYZ SomethingForUs, you now have developers with two very clear paths. Do they have the funding to support full development and optimization for both of those unique paths? Did Nvidia, with their trucks of cash, flat-out buy a developer?

If Nvidia and AMD can't make huge impacts with drivers, what happens if two clear paths emerge and a developer takes just one? This isn't even a path of two different APIs going to war, but different paths within a single API.

It leaves an extreme amount of room for developer bias, if DX12 locks the GPU manufacturer out of performance tuning as much as some claim. We think we see heavy bias in games now; I can't imagine what it would look like if a developer didn't give equal treatment and the left-out party couldn't make extreme driver improvements on their own.

Quote:
Originally Posted by HalGameGuru View Post

Any monitor manufacturer can make a FreeSync-supporting monitor: if it has Adaptive-Sync and an AMD GPU can make use of it, FreeSync will work. FreeSync is merely AMD's implementation of Adaptive-Sync, which they pushed to have included in the VESA spec rather than pushing for a bespoke piece of hardware. Anyone can make use of Adaptive-Sync; FreeSync is simply what Adaptive-Sync is called when an AMD GPU is making use of it. There is no added cost for a monitor manufacturer to put out a DP-spec-compliant monitor that will work with an AMD GPU under FreeSync, and Intel's future with Adaptive-Sync will only push the tech further and make it more ubiquitous and inexpensive.
https://techreport.com/news/28865/intel-plans-to-support-vesa-adaptive-sync-displays#metal

TressFX and Mantle stand on their own. Both have a public history regarding their accessibility and the avenues left open to other manufacturers for their implementation.

I'm seeing a lot of inductive reasoning going on, taken with quite a lot of acceptance, but AMD's history of putting out specs and tech that the industry as a whole can make use of is the bridge too far?

Actually, AMD has their own validation requirements and tests they run specific to FreeSync, as FreeSync is specific to AMD. What the default DP spec for AdaptiveSync provides isn't enough for FreeSync to work as FreeSync is marketed. It requires a hell of a lot of R&D and tuning to get done; Nixeus has commented on this heavily.

So while Intel picking up AdaptiveSync and going with their own VRR will be great, it won't really directly impact FreeSync specifically, as it is an entirely separate product/offering/process.

EDIT:

You have the umbrella of VRR. Under that you have the different offerings.

  • G-Sync
  • FreeSync
  • In-Sync* (as referred to here on OCN).
PostalTwinkie is offline  
post #1227 of 2682 (permalink) Old 08-29-2015, 07:26 PM
New to Overclock.net
 
sirroman's Avatar
 
Join Date: May 2013
Posts: 6
Rep: 0
Quote:
Originally Posted by black96ws6 View Post

Yeah but Nvidia actually LOSES performance in DX12, that makes no sense at all.

I look at it this way -

Serial (DX11)
Nvidia - Has 1 guy who has to carry a large rock 100 yards. Trains a lot for this.
AMD - Has 1 guy who has to carry a large rock 100 yards. Doesn't train as much as Nvidia's guy.

Result: Nvidia's guy is faster.

Parallel (DX12)
Nvidia - Still has 1 guy who has to carry a large rock 100 yards.
AMD - Now has 2 guys who carry a large rock 100 yards.

Result: AMD's team is now on par with Nvidia's guy.

That just doesn't make sense from an Nvidia point of view. It's great that AMD's arch works better with DX12, that's great for all of us. But, Nvidia shouldn't LOSE performance just because DX12 is more efficient. That makes no sense. If anything they should GAIN, or at least stay the same.

Even if their current architecture causes DX12 to have to wait for Nvidia's GPU while the AMD guys are passing them, it's still not going to be SLOWER than DX11. Their single guy is still going to carry that rock just as fast. So they don't have 2 guys to carry the rock, so what? But if it's slower or even the same, that would mean DX12 is a complete failure for Nvidia cards, which again, doesn't make sense. It's supposed to be faster for everyone.

I realize this is an extremely simple scenario but hopefully you get the point I'm trying to make.

Nvidia's driver replaces shaders built to run in parallel with serial versions.
sirroman is offline  
post #1228 of 2682 (permalink) Old 08-29-2015, 07:33 PM
New to Overclock.net
 
HalGameGuru's Avatar
 
Join Date: Aug 2015
Location: Houston, TX
Posts: 25
Rep: 12 (Unique: 4)
Quote:
To take advantage of the benefits of AMD FreeSync™ technology, users will require: a monitor compatible with DisplayPort Adaptive-Sync, a compatible AMD Radeon™ GPU with a DisplayPort connection, and a compatible AMD Catalyst™ graphics driver.

– Project FreeSync will utilize DisplayPort Adaptive-Sync protocols to enable dynamic refresh rates for video playback, gaming and power-saving scenarios.

According to AMD, FreeSync will work anywhere Adaptive-Sync is available and to spec, certification notwithstanding.

All the sources I have read claim parity, aside from certification, which is not necessary for the VRR tech to work. FreeSync uses what is in Adaptive-Sync; if a monitor is standards-compliant, FreeSync will work with it. And anyone else can do the same.

HalGameGuru is offline  
post #1229 of 2682 (permalink) Old 08-29-2015, 07:57 PM
New to Overclock.net
 
Join Date: Jul 2013
Location: Purgatory
Posts: 2,280
Rep: 125 (Unique: 82)
Quote:
Originally Posted by CrazyElf View Post

Warning: Spoiler!
As the other poster has indicated, the issue is that Nvidia had access to the DX12 code for over a year.
http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/
This would suggest to me that Nvidia knew what was coming, that there hasn't been excessive favoritism here, and that Nvidia even had the opportunity to contribute to improving performance for their hardware.



What Mahigan is saying is that, historically, Nvidia has relied heavily on driver-based optimizations. That has paid handsome dividends for DX11 performance. However, the way they have designed their architecture (serial-heavy) means that it will not do as well on DX12, which is more parallel-intensive.

The other point, of course, is that there is a close relationship between Mantle and both DX12 and Vulkan. AMD must have planned this together and built their architecture around it, even sacrificing DX11 performance (less money spent on DX11 drivers). In other words, if Mahigan's hypothesis is right, they played the long game.
Same here. I would like a bigger sample size to draw a definitive conclusion. See my response to Provost below for my full thoughts - I think that Mahigan's hypothesis is probable, but there are some mysteries.
The full review on TPU
https://www.techpowerup.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/

I suppose there's a process of elimination. What is the Bulldozer/Steamroller architecture very weak at? Well, there's raw single-threaded performance, and the module design isn't good at floating point, but there's got to be something specific.

The question is, what communicates between the GPU and CPU? That may be a good place to start. Another may be, what has Intel done decisively better?
+Rep

This is basically where we are at:
  • We know that something is causing the DX12 leap in AMD's arch. We don't know what, but Mahigan's hypothesis is the design of AMD's architecture, which they optimized around for DX12, perhaps at the expense of DX11.
  • At the moment, AMD is at a drawback and needs that market/mind-share. Combined with GCN consoles, they may have narrowed the gap in their ability to drive future games development.
  • The opportunity for driver-based optimizations is far more limited in DX12, due to its "close to metal" nature (see the sketch after this list).
  • Nvidia can and will catch up. They have the money and mindshare to do so. The question is when: Pascal? Or, if the fix is very compute-centric, they may go with Volta.
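As a rough, hypothetical illustration of the "close to metal" point above (a minimal C++/D3D12 sketch, not taken from any real engine; the function and variable names are made up): in DX11 the driver tracks resource hazards and can reorder or substitute work behind the application's back, while in DX12 the application records state transitions itself, which leaves the driver far less room to optimize after the fact.

Code:
// Minimal sketch (C++/D3D12), illustrative only. In DX12 the application,
// not the driver, declares when a resource changes state; "cmdList" and
// "texture" are assumed to already exist.
#include <d3d12.h>

void TransitionForSampling(ID3D12GraphicsCommandList* cmdList,
                           ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    // The app states exactly when the texture stops being a render target
    // and becomes a pixel-shader input; in DX11 the driver inferred this.
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;

    cmdList->ResourceBarrier(1, &barrier);
}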

I would agree that there hasn't been any well-researched, well-thought-out alternative hypothesis. That is not to say that Mahigan's ideas are infallible; they are not, as we still do not have a conclusive explanation for why the Fury X does not scale very well (and apparently a second mystery now: the poor performance of AMD CPUs). Left unresolved, those may require a substantial modification to any hypothesis. Personally, I accept that it's the most probable explanation right now.

I think that in the short term, this may help stem the tide for AMD, perhaps a generation or maybe two. But in the long run, they still are at a drawback. They have been cutting R&D money for GPUs and focusing mostly on Zen for example. AMD simply does not have the kind of money to spend. Nvidia is outspending them. In the long run, I fear there will be a reversal if they cannot come up with something competitive.

For AMD, though, it's very important that they figure out what the problem is, because they need to know where the transistor budget should go for the next generation (although admittedly, if the rumors are true, it has already taped out; it's important to keep in mind that GPUs are designed years in advance).




Remember everyone: it's best to keep two GPU vendors that are very competitive with each other. That's when the consumer wins. We want the best performance for a competitive price. For that reason, I'm hoping that AMD actually wins the next GPU round, and that Zen is a success (IMO, an Intel monopoly is also bad for us). A monopoly is a loss for us.

A lot of good posts and a ton of information in this thread, even since the last time I checked in. Great contribution to the community.

You raise a good point about Intel. We have been talking about two players in this game while ignoring the one player who is truly the 800-pound gorilla in the desktop PC market.
The gains from DX12 lowering the CPU overhead are really a zero-sum game; the benefit for Nvidia and AMD comes at the expense of Intel, doesn't it... lol

Unless Intel doesn't care about the consumer PC market at all, one would think that there ought to be a reaction from Intel to defend its turf. And whether that action (or lack thereof) is defensive or offensive would tell us a lot about how the PC gaming industry evolves over the next few years.

Quote:
Originally Posted by PhantomTaco View Post

I hate to double post, but I wanted to thank you for posting this. It is probably the single most illuminating thing I've read on the subject to date. For the record, I don't believe a single engine is ever a good measure of anything. Yes, UE4 and its prior iterations are among the most popularly used engines, but like you said, they definitely have had a bias, and I wouldn't call a single benchmark from a single engine enough to draw any conclusions, regardless of whose engine it is. Your agreement with AMD is what keeps me questioning, and like I said in my last post, I don't mark it as a sign of foul play or otherwise, merely something that raises questions. What I'd honestly love is a post from an NVIDIA representative to talk more from their perspective on these performance numbers, as well as more input from them as other DX12-enabled games launch (such as Ark), to better explain the choices they made at a hardware level and what their impact is.

Don't you think that if Nvidia had a substantive counter-argument to make, it would have already put it forward by getting out in front of this debate, especially given its marketing savvy and the depth of its PR machine?

The fact that this has not happened in itself lends credence to Mahigan's arguments, even if only at a basic, intuitive level.

Simplicity
provost is offline  
post #1230 of 2682 (permalink) Old 08-29-2015, 08:11 PM
New to Overclock.net
 
Remij's Avatar
 
Join Date: Apr 2012
Posts: 574
Rep: 44 (Unique: 27)
Quote:
Originally Posted by provost View Post

Don't you think that if Nvidia had a substantive counter-argument to make, it would have already put it forward by getting out in front of this debate, especially given its marketing savvy and the depth of its PR machine?

The fact that this has not happened in itself lends credence to Mahigan's arguments, even if only at a basic, intuitive level.

No. It's too early. People aren't running out and buying AMD GPUs at breakneck speed based on this one benchmark. The people who are claiming this early victory for AMD are more than likely AMD fanboys. I remember well when Mantle was going to destroy Nvidia in games that supported both Mantle and DX11, and we saw how that turned out. Nvidia has already said to expect the same thing that happened with DX11 to happen with DX12.

But I'm sure it will come full circle. In the near future, once DX12 is out and Nvidia is ahead again, people will cite all the technical reasons why it shouldn't be so and claim Nvidia sabotages their competitors' performance with proprietary features/code and their stranglehold on the market.



It would be cool to see AMD smash the hell out of Nvidia and show them they aren't invincible, but even these early tests aren't painting that picture, so I wouldn't expect it; I'd rather be pleasantly surprised if it does happen.

Remij is offline  