
Why is nobody concerned about 1080 negative scaling in DX12? - Page 2

post #11 of 24
I've been thinking for a while that this isn't really about Nvidia. We talk about it as if it is, but I think this is all an AMD thing; I'll explain.

We all know Nvidia shows vastly lower driver overhead, which benefits some games; their cards also seem to need less driver work to perform well on day 1, and their performance in older games is more consistently good. AMD has the better raw specs, more compute and more bandwidth, yet for reasons we often don't understand their cards launch behind Nvidia's, and then a month later a driver release evens things up again. I think what is happening here is that the API AMD designed, which pushes responsibility for feeding the GPU in parallel onto developers, has helped them utilise their cards a lot better. Suddenly the raw performance advantage they have is actually being used close to optimally, and performance thus shoots up in DX12. Nvidia, on the other hand, is finding the extra context switching just costs it time: they don't need it to run well, and it actually reduces their performance as the core switches contexts.

So really, DX12 performance is mostly something AMD needs to make its architecture shine, and that's why they gain so considerably from it. Nvidia, not needing it and already getting good utilisation from their GPUs, finds the parallel work a marginal disadvantage, something we see a lot in parallel computation: it is less efficient than a good serial implementation. DX12 is an AMD thing; it's a big boost to their architecture. It's not an Nvidia problem, it's a solution to AMD's problem.
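The utilisation argument above can be sketched with a toy cost model (a hypothetical illustration with made-up numbers, not a claim about any real GPU): parallel submission uses the whole chip but pays a fixed scheduling overhead, so it only wins when serial feeding leaves the chip partly idle.

```python
# Toy model of the utilisation argument. All numbers are invented for
# illustration and do not describe real hardware.

def frame_time_serial(work, capacity, utilisation):
    """Serial feeding keeps only a fraction of the GPU busy."""
    return work / (capacity * utilisation)

def frame_time_parallel(work, capacity, overhead=0.4):
    """Parallel feeding uses the whole GPU but pays a fixed
    scheduling/context-switching overhead per frame."""
    return work / capacity + overhead

work, capacity = 10.0, 10.0

# A chip already fed well by a serial driver (say 95% busy) only loses:
print(frame_time_serial(work, capacity, 0.95), frame_time_parallel(work, capacity))

# A chip that sits partly idle under a serial driver (say 70% busy) gains:
print(frame_time_serial(work, capacity, 0.70), frame_time_parallel(work, capacity))
```

In this toy model the same parallel overhead that hurts the well-fed chip is a net win for the under-fed one, which is exactly the asymmetry the post argues for.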
BC Primary (20 items)
CPU: 6700k · Motherboard: Gigabyte Gaming 7 · Graphics: MSI 1080 Gaming X · RAM: 16GB Corsair 3200
Storage: Hitachi 4TB, Crucial MX100 512GB, Crucial M4 512GB, Samsung 950 Pro
Cooling: Custom watercooling, 2x MCR320, 1x MCR480 · OS: Windows 10 · Monitors: Asus ROG Swift PG279Q, BenQ XL2411T
Keyboard: Corsair K70 (Cherry MX Red) · Power: Corsair HX1050 · Case: Little Devil 7 · Mouse: Zowie Evo CL EC2
Mouse pad: QcK Heavy · Audio: Sound Blaster ZX, Sennheiser HD598, Schiit Magni/Modi
post #12 of 24
Quote:
Originally Posted by Randomdude View Post

http://www.overclock-and-game.com/news/pc-gaming/46-gtx-1080-what-s-not-being-discussed

The 1080 (and thus the 1070 as well) is scaling negatively in DX12 workloads, or performing within margin of error compared to DX11. Basically, it is... just a refurbished Maxwell?

I'm curious why there isn't more attention given to this. Every single review site out there is also using Tomb Raider (http://www.overclock-and-game.com/news/pc-gaming/43-rise-of-the-tomb-raider-fury-x-benchmarks?showall=&start=4) as their DX12 benchmark, which is a bit suspicious; there are other DX12 games as well, and gains should be seen in all of them, but that is not the case. It's obviously fishy. I saw some people suggest that review manuals were given to these sites, and it wouldn't surprise me if that was the case, as all this journalism has done is show the best the card has to offer, which isn't what journalism is about, is it? I thought it was about giving the full picture.

Kana-Maru has been a great inspiration for this topic with many well-thought out contributions that I believe deserve some more attention.

All in all, a very shady release, with very shady people flooding the forum and many half-truths; it rings the alarm bells for me. Hard. If you think this makes me a fanboy, I'm actually trying to help you as well as myself. This card seems to be even worse than the 680, and come on, you all know how that went?

The 680 was the best card when it came out as well :)
Low-level graphics APIs are harder to use, but, like C vs C#, they offer much better control and performance. The problem is that you can also throw the advantages away, do it all wrong, and get worse performance than DX11. So while they are better performance-wise, they favour skilled programmers who can harvest the extra power and write more efficient applications, and less so the copy-paste coders who have no idea they have just used 8GB of VRAM on a 4GB GPU; because who cares, right, when you're copy-pasting console code written for an 8GB GPU into a PC version that has to run on a wide variety of GPUs. A bit more effort has to be put into how resources are used, because there is no more driver/AMD/NV holding your hand to manage VRAM etc. like there is in DX11, where AMD/NV have to publish optimized drivers to fix in the driver what the game devs messed up in their game.
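The hand-holding point can be illustrated with a toy VRAM budget (hypothetical class and sizes, not any real graphics API): under an explicit API it is the application, not the driver, that must check whether a resource fits before committing it.

```python
# Toy VRAM budget: illustrates the bookkeeping an explicit API pushes
# onto the application. Class name and sizes are invented for illustration.

class VramBudget:
    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.used_mb = 0

    def try_alloc(self, size_mb):
        """Commit a resource only if it fits. A DX11-style driver would
        quietly shuffle memory for you; here the caller must handle the
        failure itself, by evicting or streaming something else."""
        if self.used_mb + size_mb > self.capacity_mb:
            return False
        self.used_mb += size_mb
        return True

gpu = VramBudget(capacity_mb=4096)     # a 4GB card
print(gpu.try_alloc(3000))             # console-sized asset set fits: True
print(gpu.try_alloc(2000))             # the next heap does not: False
```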

Of course Nvidia is using brute force to run DX12; as far as I know, they have used brute force in every approach for decades. They only jump ship when it's necessary; otherwise they just milk it and ride the wave, plus release proprietary stuff to corner the market into their hands.
post #13 of 24
Have we determined that async compute is the only component of DX12? I thought there was much more to DX12 than just async compute, but it seems like every thread about DX12 boils down to just the implementation of async compute.
X-Rig (14 items)
CPU: i7-5960X · Motherboard: Asus Rampage V Extreme 1502 · Graphics: 2x EVGA Titan X
RAM: G.Skill F4-3000C15Q-32GRK · Storage: Samsung SM951 M.2, Samsung 850 Pro 512GB · Optical: Pioneer BDR-2209
Cooling: EKWB blocks, fans, and pumps, HW Labs Black Ice... · OS: Windows 10 Pro Retail · Monitor: BenQ BL3201PH · Keyboard: Max Keyboard Nighthawk X8
Power: EVGA 220-P2-1200-X1 · Mouse: Roccat Kone Pure Optical

Death Bomb (9 items)
CPU: i7-7700K de-lidded @ 5GHz · Motherboard: Asus Strix Z270i · Graphics: Nvidia Titan Xp · RAM: G.Skill F4-3600C15D-16GTZ
Storage: 2x Samsung 960 Pro 512GB M.2 · Cooling: EK Supremacy EVO CPU, Titan Xp full cover w/Ba... · OS: Windows 10 Pro 64-bit · Power: SeaSonic X750
Case: Fractal Nano S
post #14 of 24
Quote:
Originally Posted by GnarlyCharlie View Post

Have we determined that async compute is the only component of DX12? I thought there was much more to DX12 than just async compute, but it seems like every thread about DX12 boils down to just the implementation of async compute.

Probably because so far this is the only thing that has made a noticeable difference with the new API. Only DX12 games with async have seen some improvement in performance, at least on AMD cards. DX12 games without async, like Rise of the Tomb Raider, gain nothing from DX12; performance is even a bit lower in DX12 than in DX11. After all this hype, people want to see actual improvements from DX12, and so far only async has delivered any serious gains, so people see it as the main feature atm.
post #15 of 24
Quote:
Originally Posted by Krzych04650 View Post

Probably because so far this is the only thing that has made a noticeable difference with the new API. Only DX12 games with async have seen some improvement in performance, at least on AMD cards. DX12 games without async, like Rise of the Tomb Raider, gain nothing from DX12; performance is even a bit lower in DX12 than in DX11. After all this hype, people want to see actual improvements from DX12, and so far only async has delivered any serious gains, so people see it as the main feature atm.

Then it only shows any real improvement on AMD.

But then we could have predicted this with DX12, actually. Most games were below the draw-call limit and hence GPU limited, so reducing the cost and overhead of those draw calls made no difference at all. Even games that are CPU limited regularly aren't limited by DirectX draw-call overhead but by their own game logic, so the set of games that actually benefit is much smaller than perhaps people were led to believe. We have seen draw-call-limited games, but they aren't all that common; it's mostly GPU-limited games and occasionally poorly written CPU-limited games where the game logic dominates (Arma 3, Project Cars, etc.).
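The bottleneck argument above can be sketched as a toy frame-time model (illustrative milliseconds only): a frame takes as long as its slower side, so cutting draw-call overhead only moves the needle when the API portion of the CPU side is the actual limit.

```python
# Toy frame-time model. A frame is limited by the slower of the CPU and
# GPU sides; the CPU side splits into game logic and API/draw-call
# overhead. All times are in ms and purely illustrative.

def frame_ms(game_logic_ms, draw_call_ms, gpu_ms):
    cpu_ms = game_logic_ms + draw_call_ms
    return max(cpu_ms, gpu_ms)

# GPU-limited game: halving draw-call cost changes nothing.
print(frame_ms(4, 4, 16), frame_ms(4, 2, 16))    # 16 16

# Logic-heavy CPU-limited game (the Arma 3 / Project Cars case):
# barely moves, because game logic dominates the CPU side.
print(frame_ms(14, 4, 12), frame_ms(14, 2, 12))  # 18 16

# Truly draw-call-limited game: the case where DX12 actually helps.
print(frame_ms(4, 12, 10), frame_ms(4, 3, 10))   # 16 10
```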
post #16 of 24
Quote:
Originally Posted by BrightCandle View Post

We all know Nvidia shows vastly lower driver overhead, which benefits some games; their cards also seem to need less driver work to perform well on day 1, and their performance in older games is more consistently good. AMD has the better raw specs, more compute and more bandwidth, yet for reasons we often don't understand their cards launch behind Nvidia's, and then a month later a driver release evens things up again.

Is it really a month nowadays? With Doom, AMD released drivers on day 1, and when there was an issue it took one business day for a driver fix to release. AMD driver support has been in overdrive since early-to-mid 2015. Speaking of Doom, AMD still only runs OpenGL 4.3 while Nvidia runs 4.5. Still waiting on Vulkan support to drop any day now.
Quote:
So really, DX12 performance is mostly something AMD needs to make its architecture shine, and that's why they gain so considerably from it. Nvidia, not needing it and already getting good utilisation from their GPUs, finds the parallel work a marginal disadvantage, something we see a lot in parallel computation: it is less efficient than a good serial implementation.

No, DX12 isn't an AMD thing. As usual, AMD pushed the market forward while others would have loved to keep PC gaming stagnant. If it's an "AMD thing", then you can take that champion belt away from Intel and award it to AMD, since they have been innovating and pushing a lot of our standard tech forward for many years. DX11 was a very limited API, and MS tried to update it and add some sort of multi-core support. Developers used hacks to help, and only the best of the best could utilize many cores, but even then the DX11 draw-call limit was an issue.

DX12 removes those issues AND allows parallel workloads. Nvidia's architecture has benefited from DX11's serial-like workloads. Nvidia is still living out its DX11 low-overhead 1080p supremacy; while not winning in every case, Nvidia still had good DX11 drivers. That's not to say AMD didn't work hard to stay competitive and actually beat Nvidia in many cases. You can think of Nvidia's architecture as "old" and AMD's architecture back in 2012 as "future proof", and that really shows in the great improvements the 7970 and 290X have seen over the past 3-4 years. Nvidia's latest and greatest architecture is nice thanks to a lot of different factors, but people writing off Nvidia's DX12 problem is laughable. Nvidia has plenty of money to solve the issue, but why should they if gamers continue to defend their every move and give them a way out.

Quote:
DX12 is an AMD thing; it's a big boost to their architecture. It's not an Nvidia problem, it's a solution to AMD's problem.

Ok. Nvidia's answer to DX12 is "brute force", as I've explained in my article. Seeing a Fury [non-X] lose to a highly clocked stock GTX 1080 by only 2.18% is hilarious, but as they continue to say, they have everything under control; just trust them.
If you want to defend Nvidia and say that DX12 isn't an "Nvidia problem", then have fun throttling, or paying premium prices to keep the thermals in check. There are many ways Nvidia could tackle DX12/Vulkan with all of the money they have for R&D. DX12 isn't new, either. Nvidia DX12 ads are all over the place, and we were expecting this at least a year and a half before release. Maybe Nvidia wasn't prepared for AMD's performance gains in DX12.


Quote:
Originally Posted by BrightCandle View Post

Then it only shows any real improvement on AMD.

But then we could have predicted this with DX12, actually. Most games were below the draw-call limit and hence GPU limited, so reducing the cost and overhead of those draw calls made no difference at all. Even games that are CPU limited regularly aren't limited by DirectX draw-call overhead but by their own game logic, so the set of games that actually benefit is much smaller than perhaps people were led to believe. We have seen draw-call-limited games, but they aren't all that common; it's mostly GPU-limited games and occasionally poorly written CPU-limited games where the game logic dominates (Arma 3, Project Cars, etc.).

That's a good point as well. Poor logic will keep a game below the draw-call threshold. Poor coding can definitely diminish performance, and when you start adding things like black boxes [GameWorks] it gets worse, adding a route that favours specific GPUs or a specifically named architecture. Now that game development tools and engines like Unreal and CryEngine 5 support Vulkan & DX12, hopefully we will see better games. That still won't stop terrible coders from butchering things, but hopefully there will be more well-coded games than terribly coded titles.
Edited by Kana-Maru - 5/31/16 at 10:31pm
    
CPU: Xeon 5660 @ 4.8GHz [highest OC 5.4GHz] · Motherboard: ASUS Sabertooth X58 · Graphics: AMD Fury X · RAM: 24GB 1600MHz triple channel
Storage: 4x Seagate Barracuda 7200 1TB (RAID 0, B/C), 2x 128GB SSD (RAID, A), 256GB SSD
Cooling: Antec Kuhler H2O 620 [pull] · OS: Windows 10 Professional · Monitors: dual 24-inch · Power: EVGA SuperNOVA G2 1300W x2
Other: Delta FFB1212EH-F00 fan 4,000 rpm, 4x Scythe Gentle Typhoon D1225C12BBAP-31 fan 54...
    
post #17 of 24
It is actually a pretty big problem. Pascal is basically trying to brute-force its way through this. I don't think these GPUs will age well. I've written elsewhere about this one:
http://www.overclock.net/t/1601496/is-the-founders-edition-gtx-1080-a-terrible-value-for-most-people-arguably-worse-than-the-gtx-680-and-gtx-980/0_100#post_25207585

Even without DX12, consider the following:
  • The 7970 has aged a lot better than the GTX 680
  • The 290X has aged a lot better than the large Keplers
  • The Hawaii cores have made gains relative to the GTX 980/970, and the Fury X relative to the 980 Ti/Titan X

The big reason why DX12 is not doing so well is because of the lack of a hardware scheduler. Nvidia used to have one in Fermi, but discarded it in Kepler and did not add one in Maxwell or Pascal. By contrast, AMD has kept one in the GCN GPUs.

Over the next year or two, I expect the performance of Maxwell to follow a similar trajectory to Kepler's against GCN.

This is the front end controller of AMD's Hawaii and Fiji GPUs:


The hardware scheduler, or Command Processor, combined with the ACEs, is the big reason the AMD GCN GPUs can scale well with DX12. Nvidia's strategy is more power-efficient for DX11 and saves die space (allowing for more of everything else), but it doesn't work as well in the DX12 world.
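The scheduling difference described above can be sketched as a toy timeline (invented durations, not real measurements): a front end that can interleave compute into the idle bubbles of the graphics queue finishes the frame sooner than one that has to run the queues back-to-back.

```python
# Toy async-compute timeline. A frame's graphics work contains idle
# "bubbles"; compute work can either run after graphics (serialized)
# or fill the bubbles (interleaved). Durations (ms) are invented.

def serialized(graphics_busy, graphics_idle, compute):
    """Graphics timeline first, compute strictly afterwards."""
    return graphics_busy + graphics_idle + compute

def interleaved(graphics_busy, graphics_idle, compute):
    """Compute fills the bubbles; only the overflow extends the frame."""
    overflow = max(0, compute - graphics_idle)
    return graphics_busy + graphics_idle + overflow

# 10 ms of graphics with 3 ms of bubbles, plus 4 ms of compute:
print(serialized(10, 3, 4))   # 17
print(interleaved(10, 3, 4))  # 14
```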

This table from Ext3h sums it up:



We will not see a truly parallel Nvidia GPU until Volta, which may not be until 2018, perhaps later for the "large" die.

I'm thinking that we should wait until Vega comes out to see how this goes. It could be that AMD has messed up, but I think there's also a chance they may win this round in a way not seen since Cypress. The real battle will be at 4K, max details, with async enabled, on Vega vs the large Pascal, IMO.
Quote:
Originally Posted by Randomdude View Post

The 680 was the best card when it came out as well :)

Although it was considered inferior, the 7970 has proven to be a vastly better card.

Actually, now that Crossfire frame pacing is working, I'd argue that 7970 CF will also run away from GTX 680 SLI.
Edited by CrazyElf - 5/31/16 at 5:27pm
Trooper Typhoon (20 items)
CPU: 5960X · Motherboard: X99A Godlike · Graphics: 2x MSI 1080 Ti Lightning
RAM: G.Skill Trident Z 32GB · Storage: Samsung 850 Pro, Samsung SM843T 960GB, Western Digital Caviar Black 2TB, Samsung SV843 960GB · Optical: LG WH14NS40
Cooling: Cryorig R1 Ultimate, 9x Gentle Typhoon 1850rpm case fans · OS: Windows 7 Pro x64 · Monitor: LG 27UD68 · Keyboard: Ducky Legend with Vortex PBT doubleshot backlit... · Power: EVGA 1300W G2
Case: Cooler Master Storm Trooper · Mouse: Logitech G502 Proteus · Audio: Asus Xonar Essence STX · Other: Lamptron Fanatic fan controller
post #18 of 24
Quote:
Originally Posted by CrazyElf View Post

It is actually a pretty big problem. Pascal is basically trying to brute-force its way through this. I don't think these GPUs will age well. I've written elsewhere about this one:
http://www.overclock.net/t/1601496/is-the-founders-edition-gtx-1080-a-terrible-value-for-most-people-arguably-worse-than-the-gtx-680-and-gtx-980/0_100#post_25207585

Even without DX12, consider the following:
  • The 7970 has aged a lot better than the GTX 680
  • The 290X has aged a lot better than the large Keplers
  • The Hawaii cores have made gains relative to the GTX 980/970, and the Fury X relative to the 980 Ti/Titan X

The big reason why DX12 is not doing so well is because of the lack of a hardware scheduler. Nvidia used to have one in Fermi, but discarded it in Kepler and did not add one in Maxwell or Pascal. By contrast, AMD has kept one in the GCN GPUs.

Over the next year or two, I expect the performance of Maxwell to follow a similar trajectory to Kepler's against GCN.

This is the front end controller of AMD's Hawaii and Fiji GPUs:

The hardware scheduler, or Command Processor, combined with the ACEs, is the big reason the AMD GCN GPUs can scale well with DX12. Nvidia's strategy is more power-efficient for DX11 and saves die space (allowing for more of everything else), but it doesn't work as well in the DX12 world.

This table from Ext3h sums it up:

We will not see a truly parallel Nvidia GPU until Volta, which may not be until 2018, perhaps later for the "large" die.

I'm thinking that we should wait until Vega comes out to see how this goes. It could be that AMD has messed up, but I think there's also a chance they may win this round in a way not seen since Cypress. The real battle will be at 4K, max details, with async enabled, on Vega vs the large Pascal, IMO.
Although it was considered inferior, the 7970 has proven to be a vastly better card.

Actually, now that Crossfire frame pacing is working, I'd argue that 7970 CF will also run away from GTX 680 SLI.


I read your topic. Good write-up. There were a few things I pointed out in my head:
-The Fury X coil whine was actually from the Cooler Master pump
-The GTX 780 Ti was $699, not $650, by the way. So $799 and up in most cases.
-A lot of the GTX 980 Ti aftermarket cards were "well above" $699 as well.

Anyways, 40C is pretty bad for the throttling to start. I'm also finding it hilarious that more people aren't talking about the heat issues. If the card is hitting the mid 80s on an open-air test bench, you can bet you'll be pushing close to 90C with a closed case and other potentially overclocked components. This is where people "attempt" to call me biased for AMD: when the AMD 290X had memes all over the place about the REFERENCE temps being hot, that was a cool thing to do at the time, right? I was running GTX 670s at the time, but I didn't care about the reference temps, since third-party cards are always the better option [or water cooling]. Now that the Nvidia GTX 1080 is actually running hotter than the 290X, and throttles so badly that some have stated it drops below the base clock, you don't hear a word about temps. I guess temps weren't an issue after the Fury X AIO cooler, or the fact that the GTX 980 Ti wasn't the coolest card on the block either. I expected to see "Cool as a cucumber" memes from the same Nvidia fans that had no issues complaining about the 290X heat. Then again, that would mean they would be "fair", which is something we are lacking.

Nvidia has the funds to improve their architecture, but as I basically stated above, they know they can get away with it. The fans will allow it to happen, and Nvidia knows this. Why try? Why improve? Nvidia is going by their plans and will eventually get around to it, just like they said with those drivers Maxwell users are STILL waiting for [oh wait, you need the drivers & the app... smh]. You were also spot on about the GTX Titan price point. Nvidia knows they can price gouge thanks to that $1000+ success. You can't forget how some people have crazy Titan setups, triple and four-way SLI. However, once again, gamers allowed this to happen. When you love a brand so much that you are willing to spend more than one thousand dollars on a GPU that's going to be outdated [or already is outdated] by a cheaper card... there's only one word for that person > "insert here".

The 290X gave Titan performance for half the price of a Titan, and the Fury X gives Titan X performance for nearly half the price. I suppose Nvidia has the "Ti" just for those who need something in between the Titan and the x80. I can't wait to see what the GTX 1080 "Ti" price will be. With AMD getting ready to hit the mainstream market and more DX12/Vulkan games releasing, big Pascal and Vega should hopefully be priced below $700. That's only if AMD can force Nvidia to lower the prices of some of their cards.

I love competition, and I love the hype when new cards are revealed, but what I don't like is paper launches and shady marketing. Nvidia and AMD have two distinctly different strategies with the same goal in mind.
    
post #19 of 24
Quote:
Originally Posted by Kana-Maru View Post

I read your topic. Good write-up. There were a few things I pointed out in my head:
-The Fury X coil whine was actually from the Cooler Master pump
-The GTX 780 Ti was $699, not $650, by the way. So $799 and up in most cases.
-A lot of the GTX 980 Ti aftermarket cards were "well above" $699 as well.

Anyways, 40C is pretty bad for the throttling to start. I'm also finding it hilarious that more people aren't talking about the heat issues. If the card is hitting the mid 80s on an open-air test bench, you can bet you'll be pushing close to 90C with a closed case and other potentially overclocked components. This is where people "attempt" to call me biased for AMD: when the AMD 290X had memes all over the place about the REFERENCE temps being hot, that was a cool thing to do at the time, right? I was running GTX 670s at the time, but I didn't care about the reference temps, since third-party cards are always the better option [or water cooling]. Now that the Nvidia GTX 1080 is actually running hotter than the 290X, and throttles so badly that some have stated it drops below the base clock, you don't hear a word about temps. I guess temps weren't an issue after the Fury X AIO cooler, or the fact that the GTX 980 Ti wasn't the coolest card on the block either. I expected to see "Cool as a cucumber" memes from the same Nvidia fans that had no issues complaining about the 290X heat. Then again, that would mean they would be "fair", which is something we are lacking.

Nvidia has the funds to improve their architecture, but as I basically stated above, they know they can get away with it. The fans will allow it to happen, and Nvidia knows this. Why try? Why improve? Nvidia is going by their plans and will eventually get around to it, just like they said with those drivers Maxwell users are STILL waiting for [oh wait, you need the drivers & the app... smh]. You were also spot on about the GTX Titan price point. Nvidia knows they can price gouge thanks to that $1000+ success. You can't forget how some people have crazy Titan setups, triple and four-way SLI. However, once again, gamers allowed this to happen. When you love a brand so much that you are willing to spend more than one thousand dollars on a GPU that's going to be outdated [or already is outdated] by a cheaper card... there's only one word for that person > "insert here".

The 290X gave Titan performance for half the price of a Titan, and the Fury X gives Titan X performance for nearly half the price. I suppose Nvidia has the "Ti" just for those who need something in between the Titan and the x80. I can't wait to see what the GTX 1080 "Ti" price will be. With AMD getting ready to hit the mainstream market and more DX12/Vulkan games releasing, big Pascal and Vega should hopefully be priced below $700. That's only if AMD can force Nvidia to lower the prices of some of their cards.

I love competition, and I love the hype when new cards are revealed, but what I don't like is paper launches and shady marketing. Nvidia and AMD have two distinctly different strategies with the same goal in mind.

The strategy is simple, and I remember someone on OCN who was a former Intel employee stating something like this:

"Intel's greatest competitor is its own customer install base. They have to show enough improvement, in percentages or speed, to get that same install base to upgrade."

That is why we see Nvidia showing and comparing their new-generation cards to their own previous generations. Their GPU focus is currently on PC gamers, and their GPUs are doing well there. AMD's GPUs are pretty much the jack of all trades: they are aiming to grow the TAM and are including future technologies in the cards, which is why their cards age so well, either matching or exceeding Nvidia GPUs a generation newer through driver updates. Nvidia, of course, wants you to upgrade. I don't blame Nvidia, and I understand their strategy: how else are you going to get people to buy new GPUs if 70% to 80% of the current discrete GPU market already owns an Nvidia GPU? It may also explain why Nvidia GPUs do not age well with driver updates. I remember when the Titan came out they explained it was not primarily a gaming GPU while the 980 Ti was, but now they are comparing their new gen's gaming performance to the Titan.
Edited by speedyeggtart - 6/5/16 at 9:58pm
post #20 of 24
Quote:
Originally Posted by speedyeggtart View Post

The strategy is simple, and I remember someone on OCN who was a former Intel employee stating something like this:

"Intel's greatest competitor is its own customer install base. They have to show enough improvement, in percentages or speed, to get that same install base to upgrade."

Obviously Intel's greatest competitor is its own customer base, after they used sleazy tactics and contracts in the early-to-mid 2000s to price AMD out. Intel used their big money to influence several key companies to avoid AMD or face a penalty; in other words, buy 90-95% Intel or lose business [contracts with Intel]. Intel also had a powerful marketing campaign going at the time. What made matters worse is that Intel still had a lot of restrictions for companies even after they got 90+% of the business. That wasn't enough, so Intel added more restrictions for the remaining 5-10%. The contracts were very close to bribery. Very unhealthy in the long run, and now, as you can see, we are paying up to $1700+ for an Intel flagship CPU. Prices have increased across the board.

Quote:
That is why we see Nvidia comparing their new generation of cards to their own previous generations. Their GPU focus is currently on PC gamers, and their GPUs do well there. AMD's GPUs are more of a jack of all trades - AMD is aiming to grow the TAM and builds future-oriented technologies into its cards, which is why they age so well, often matching or exceeding Nvidia GPUs a generation newer through driver updates.

I wouldn't compare AMD GPUs to a jack of all trades. AMD GPUs have always been competitive, but Nvidia has always had the better marketing and more money to spread around for advertising. Even when AMD has had better GPUs across different price points, Nvidia's money, marketing, and occasional lies or stretching of the truth prevailed. Nvidia is simply a strong brand name as well. AMD has to face both Nvidia's and Intel's tactics - two very large companies whose main goal is to maximize profit no matter what. That's the purpose of a company, but you don't have to find slick, underhanded ways to increase market share. Yet those companies lack innovation: when it becomes solely about the cash, it appears neither cares about pushing the technology further. Thankfully this isn't the case with AMD.

AMD's GPU market share has been growing at a decent pace, and they have reclaimed some of the market. Hopefully Polaris helps them claw back to at least 30%.

Quote:
Nvidia, of course, wants you to upgrade. I don't blame Nvidia and I understand their strategy - how else are you going to get people to buy new GPUs when 70% to 80% of the current discrete GPU market already owns an Nvidia GPU? It may also explain why Nvidia GPUs do not age well with driver updates. I remember when the Titan came out they explained it was not primarily a gaming GPU while the 980 Ti was - but now they are comparing their new gen's gaming performance to the Titan.

The same thing can be said about Intel. Obviously Intel wants people to upgrade at some point, and some people will, but Intel has to compete with its own previous releases since the actual performance generally doesn't increase enough to warrant a $500-$1000+ upgrade. So yeah, I can't blame Nvidia either, but they have a large market share and tons of cash. You would think that since gamers got them to where they are now, they would focus on keeping GPU prices decent to hold market share... nope, they won't. Instead we get things like paper launches, more lies, and a Founders Edition at a premium price.

I felt that the Titan was just a way for Nvidia to see how much money they could make with an overpriced GPU. The AIBs have gotten away with high-priced GPUs, so why not let Nvidia try it, right? Obviously the Titan wasn't going to be future-proof or anything like that, but people still spent the cash, and now you can expect a Titan regularly. Like you said, Nvidia has to find ways to get people to upgrade, even if it means neglecting their own GPUs. The issue is that some people pay good money for those GPUs and SLI setups. When Nvidia felt the need to put a "time limit" on $500+ purchases, I knew it was time to jump ship. I watched the 670, 680, Titan, and GTX 780 become obsolete in no time, with cheaper alternatives available. That said a lot to me when I decided to upgrade.
    
CPUMotherboardGraphicsRAM
Xeon 5660 @ 4.8Ghz [Highest OC 5.4Ghz] ASUS Sabertooth X58 AMD Fury X 24GB - 1600Mhz Triple Channel 
Hard DriveHard DriveHard DriveHard Drive
Seagate Barracuda 7200 1TB RAID 0 - B Seagate Barracuda 7200 1TB RAID 0 - B Seagate Barracuda 7200 1TB RAID 0 - C Seagate Barracuda 7200 1TB RAID 0 - C 
Hard DriveHard DriveHard DriveCooling
SSD 128GB RAID - A SSD 128GB RAID - A SSD 256GB  Antec Kuhler H2O 620 [Pull] 
OSMonitorPowerOther
Windows 10 Professional  Dual 24-inch Monitors EVGA SuperNOVA G2 1300W x2 Delta FFB1212EH-F00 Fan 4,000rpm  
Other
x4 Scythe Gentle Typhoon D1225C12BBAP-31 Fan 54... 