[YT] AMD R9 390 vs GTX 970 | Games of 2016 Showdown

post #331 of 341, 12-30-2016, 02:17 PM
Dimaggio1103 (Engineer of Malcontent)
Join Date: Feb 2011
Location: The Mohave
Posts: 5,765
Rep: 256 (Unique: 200)
Quote:
Originally Posted by Asisvenia View Post

I've read some exaggerated comments about the R9 290, but have some people forgotten that these R9 290s had real problems with their core clocks? The 290 would often downclock itself to 750-800 MHz during games, because its Hawaii chip was one of the most unoptimized, inefficient chips ever made. It consumes far more power than AMD's official figures suggest.

Had a 290 at launch and never had such issues. AMD sucked for a long time driver-wise but has been solid for the past couple of years, more so of late. I went and got an Nvidia card before finally coming back to AMD for the 480 I have. AMD still has glaring issues, like allowing an API to be pushed without its own products supporting it (looking at you, ReLive). Other than those few small issues I'm impressed, as they have been on point with driver updates that lead to jumps in FPS across the board. AMD and Nvidia are both great, but I do think Nvidia might be starting to slip, and AMD is trying to capitalize on that.

Ryzen 1700 3.8Ghz
32GB TridentZ Royals (Samsung B-die 3200 cas16)
EVGA 1070ti 200+Core / 300+Mem
Samsung 980 Pro NVMe 500GB
EVGA 850w Gold
NZXT H500
DeepCool Castle 240


post #332 of 341, 12-30-2016, 03:27 PM
caswow (Linux Lobbyist)
Join Date: Oct 2013
Posts: 451
Rep: 47 (Unique: 24)
lol, driver problems. They just had an abysmal reference cooler; that's it.
post #333 of 341, 12-31-2016, 09:34 AM
Blameless (Iconoclast)
Join Date: Feb 2008
Posts: 30,140
Rep: 3140 (Unique: 1873)
Quote:
Originally Posted by budgetgamer120 View Post

The water underneath is tessellated but doesn't affect performance. Makes sense.

Occlusion culling has been a thing since well before Crysis 2: many game engines have provisions not to render what the camera doesn't have line of sight to.
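
For illustration, here is a minimal sketch of line-of-sight culling, assuming spherical occluders and hypothetical helper names; real engines use hardware occlusion queries, PVS, or hierarchical-Z rather than per-object rays:

Code:
import math

def segment_hits_sphere(a, b, center, radius):
    """True if the segment a->b passes within `radius` of `center`."""
    ax, ay, az = a; bx, by, bz = b
    abx, aby, abz = bx - ax, by - ay, bz - az
    acx, acy, acz = center[0] - ax, center[1] - ay, center[2] - az
    ab_len2 = abx*abx + aby*aby + abz*abz or 1e-12   # avoid divide-by-zero
    t = max(0.0, min(1.0, (acx*abx + acy*aby + acz*abz) / ab_len2))
    closest = (ax + t*abx, ay + t*aby, az + t*abz)
    return math.dist(closest, center) <= radius

def cull_occluded(camera, objects, occluders):
    """Keep only objects with an unblocked line of sight from the camera."""
    return [o for o in objects
            if not any(segment_hits_sphere(camera, o["pos"], c, r)
                       for c, r in occluders)]

# Example: the ocean behind a building never reaches the renderer.
camera = (0.0, 2.0, 0.0)
objects = [{"name": "ocean", "pos": (0.0, 0.0, 100.0)},
           {"name": "tree",  "pos": (50.0, 0.0, 10.0)}]
occluders = [((0.0, 2.0, 50.0), 20.0)]  # a building between camera and ocean
print([o["name"] for o in cull_occluded(camera, objects, occluders)])  # ['tree']
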
Quote:
Originally Posted by tpi2007 View Post

That was because of the crappy reference cooler and the fact that AMD didn't give AIBs enough time to come up with their own customized cards at launch, so for the first month or more all you had were reference cards with reference coolers.

The 290X got so loud and hot that it had two modes, the default one and the Uber mode. It only beat the Titan in Uber mode (1 GHz clock speed). With a proper cooler it had no problem hitting 1 GHz all the time, and thus the default/Uber mode dichotomy became irrelevant. Nobody talks about it now, and the R9 390X doesn't have an Uber mode because there were AIB cards from the beginning.

The 290 also had a firmware tweak, if I'm not mistaken, to get better consistency, but that heatsink didn't allow for miracles, and it was loud and hot.

The 290/290X reference cooler was of virtually identical design (talking about the memory/VRM plate, the vapor chamber, and the fins/fin area, not the shroud) to the 7950/7970 reference cooler, but with a smaller vent.

Basically, they took a cooler that was sufficient for Tahiti, slapped it on Hawaii, and tried to figure out how to balance noise and temperature with hardware that was capable of producing 25-40% more heat.

290/290X performance looks a lot better if you have things set up to never throttle.

...rightful liberty is unobstructed action according to our will within limits drawn around us by the equal rights of others. I do not add 'within the limits of the law,' because law is often but the tyrant's will, and always so when it violates the right of an individual. -- Thomas Jefferson
post #334 of 341, 12-31-2016, 09:42 AM
mtcn77
Join Date: Nov 2009
Location: Turkiye
Posts: 6,565
Quote:
Originally Posted by Blameless View Post

Occlusion culling has been a thing since well before Crysis 2: many game engines have provisions not to render what the camera doesn't have line of sight to.
The 290/290X reference cooler was of virtually identical design (talking about the memory/VRM plate, the vapor chamber, and the fins/fin area, not the shroud) to the 7950/7970 reference cooler, but with a smaller vent.

Basically, they took a cooler that was sufficient for Tahiti, slapped it on Hawaii, and tried to figure out how to balance noise and temperature with hardware that was capable of producing 25-40% more heat.

290/290X performance looks a lot better if you have things set up to never throttle.
The issues with stock cooling have nothing to do with hardware, imo. What AMD cannot do, but ATi Tray Tools can in a simple package, is run diagnostics on current operations to check whether there are artifacts in the output or not. If you cannot integrate that into the silicon, you cannot undervolt on the fly. It is that simple. The cards are always using excess power.
post #335 of 341, 12-31-2016, 10:06 AM
Blameless (Iconoclast)
Join Date: Feb 2008
Posts: 30,140
Rep: 3140 (Unique: 1873)
There are issues with dynamic power states on both AMD and NVIDIA parts to this day. In general, I'd prefer voltages to be set for absolute worst-case scenarios and the cooling to match, rather than trying to squeeze every iota of performance out of a given power envelope and risking a slip into instability.

I ran the PT1 firmware on my reference Hawaii parts because that solved essentially all performance/stability issues, then I set a fan profile to handle the heat.
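
For reference, a custom fan profile of this kind is just piecewise-linear interpolation from GPU temperature to fan duty cycle. A minimal sketch with illustrative curve points (not Blameless's actual settings):

Code:
CURVE = [(40, 25), (60, 40), (75, 65), (85, 90), (94, 100)]  # (deg C, fan %)

def fan_duty(temp_c: float) -> float:
    """Interpolate fan speed (%) for a temperature along the curve."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]  # pin at 100% past the last point

print(fan_duty(70))  # ~56.7% at 70 deg C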

With my non-reference parts, I've hand-tweaked the firmware for each sample (not model, but each individual card) to ensure there are zero hardware-related crashes and zero EDC errors at the clocks I'm using, at the lowest voltages and quietest fan curves practical for worst-case loads.
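
That per-card tuning amounts to searching for the lowest voltage that still passes a stress test at a fixed clock. A minimal sketch of one way to do it, where is_stable() is a hypothetical stand-in for a real stress/EDC-error check and the bounds are illustrative:

Code:
def lowest_stable_voltage(is_stable, lo_mv=950, hi_mv=1250, step_mv=6):
    """Binary-search the lowest voltage (mV) in [lo, hi] passing is_stable()."""
    assert is_stable(hi_mv), "card must be stable at the upper bound"
    while hi_mv - lo_mv > step_mv:
        mid = (lo_mv + hi_mv) // 2
        if is_stable(mid):
            hi_mv = mid      # stable: try lower
        else:
            lo_mv = mid      # unstable: raise the floor
    return hi_mv

# Example with a fake card that happens to be stable from 1081 mV upward:
print(lowest_stable_voltage(lambda mv: mv >= 1081))  # 1081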

The problem with the reference Hawaii parts was the cooler. Sure, they could have spent more time and effort probably failing to save enough power with more complex DPM states and better error checking, but they could have spent three more dollars a card to put an adequate cooler on it and achieved the same practical result.

I'm not familiar with the ATi Tray Tools feature you mention, as I haven't used ATi Tray Tools in years. I'm not even sure it supports recent GPUs.

post #336 of 341, 12-31-2016, 10:18 AM
mtcn77
Join Date: Nov 2009
Location: Turkiye
Posts: 6,565
No, no! That is the thing: the lower you set the voltage, the more you can get away with. Trust me, you run ATT once and see whether it reports any artifacts at the temperature your card reaches at equilibrium (that is the key variable; all the test results are invalid if ATT runs at 80°C but, say, Crysis runs at 95°C). Of course, you trade off voltage level against temperature, but basically all AMD chips follow the same trend in my experience, both CPUs and GPUs.
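
In rough pseudocode terms, the test described here is: heat-soak the card to its equilibrium temperature, then diff repeated identical frames at the candidate voltage. A sketch under those assumptions, with render_frame() as a hypothetical stand-in for ATT's real GPU test pattern:

Code:
import time

def artifact_scan(render_frame, warmup_s=600, passes=100):
    """Warm the card to equilibrium, then diff repeated identical frames."""
    deadline = time.time() + warmup_s
    while time.time() < deadline:   # heat soak: a pass at 80 deg C proves
        render_frame()              # nothing about a game running at 95 deg C
    reference = render_frame()
    errors = 0
    for _ in range(passes):
        frame = render_frame()
        errors += sum(p != q for p, q in zip(frame, reference))
    return errors  # zero mismatches at this voltage/temperature only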
post #337 of 341, 12-31-2016, 10:48 AM
mtcn77
Join Date: Nov 2009
Location: Turkiye
Posts: 6,565
I recognise the scepticism towards such an idea, and it is totally justified, but the card doesn't have to pull unsafe amounts of wattage to find the limits of the silicon.
Cards from the HD 6900 series up come with PowerTune, the VRM protection measure that throttles the card in case of a VRM power surge that could be a showstopper at any minute. Depending on how the card is monitored, ATT may or may not be regarded as a power virus, and near-threshold testing cannot proceed because the frequencies become nonlinear. The gambit is to turn VRM protection off, which could blow the card in an instant should you run an unsafe application. (Nod to the Chrome Flash plugin, which did exactly that in my case.)
That said, it is far safer to run the card over extended periods at half its total power demand, which is directly linked to its potential threshold. Six billion transistors at 1.2 V versus 1.0 V makes for a big chunk of power: theoretically about a 40% difference, I believe, but coupled with leakage it is more like cubed, around 70%, since the cooling is constant. It doesn't even take another 0.1 V on top of that to push the card to 500 watts in total.
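
A quick check of that arithmetic, using first-order scaling only (dynamic power goes roughly with V squared, and closer to V cubed once frequency or leakage rises with voltage; these are not measured Hawaii numbers):

Code:
v_hi, v_lo = 1.2, 1.0
print(f"V^2 scaling: +{(v_hi/v_lo)**2 - 1:.0%}")  # +44%, near the ~40% quoted
print(f"V^3 scaling: +{(v_hi/v_lo)**3 - 1:.0%}")  # +73%, near the ~70% quoted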
You are inherently safer, with or without PowerTune, if the card is kept in check by a more consistent measure than the one in place (temperature versus cooling), which is already intolerable.
The procedure is both conditionally unsafe and risky, which is why it is easier for it to be integrated at the factory than left in the hands of the client, but once you know the limits of the silicon you have, the stock heatsink will need to expel much less heat for stable operation. It is crazy to think voltages only safe under subzero cooling could be the default just to linearise unit variation.
post #338 of 341, 12-31-2016, 11:19 AM
Blameless (Iconoclast)
Join Date: Feb 2008
Posts: 30,140
Rep: 3140 (Unique: 1873)
I thought you were implying it had an automatic way to detect errors at different power states and compensate, but you seem to be referring to the artifact scanner. The ATI Tray Tools artifact scanner isn't that good. There are plenty of situations where it will say things are fine while the card is still producing errors in other scenarios. There is a reason I stopped using the program.

post #339 of 341, 12-31-2016, 11:24 AM
mtcn77
Join Date: Nov 2009
Location: Turkiye
Posts: 6,565
It won't recognise memory controller errors, I give you that, but you can see them on the screen. There is no way to miss them: they cross the screen longitudinally in a pinstripe pattern. The wider the stripes, the hotter the card, or the lower the voltage (in which case the whole screen is pinstripes).
post #340 of 341, 12-31-2016, 11:34 AM
mtcn77
Join Date: Nov 2009
Location: Turkiye
Posts: 6,565
The idea is that AMD could integrate it into the silicon; it shouldn't be that hard if some software written 20 years ago can do it. Core and memory are the two variables it can monitor correctly, if nothing else. You have to grant that two is better than zero, and AMD could get it to monitor all three if the idea takes off.
This is one of the rare moments I have strong feelings that I'm on the money with a concept. What is there to lose? Designing circuits that are stable at 500 watts ought to be more expensive than the compensation this idea brings.