Overclock.net - An Overclocking Community
Thread: [YT] AMD R9 390 vs GTX 970 | Games of 2016 Showdown
  Topic Review (Newest First)
12-31-2016 11:49 AM
mtcn77 One thing I left out: the memory controller is more predictable than the other two. It basically runs until either you see squares, in which case it cannot keep up with the memory modules because the voltage is too low, or pinstripes appear, in which case it is too hot. The pinstripes are the only errors ATT will defer to you, and the fix is easy: don't overvolt in the first place. I can overvolt up to 40 mV without them appearing (I've been trying for 1375 MHz on this chip, but can't get there with 1250 MHz rated modules). Maybe if I knew how much GDDR5 can be overvolted without permanent damage, I could reach it at 1.9 V.
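A minimal Python sketch of that triage heuristic, purely illustrative (the pattern names and coverage threshold are assumptions; ATT does not expose anything like this programmatically):
[CODE]
# Hypothetical triage of memory-related screen artifacts, following the
# heuristic above: squares mean the memory voltage cannot keep up with
# the modules; pinstripes mean the memory is running too hot, widening
# toward full-screen as voltage drops.
def diagnose_artifact(pattern: str, screen_coverage: float) -> str:
    if pattern == "squares":
        return "memory voltage too low: raise it or lower the memory clock"
    if pattern == "pinstripes":
        if screen_coverage > 0.9:   # whole screen striped
            return "voltage far too low: revert to stock immediately"
        return "memory too hot (or slightly undervolted): cool it or back off"
    return "no memory-controller signature: suspect core instability instead"

print(diagnose_artifact("pinstripes", 0.2))
[/CODE]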
12-31-2016 11:34 AM
mtcn77 The idea is that AMD could integrate it into the silicon; it shouldn't be that hard if some software written 20 years ago can do it. Core and memory are the two variables it can monitor correctly, if nothing else, and you have to grant that two is better than zero. AMD could get it to monitor all three if the idea takes off.
This is one of the rare moments where I have a strong feeling I'm on the money with a concept. What is there to lose? Designing 500-watt-stable circuits ought to be more expensive than the compensation this idea brings.
12-31-2016 11:24 AM
mtcn77 It won't recognise memory controller errors, I grant you that, but you can see them on the screen. There is no way to miss them: they cross the screen longitudinally in a pinstripe pattern. The wider the stripes, the hotter the card is running, or the lower the voltage (in which case the whole screen is pinstripes).
12-31-2016 11:19 AM
Blameless I thought you were implying it had an automatic way to detect errors at different power states and compensate, but you seem to be referring to the artifact scanner. ATI Tray Tools artifact scanner isn't that good. There are plenty of situations where it will say things are fine and the card can still be producing errors in other scenarios. There is a reason I stopped using the program.
12-31-2016 10:48 AM
mtcn77 I recognise the scepticism towards such an idea, and it is totally justified, but the card doesn't have to pull unsafe amounts of wattage to find out the limits of the silicon.
Cards from the HD 6900 series up come with PowerPlay, the VRM protection measure that throttles the card in case of a VRM power surge, which could otherwise be a showstopper at any minute. Depending on how the card is monitored, ATT may or may not be regarded as a power virus, and near-threshold testing cannot proceed because the frequencies become nonlinear. The gambit is to turn VRM protection off, which could blow the card in an instant should you run an unsafe application; the Chrome Flash plugin did exactly that in my case.
That said, it is 100% safer to run the card over extended periods at half its total power demand, which is directly linked to its potential threshold. Six billion transistors at 1.2 V versus 1.0 V make for a big chunk of power: theoretically about a 40% difference, I believe, but coupled with leakage it is effectively cubed, around 70%, since the cooling stays constant. It doesn't even take another 0.1 V to push the total to 500 watts.
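As a sanity check on that arithmetic: dynamic switching power scales roughly with the square of voltage (P ~ C*V^2*f), so 1.2 V versus 1.0 V is about 44%, and folding in voltage-dependent leakage pushes the effective exponent toward a cube, about 73%, close to the 40%/70% figures above. A quick Python check (the exponents are idealisations, not measurements):
[CODE]
# Back-of-the-envelope voltage/power scaling for the figures quoted above.
v_low, v_high = 1.0, 1.2

dynamic = (v_high / v_low) ** 2 - 1   # square law: dynamic power ~ C*V^2*f
leaky   = (v_high / v_low) ** 3 - 1   # cube law once leakage scales with V too

print(f"square-law increase: {dynamic:.0%}")  # 44%, the 'about 40%'
print(f"cube-law increase:   {leaky:.0%}")    # 73%, the 'about 70%'
[/CODE]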
You are inherently safer, with or without PowerTune, if the card is kept in check by a more consistent measure than the one in place - temperature versus cooling - which is already intolerable.
The procedure is both conditionally unsafe and risky, which is why it is easier to integrate it at the factory than to leave it in the hands of the client, but once you know the limits of the silicon you have, the stock heatsink needs to expel much less heat for stable operation. It is crazy to think voltages only safe under subzero cooling could be the default just to linearise unit variation.
12-31-2016 10:18 AM
mtcn77 No, no! That is the thing: the lower you set the voltage, the more you can get away with. Trust me, run ATT once and check that it reports no artifacts at the 'temperatures' your card reaches at equilibrium - that is the key variable; all the test results are invalid if ATT runs at 80°C but, say, Crysis runs at 95°C. Of course, you trade off the voltage level against the temperature, but basically all AMD chips follow the same trend in my experience, both CPUs and GPUs.
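The methodology reduces to: a clean artifact scan only counts at the temperature your real workload reaches. A minimal sketch of that discipline in Python (both callbacks are hypothetical stand-ins for whatever sensor reader and artifact scanner you actually use):
[CODE]
import time

TARGET_TEMP_C = 95   # equilibrium temperature of the real workload, e.g. Crysis
TOLERANCE_C = 2

def wait_for_equilibrium(read_gpu_temp, timeout_s=600):
    # Keep the card loaded until it settles at the target temperature.
    start = time.time()
    while abs(read_gpu_temp() - TARGET_TEMP_C) > TOLERANCE_C:
        if time.time() - start > timeout_s:
            raise TimeoutError("card never reached the target temperature")
        time.sleep(5)

def scan_is_valid(read_gpu_temp, artifacts_detected):
    wait_for_equilibrium(read_gpu_temp)
    # A clean scan at 80C says nothing about behaviour at 95C,
    # so only a scan taken AT the target temperature counts.
    return not artifacts_detected()
[/CODE]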
12-31-2016 10:06 AM
Blameless There are issues with dynamic power states on both AMD and NVIDIA parts to this day. In general, I'd prefer voltages to be set for absolute worst case scenarios and the cooling to match, rather than trying to squeeze every iota of performance out of a given power envelope, and risk slipping into instability.

I ran the PT1 firmware on my reference Hawaii parts because that solved essentially all performance/stability issues, then I set a fan profile to handle the heat.

With my non-reference parts, I've hand tweaked the firmware for each sample (not model, but each individual card), to ensure there are zero hardware related crashes and zero EDC errors with the clocks I'm using, at the lowest voltages and quietest fan curves practical for worst case loads.
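That per-sample tuning is, in effect, a search for the lowest voltage giving zero errors at a fixed clock. A minimal bisection sketch in Python, assuming a user-supplied stress-test callback (this is the shape of the procedure, not Blameless's actual workflow):
[CODE]
# Bisect for the lowest stable voltage at a fixed clock. is_stable(mv)
# is a user-supplied callback that runs a worst-case load at mv millivolts
# and reports whether it finished with zero crashes and zero EDC errors.
def find_min_stable_mv(is_stable, lo_mv=950, hi_mv=1250, step=5):
    assert is_stable(hi_mv), "upper bound must pass before searching"
    while hi_mv - lo_mv > step:
        mid = (lo_mv + hi_mv) // 2
        if is_stable(mid):
            hi_mv = mid   # stable: try lower still
        else:
            lo_mv = mid   # unstable: raise the floor
    return hi_mv          # lowest voltage that passed the test
[/CODE]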

The problem with the reference Hawaii parts was the cooler. Sure, they could have spent more time and effort, probably failing to save enough power with more complex DPM states and better error checking, but they could have spent three more dollars a card to put an adequate cooler on it and achieved the same practical result.

I'm not familiar with the ATi Tray Tools feature you mention, as I haven't used ATi Tray Tools in years. I'm not even sure it supports recent GPUs.
12-31-2016 09:42 AM
mtcn77
Quote:
Originally Posted by Blameless View Post

Occlusion culling has been a thing since well before Crysis 2. If you don't have line of sight to it, many game engines have provisions to not render it.
The 290/290X reference cooler was of virtually identical design (talking about the memory/VRM plate, the vapor chamber, and the fins/fin area, not the shroud) to the 7950/7970 reference cooler, but with a smaller vent.

Basically, they took a cooler that was sufficient for Tahiti, slapped it on Hawaii, and tried to figure out how to balance noise/temperature with hardware that was capable of producing 25-40% more heat.

290/290X performance looks a lot better if you have things set up to never throttle.
The issues with stock cooling have nothing to do with hardware, imo. What AMD cannot do, but ATi Tray Tools can in a simple package, is run diagnostics on current operations to see whether there are artifacts in the output or not. If you cannot integrate that into the silicon, you cannot undervolt on the fly. It is that simple. The cards are always using excess power.
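What that amounts to is a closed-loop controller: scan the output for errors continuously, step the voltage down until errors appear, then back off. A toy Python sketch (all three callbacks are hypothetical; no shipping driver or firmware exposes this):
[CODE]
# Toy on-the-fly undervolter in the spirit of the post above.
def undervolt_loop(get_errors, set_voltage_mv, current_mv, floor_mv=900):
    while current_mv > floor_mv:
        set_voltage_mv(current_mv - 5)   # probe 5 mV lower
        if get_errors():                 # any artifact or EDC error seen?
            set_voltage_mv(current_mv)   # revert to the last good point
            return current_mv            # converged on the minimum
        current_mv -= 5
    return current_mv
[/CODE]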
12-31-2016 09:34 AM
Blameless
Quote:
Originally Posted by budgetgamer120 View Post

Underneath, the water is tessellated but doesn't affect performance. Makes sense.

Occlusion culling has been a thing since well before Crysis 2. If you don't have line of sight to it, many game engines have provisions to not render it.
Quote:
Originally Posted by tpi2007 View Post

That was because of the crappy reference cooler and the fact that AMD didn't give AIBs enough time to come up with their own customized cards at launch, so for the first month or more all you had were reference cards with reference coolers.

The 290X got so loud and hot that it had two modes, the default one and the Uber mode. It only beat the Titan in Uber mode (1 GHz clock speed). With a proper cooler it had no problem hitting 1 GHz all the time, and thus the default/Uber mode dichotomy became irrelevant. Nobody talks about it now, and the R9 390X doesn't have an Uber mode because there were AIB cards from the beginning.

The 290 also had a firmware tweak, if I'm not mistaken, to get better consistency, but that heatsink didn't allow for miracles and it was loud and hot.

The 290/290X reference cooler was of virtually identical design (talking about the memory/VRM plate, the vapor chamber, and the fins/fin area, not the shroud) to the 7950/7970 reference cooler, but with a smaller vent.

Basically, they took a cooler that was sufficient for Tahiti, slapped it on Hawaii, and tried to figure out how to balance noise/temperature with hardware that was capable of producing 25-40% more heat.

290/290X performance looks a lot better if you have things set up to never throttle.
12-30-2016 03:27 PM
caswow lol, driver problems. They just had an abysmal ref cooler, that's it.