
[Official] Polaris Owners Club - Page 75

post #741 of 4362
Quote:
Originally Posted by JackCY View Post

Here a custom 1060 6GB is $259; a reference 480 is about the same, if ever available.
Where? In Taiwan? In the USA? Mainland EU gets none for months; even reference cards are almost nonexistent, and only some shops had them, but now not even those.
Custom cards are nonexistent, if they're even listed at retailers.
NV is so much better in that regard; the 1060 launched later but was much easier to acquire, until now, when everything (1060/480) has pretty much gone out of stock for both brands.
I do notice.

#1 If they had put 290 coolers on the 480 it would be adequate, but they've mostly used coolers from something like a 270 instead, which are not enough for 14nm and 232mm².

#2 The NITRO is loud as hell. And the rest of the custom 480s don't seem to be doing much better, and certainly aren't priced competitively at all.
Depends on support in games. In older and current titles the 1060 wins; possibly in newer or more AMD-optimized titles AMD wins. As always, but so far NV has the bigger market share and most games are optimized for NV, no matter what API.


#3 Seeing the 480 Nitro at max speed being a big power hog is disappointing when, around stock clocks, the 1060 can use up to 50W less in total system power. An overclocked 480 goes nuts with power consumption, toward 380X/280X power-hog territory.

#1
There is a fan bug for the AIB cards which is pretty easy to fix via software. Reference cards are not terrible either, and they use about the same power as the R9 270X and R9 285/380. The RX 470 on average pulls ~90W for the GPU at stock clocks, while the RX 480 pulls ~110W. With the rest of the board added in, it's ~120W for the 470 and ~150W for the 480.

If I don't care about noise, I could push 1400MHz on the stock cooler by cranking the fan up. If I want it pretty quiet, I can just bump up the power limit a bit and leave everything else as-is.

#2
The prices on NewEgg/Amazon/wherever reflect demand outstripping supply. If demand is high and supply is low, retailers jack up prices and pocket the extra profit. It is not likely that a significant portion of the extra profit is going to AMD; the largest chunk is going to the distribution warehouses and perhaps the AIB partners.

#3
50W at the wall is ~40W of actual difference due to losses in the PSU; nobody cares outside of a very few using it as a debate point. A 500W PSU (the minimum recommendation for a gaming build) is more than enough for either card.
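For anyone who wants to sanity-check that, here's the back-of-the-envelope version; the efficiency number is an assumption, roughly what a decent unit does at that load:

Code:
# Rough sketch: convert a wall-power (AC) difference into a DC-side difference.
# psu_efficiency is assumed (~0.80 for a typical unit at this load).
def dc_delta(wall_delta_w, psu_efficiency=0.80):
    return wall_delta_w * psu_efficiency

print(dc_delta(50))  # 50W at the wall -> ~40W of actual board-power difference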

#4
If you don't own one, why are you trolling the owners' thread?
Edited by KarathKasun - 8/21/16 at 6:55am
post #742 of 4362
Quote:
Originally Posted by JackCY View Post

Here a custom 1060 6GB is $259; a reference 480 is about the same, if ever available.

Where is "here?" That's a factor I often forget to consider - in Europe, the 1060 is an obvious choice as AMD cards tend to be overpriced for some reason, which also hinders availability.
Quote:
Originally Posted by JackCY View Post

Custom cards are nonexistent, if they're even listed at retailers.

There's a difference between them existing and being in stock.

Demand is extremely high; thousands of these cards are being sucked up by miners because of their exceptional compute performance. That entire industry finds nVidia hardware to be inferior to AMD - and energy is one of the main costs in mining, so saving 100W+ over the former mining heavyweight (Hawaii) is well worth it. Some of these miners are buying the cards by the dozen.
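Rough numbers on why that 100W matters to someone running cards 24/7 (the electricity price is a placeholder; adjust for your region):

Code:
# Hypothetical yearly electricity savings from drawing 100W less, running 24/7.
watts_saved = 100
hours_per_year = 24 * 365
price_per_kwh = 0.12  # assumed $/kWh; many EU rates are 2x+ this
kwh_saved = watts_saved * hours_per_year / 1000
print(f"{kwh_saved:.0f} kWh/year, about ${kwh_saved * price_per_kwh:.0f} per card per year")
# ~876 kWh/year, roughly $105 per card at $0.12/kWh - multiplied across dozens of cards.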
Quote:
Originally Posted by JackCY View Post

NV is so much better in that regard; the 1060 launched later but was much easier to acquire, until now, when everything (1060/480) has pretty much gone out of stock for both brands.
Quote:
Originally Posted by JackCY View Post

I do notice.

Except in GTA V, there is no performance difference large enough for anyone to notice. Release-day BF4 performance was a problem, it seems, but I'm seeing GTX 1060 levels of performance across the board on my system while running stock RX 480 clocks (with no throttling, thanks to undervolting and superior cooling).
Quote:
Originally Posted by JackCY View Post

If they had put 290 coolers on the 480 it would be adequate, but they've mostly used coolers from something like a 270 instead, which are not enough for 14nm and 232mm².

Yes, the reference heatsink was a little too small. AMD has a really bad habit of this, which seemed to be broken with Fiji, but they have gone back to it... Another 1/2" of metal would have helped as well... but, in the end, we're only talking about noise, not temperatures. The chip will get hot quickly no matter what; 14nm is just that way (even for Intel). Cost-cutting went one step too far. $1 more in base costs would have made the card look much better.
Quote:
Originally Posted by JackCY View Post

The NITRO is loud as hell. And the rest of the custom 480s don't seem to be doing much better, and certainly aren't priced competitively at all.

I had an RX 470 Nitro+; it was nearly completely silent unless I manually turned the fans up. The VRMs maxed at 69C under Furmark, and my GPU temps were only about 72~75C, barely hitting the target temperature.

My XFX RX 480 GTR, however, is louder. Not loud, but louder. It had some minor coil whine, but it went away after a day's worth of benchmarking.
Quote:
Originally Posted by JackCY View Post

Depends on support in games. In older and current titles the 1060 wins; possibly in newer or more AMD-optimized titles AMD wins. As always, but so far NV has the bigger market share and most games are optimized for NV, no matter what API.

This is changing rapidly in the other direction. Most big new games prefer AMD over nVidia. In three years, the GTX 1060 will barely be keeping up with the RX 480, much like the R9 290's competition, the GTX 780, can't compete against it any more: http://www.anandtech.com/bench/product/1772?vs=1718

How it looked two years ago: http://www.anandtech.com/bench/product/1068?vs=1036
Quote:
Originally Posted by JackCY View Post

Seeing the 480 Nitro at max speed being a big power hog is disappointing when, around stock clocks, the 1060 can use up to 50W less in total system power. An overclocked 480 goes nuts with power consumption, toward 380X/280X power-hog territory.

The Nitro is pushing too much voltage - as in 150mV too much, according to pretty much everyone who has tested it. Power usage screenshots (stock voltage vs. undervolted):

That's 50W with the move of a slider. My XFX RX 480 is even more efficient than that. I pull 168W over idle at 1288MHz with 2175MHz RAM in Heaven. A few watts of that will be extra CPU and RAM usage, and my power supply is only ~85% efficient at 288W, so the card is only pulling around 135~140W while clocked to match the GTX 1060 - which pulls 120W nearly on the nose... who cares about such a small difference?
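Same math, spelled out with my numbers (the CPU/RAM overhead figure is just a guess):

Code:
# Estimate card power from the wall-power delta over idle.
wall_delta_w = 168       # measured increase at the wall while running Heaven
psu_efficiency = 0.85    # ~85% efficient at ~288W total load (assumed)
cpu_ram_overhead_w = 5   # assumed extra CPU/RAM draw under load
gpu_board_w = wall_delta_w * psu_efficiency - cpu_ram_overhead_w
print(f"~{gpu_board_w:.0f}W for the card")  # lands in the 135~140W ballpark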
Quote:
Originally Posted by JackCY View Post

CF and SLI are irrelevant to me; I don't care for either. The only proper multi-chip GPU would be if they tried to reduce costs by splitting the GPU into parts, using an interposer to connect them, and getting easy scalability, but so far they just cram it all into one die.

Agreed, multi-GPU isn't worth it outside of compute. nVidia blocked using their cards alongside an AMD card as a standalone PhysX accelerator; otherwise I'd actually be doing it on two of my systems (for Batman and Alice).
Quote:
Originally Posted by JackCY View Post

Most games in 2016 and '17 are still DX11, and if they are DX12 they are still conversions from DX11 or optimized for NV single-threaded use. The only decent AMD titles so far are COD BO3 and Doom, that's it. Hitman is a botched-up mess, and Deus Ex is supposedly built on the same engine... not looking good so far.

Quite literally every AAA game being released this year or next will support DX12 or Vulkan. Most DX12 games perform very well... for AMD. Also, Hitman works great for me, though it does crash in one map if I spend an hour killing absolutely everyone. DX12 had some zooming bugs, but that was patched. It's not a matter of these games optimizing for AMD or multi-threading; it's just a matter of tagging their queues (easier said than done at times). Time Spy sees comparatively poor scaling on AMD because they did nearly the minimum for DX12 support that they could, because they think that will be representative... I don't think it will be; games simply have more compute going on than Time Spy.
Quote:
Originally Posted by JackCY View Post

AMD is way too slow in R&D compared to NV; NV pretty much refreshes their whole lineup from low end through mainstream to high end and beyond, whereas AMD only does 33-50% of that - low end to mainstream, or high end - and pretty much nothing above high end.

AMD is doing very well for a company on the verge of bankruptcy. They have to choose their fights wisely.
Quote:
Originally Posted by JackCY View Post

So when AMD's high-end Vega comes, NV will launch a completely new lineup

Polaris is woefully memory bottlenecked; Vega will not be. Vega is also IP 9, so it's a whole new ball game coming up. nVidia was king of DX11, but that says nothing for DX12. They have a lot of work to do in order to fix their hardware and drivers for parallel execution. They will do it, no doubt, and they will have the benefit of hindsight, but AMD will be on their fifth generation of fully asynchronous hardware by then.

Did you know that Polaris has half the async hardware of Fiji? It scales just as well, though. Vega is a large chip, like Fiji, but with all of the updates and no memory bottlenecks. If the rumors about its size are true, nVidia had better bring their A game (which I'm sure they will).
Quote:
Originally Posted by JackCY View Post

1060s sell like hot cakes, 1070s even more

Yes, they are selling well, but that was a given; AMD didn't release anything to compete with the 1070 or 1080... and nVidia cards would still be outselling them even if AMD's cards were better in every way. The RX 480 has supply issues and is only a 232mm² chip. nVidia is using a more mature process and can afford to buy larger batches. AMD has every disadvantage in this fight, but that's not a valid negative against the product itself.
post #743 of 4362
Europe.
In 3 years, 1060/480-class cards will be irrelevant even if the 480 finally manages to be faster in the titles of that time. Yes, P10 has more transistors and more performance hidden in it that isn't always fully used at the moment, similar to previous GCN cards, but how much more can be milked out of GCN when there has already been that rise over the lifetime of previous GCN cards? Not much more, I think. Yes, an application coded in DX12/Vulkan for AMD's architecture is fast on AMD and slower on NV, but the same applies in reverse; NV has better support for developers, and many have coded for NV's architecture for a long time. All in all, the support from AMD for developers, HW encoding, rendering, etc. is rather poor compared to NV due to a lack of invested resources.
Quote:
AMD is doing very well for a company on the verge of bankruptcy. They have to choose their fights wisely.
Yes.

NV hardware already had "async and such similar" hardware before, but they dropped/stripped it to make more efficient GPUs, which they really do push hard while still keeping the performance lead.

AMD often uses slower VRAM chips; only recently did they step up to buy the 8GHz GDDR5, and even then they limit OC to 2250MHz, sometimes even 2050MHz only. The same VRAM chips on NV are nothing new; they've been in use for a long time, and on Pascal they go above 2250MHz (x4).

Yes, many of the P10 chips are overvolted by default, and the annoying idle and multi-monitor idle state is still overvolted with the VRAM at max clock. They never fix/change that.
And what is up with Wattman? From what I've read so far, it keeps crashing for people all the time.

As far as volume goes, last I read, AMD was shipping 100k chips to partners, which I found ridiculously low considering a single shop had thousands of unfulfilled orders waiting for stock. Europe gets barely any shipments whatsoever; half the retailers don't reply to 480/1060 inquiries, and the rest that do say they don't know, their supplier doesn't know, and when asking the official distributor for the country of a specific AMD board partner... they don't know either. I guess all they do is ship from Asia-located factories to Asia-located miners. If any cards pop up in the EU, it's 10 at a time, and they sell out within an hour or a few at ridiculous cost.

That stupid mining is killing AMD and its AIBs. Cards do not make it to gamers; they don't gain market share in gaming, but in mining instead. The cards are bought, run 24/7 for as long as they last, and returned for a refund when they die or it's no longer profitable to run them. Partners then get tens to hundreds of cards back in a single RMA and take a loss.
NV cards can mine too, mostly on Linux, maybe even better/more profitably than AMD, but it's not the popular choice...

P10 vs. GP106: transistor-count/area-wise, about equal power consumption. But performance-per-watt-wise, P10 has no chance without optimization for newer GCN, like Doom's Vulkan path.

---

So, in your opinion, what is the best custom-cooled 480 8GB worth buying so far? Especially one that costs less than $259, for which a well-cooled custom 1060 can be bought. I have not seen one yet.
post #744 of 4362
What's the thickness of the thermal pads on the VRM and RAM on the reference cooler? I have to put mine back on before I send it in to Sapphire. The CompuBench OpenCL T-Rex test is giving me weird aliasing artifacts, especially in the foliage, that YouTube videos of the same test don't show. Between that and the other two defects, it just seems like I might have to RMA. I have time. I'm waiting to grab a reference PCB that will work with my block. I have auto-notify set for all the reference cards. I'm hoping not to pay too much more than my original card... but coin miners are killing me. I can't be without a decent video card for the RMA time frame. If Sapphire will even RMA the card.

@JackCY - There isn't a good cooling solution for a 480 that costs less than $259 at the moment, due to the stupid miners. I purchased my reference card day 1 with a rebate deal from AMEX that got me the 8GB card for $199. The second card I'm trying to buy will cost me between $269 and $300 for a reference PCB. http://www.newegg.com/Product/Product.aspx?Item=N82E16814150773&cm_re=rx_480-_-14-150-773-_-Product looks like a good choice for $300. Oops, I spoke too soon. It's out of stock now too.

I think drivers will mature in less than 3 years. That's a straw man argument anyway; no one knows what will be relevant to modern games in 3 years, we can only guess at the moment. 2x RX 480 should still be relevant, since the RX 480 and Project Scorpio have about the same performance. As long as a system is around 2x the performance of the latest console, it should play everything at acceptable levels. In theory anyway, and that depends on what you find acceptable. If you want 90fps ultra 4K, you're right, it's probably not going to do that.

Nvidia seems to abandon microarchitecture driver optimizations after about a year and a half, so that would be a better time limit for the RX 480 to beat the 1060 from a driver perspective.
Quote:
NV hardware already had "async and such similar" hardware before, but they dropped/stripped it to make more efficient GPUs, which they really do push hard while still keeping the performance lead.
I'm not sure what hardware you're talking about. They removed double-precision hardware in Maxwell, but I don't remember them ever having any async or out-of-order type CUDA schedulers or the like that were subsequently removed. I could very well be wrong. They talked about out-of-order schedulers many times at events, but I don't remember the specifics. Please enlighten me.

I'm going to take your advice about the RMA on my RX 480, probably. It depends on what I have time for in the near future. My father will finally be home from the hospital on Monday, and I may finally be able to find time to take my loop apart and rebuild it. Maybe I'll do a few suicide bench runs before I do.
post #745 of 4362
Fermi had all kinds of advanced scheduling hardware on die, but games didn't use it and DX12 was still a mere idea at that time.
post #746 of 4362
The GigaThread Engine? Interesting. That's been a while. Was it fine-grained out-of-order? I didn't realize.

The last nVidia card I had was the EVGA 8800 GTS. Too bad it was limited by the VRAM size and the lack of H.264 acceleration. I should have gotten the GTX; it would have lasted a little longer at 1080p, but I would have switched at the start of the mining craze anyway. That's kinda the way I feel about the RX 480. So I get JackCY's point. We're at the start of the transition to 4K. The cards from the beginning of the 1080p era weren't relevant for as long as the mid-era cards. In a few years there should be some really nice mid-range 4K cards available that will be amazing to people still on 1080p or 1440p.
post #747 of 4362
Quote:
Originally Posted by JackCY View Post

In 3 years, 1060/480-class cards will be irrelevant even if the 480 finally manages to be faster in the titles of that time.

Just like the 7970 is irrelevant today? Oh, wait... its level of performance has only just now been relegated to the low end of discrete graphics... and it's still capable of pretty decent 1080p gaming, albeit with compromises.

GTX 1060/RX 480 performance levels will be useful in three years. RX 480 more so, without doubt.
Quote:
Originally Posted by JackCY View Post

how much more can be milked out of GCN when there has already been that rise over the lifetime of previous GCN cards?

Polaris has 15% higher per-SP IPC that is not yet being exploited. AMD has already shown that they know how to use drivers to reduce memory bandwidth requirements - the thing holding back Polaris immensely - so it would not be out of the realm of expectation to see 15% or more in improvements over the next two years, which is not terribly different from what was seen with Hawaii. In fact, Hawaii saw some specific game improvements in the area of 30% (BF4, especially). Polaris has already seen performance improvements in just the few weeks since its release.
Quote:
Originally Posted by JackCY View Post

All in all, the support from AMD for developers, HW encoding, rendering, etc. is rather poor compared to NV due to a lack of invested resources.

AMD encoding and rendering is superior to nVidia's, in my experience, though there are specific programs that did not use OpenCL or had poor implementations. I use a whole host of applications for my video needs and every single one of them works better with AMD cards, and one even has AMD/ATi-branded add-ons (Cyberlink PowerDirector).

The real problem with nVidia is that you have to buy the high end hardware to get the compute power (if you get it even then). AMD offers it across its lineup.
Quote:
Originally Posted by JackCY View Post

NV hardware already had "async and such similar" hardware before, but they dropped/stripped it to make more efficient GPUs, which they really do push hard while still keeping the performance lead.

I assume you're talking about the thread block scheduler? That was still incapable of async compute. nVidia's context switches were too costly, and numerous other design issues prevented running heavy graphical and compute workloads at the same time, which is what DX12 async compute requires.
Quote:
Originally Posted by JackCY View Post

AMD often uses slower VRAM chips; only recently did they step up to buy the 8GHz GDDR5, and even then they limit OC to 2250MHz, sometimes even 2050MHz only. The same VRAM chips on NV are nothing new; they've been in use for a long time, and on Pascal they go above 2250MHz (x4).

Yes, but wider buses and more net bandwidth. nVidia is still a step ahead on memory compression, but AMD is making quick headway. nVidia is also ahead on efficiency, but AMD is catching up... and nVidia is losing IPC for a little extra frequency, whereas AMD gained both (then squandered the IPC gain with a too-narrow memory bus).
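For the bus-width point, the raw math (using nominal reference specs; actual memory clocks vary by card):

Code:
# Peak GDDR5 bandwidth = (bus width in bits / 8) * effective data rate in Gbps -> GB/s
def bandwidth_gb_s(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 8.0))  # RX 480:   256-bit @ 8 Gbps -> 256 GB/s
print(bandwidth_gb_s(192, 8.0))  # GTX 1060: 192-bit @ 8 Gbps -> 192 GB/s
print(bandwidth_gb_s(256, 8.7))  # 2175MHz VRAM (8.7 Gbps effective) -> ~278 GB/s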
Quote:
Originally Posted by JackCY View Post

Yes, many of the P10 chips are overvolted by default, and the annoying idle and multi-monitor idle state is still overvolted with the VRAM at max clock. They never fix/change that.

The nVidia cards I have are the same for multi-monitor idle. Still, this new generation of memory doesn't seem to use much power unless it's actively in use. Once the updated Afterburner is out and I can underclock the memory, I will do some thorough testing on the matter. From the looks of Wattman, AMD expects to start stepping the memory frequency just like the GPU frequency. So long as we can customize this even with multiple monitors, this problem will vanish. Usually, though, I just set 2D clocks with Afterburner and use my keyboard's macro keys for four different clock and power configurations.
Quote:
Originally Posted by JackCY View Post

And what is up with Wattman? From what I've read so far, it keeps crashing for people all the time.

Yes it does. New tech, a fancy interface, lots of new features, and a blindingly fast release cycle... you're going to see those issues. It's also important to remember that most of these drivers are actually beta drivers - but people love to tinker.
Quote:
Originally Posted by JackCY View Post

As far as volume goes, last I read, AMD was shipping 100k chips to partners, which I found ridiculously low considering a single shop had thousands of unfulfilled orders waiting for stock.

100k chips is very respectable volume (but over what time period?) for a new node with the largest chip ever made on that node (yes, Polaris 10 is the biggest chip made on 14nm LPP so far). After the dies are created, they have to be packaged, then sent to the partners, who then have to build cards with them. nVidia pulled a fast one on everybody and released the GTX 1060 long before it was supposed to be released. This schedule caused AMD to sit back and make more changes to Polaris 10 prior to release. The RX 480's revision is C7. The RX 460's is CF. The GTX 1080's is A1.

It's positively amazing AMD got Polaris out when they did. nVidia's feat, on the other hand, is actually more of an embarrassment. They've been talking up Pascal forever... and it's just a higher clocking version of Maxwell 2 with a few other minor mods. Pascal's entire benefit, beyond clock speeds, is derived from 16nm FF.
Quote:
Originally Posted by JackCY View Post

Europe gets barely any shipments whatsoever; half the retailers don't reply to 480/1060 inquiries, and the rest that do say they don't know, their supplier doesn't know, and when asking the official distributor for the country of a specific AMD board partner... they don't know either. I guess all they do is ship from Asia-located factories to Asia-located miners. If any cards pop up in the EU, it's 10 at a time, and they sell out within an hour or a few at ridiculous cost.

Europe has always had problems with computer hardware pricing. I spent a couple of months there and was appalled by the pricing - and the availability. This is nothing new. High energy prices in Europe can also make 40 or 50W of extra usage mean more than it does here... where my electricity runs me $100/month for my 1700 sq ft home with central heat and air.
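To put rough numbers on that (hours and rates below are assumptions, not anyone's actual bill):

Code:
# Hypothetical yearly cost of an extra 50W during gaming sessions.
extra_watts = 50
gaming_hours_per_year = 3 * 365  # assumed 3 hours/day
kwh = extra_watts * gaming_hours_per_year / 1000
for label, price_per_kwh in [("EU at ~0.30/kWh", 0.30), ("US at ~0.12/kWh", 0.12)]:
    print(f"{label}: about {kwh * price_per_kwh:.0f} per year")
# Roughly 16 vs. 7 in local currency per year - real money, but hardly build-defining.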

The RX 480, however, comes into stock frequently. Prices are inflated right now because people are more than willing to pay them. I usually like to use Amazon's sales to get an idea of what cards are selling most, but they haven't been stocking these cards well at all compared to NewEgg. I bought mine from Jet.com.

https://www.nowinstock.net/computers/videocards/amd/rx480/
Quote:
Originally Posted by JackCY View Post

That stupid mining is killing AMD and its AIBs. Cards do not make it to gamers; they don't gain market share in gaming, but in mining instead. The cards are bought, run 24/7 for as long as they last, and returned for a refund when they die or it's no longer profitable to run them. Partners then get tens to hundreds of cards back in a single RMA and take a loss.

Yes, the mining can have a real negative impact... however, the failure rates aren't really any higher; the cards have all manner of built-in protections. They might fail faster, but the end rate won't be much higher - those cards were likely defective from the start and would have needed an RMA anyway.
Quote:
Originally Posted by JackCY View Post

NV cards can mine too, mostly on Linux, maybe even better/more profitably than AMD, but it's not the popular choice...

They can mine, sure, but not well. AMD performance can be several times higher, which is why they are preferred. Miners are looking for the best return on their investment; there's zero brand loyalty going on there - just math.
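The "just math" part looks roughly like this (the hashrates, payouts, and prices below are placeholders, not measured figures):

Code:
# Toy ranking of cards by daily mining profit - illustrative numbers only.
cards = [
    # (name, hashrate MH/s, board power W, card price $) - all assumed values
    ("RX 480",   27.0, 150, 260),
    ("GTX 1060", 19.0, 120, 260),
]
usd_per_mh_per_day = 0.08  # assumed payout per MH/s per day
usd_per_kwh = 0.12         # assumed electricity price
for name, mh, watts, price in cards:
    daily = mh * usd_per_mh_per_day - watts * 24 / 1000 * usd_per_kwh
    print(f"{name}: ${daily:.2f}/day net, payback in ~{price / daily:.0f} days")
# Whichever card pays itself off fastest wins - brand never enters the equation.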
Quote:
Originally Posted by JackCY View Post

P10 vs. GP106: transistor-count/area-wise, about equal power consumption. But performance-per-watt-wise, P10 has no chance without optimization for newer GCN, like Doom's Vulkan path.

Power efficiency is a benefit of less capable hardware like that in the GTX 1060, no doubt.
Quote:
Originally Posted by JackCY View Post

So, in your opinion, what is the best custom-cooled 480 8GB worth buying so far? Especially one that costs less than $259, for which a well-cooled custom 1060 can be bought. I have not seen one yet.

That's tough. I've had the Sapphire Nitro+ RX 470 and the XFX RX 480 GTR in hand and did a lot of testing... the Nitro+ was noticeably quieter, but slightly warmer... even after I undervolted. The total heat capacity of the XFX heatsink should be notably higher - more so than the GPU temperature can tell. The XFX heatsink has reduced airflow directly near the GPU compared to the Sapphire due to the fin orientation (the Nitro+ has long fins that direct the heat out of the case and to the back of the card). In addition, the Nitro+ redirects some of the air from the fans around the back of the card under the backplate, which helps with temperatures quite a bit (I set a thin 120mm fan to blow on the back of the XFX and it dropped 3C under Furmark).

However, it greatly depends on your target temperature. If you want 60C temps on the GPU, the Sapphire is the way to go. I could crank the fans up and keep the GPU at 65C during FurMark at 1250MHz with a +50% power limit... and the noise was completely tolerable. During gaming, it would be even quieter. The XFX cannot pull off that feat without being obnoxious... though it can push the temperatures all the way down to 60C, with 65C on the VRMs.

I think both companies have had bad TIM applications for some people, but I seem to have lucked out both times.
post #748 of 4362
Quote:
Originally Posted by looncraz View Post

Warning: Spoiler! (Click to show)

Dude, you are a serious gamer/tester. Which brand of graphics card would you suggest for us now for best durability?
post #749 of 4362
I just read this "there is a fan bug on AIB cards" and I am interested in what it is... I bought an RX 470 Nitro+ OC (1260MHz boost clock).

I am raiding tonight (playing WoW) and the card is running at 53C with the fan at 15%, around 700rpm; I can't hear it.

It shows the card at 300MHz though... strange. Nah, that's from alt-tabbing, I guess.
Edited by slavovid - 8/21/16 at 12:28pm
post #750 of 4362
Quote:
Originally Posted by looncraz View Post

Warning: Spoiler! (Click to show)
Tahiti 280X: had it, got rid of it. It was slow for at least a year at 1920x1200 on high settings, not even very high or max. Usable, but way past its time, since it's a very old GCN 1 chip that isn't even supported anymore, as the last GCN generation supported is 2 (= 1.1).
Where did AMD show some magic 15% that is not being used ATM?
AMD lacks encoding support in many apps, and only recently may have updated the SDK so that devs can use the new Polaris features at all. OBS is still a hack job when it comes to using AMD encoding, and hardly any others really support it or are worth considering.
3D renderers often have better CUDA implementations, since AMD's OpenCL compiler still seems to be a pile of poo after all these years, and devs don't want to deal with its issues. Sure, one can pay for a good renderer or just pay a render farm, but for lower-budget (often AMD) users there isn't much at all.

The pricing isn't awful in the EU when they actually ship enough units to the distributors and retailers. When they don't, the prices go up and up to the point where a custom 480 costs almost as much as a 1070...
The demand may be high, but there is almost no supply at all, and that's been the problem with Polaris since launch.

Almost no cards have VRAM voltage unlocked. And changing all the various 2D states hasn't been possible on any AMD card so far, not without some hacky override.
Quote:
Originally Posted by slavovid View Post

I just read this "there is a fan bug on AIB cards" and I am interested in what it is... I bought an RX 470 Nitro+ OC (1260MHz boost clock).

I am raiding tonight (playing WoW) and the card is running at 53C with the fan at 15%, around 700rpm; I can't hear it.

It shows the card at 300MHz though... strange. Nah, that's from alt-tabbing, I guess.
Some drivers messed up the temperature target, which can be seen in all the early reviews.