
[Official] Polaris Owners Club - Page 76

post #751 of 4341
Quote:
Originally Posted by espn View Post

Dude, you are a serious gamer/tester. Which brand of graphics card would you suggest for best durability?

Yeah, I tinker a lot. It's really funny, because I don't do much gaming - I just want to enjoy it while I do it. I am a programmer by trade, and I sometimes need to clear my mind - I find jumping into BF4 and blowing people up (or getting blown up) is a great way of doing that. And I love the Hitman series.

In any event, there's no clear answer. I've had bad cards from every brand and great cards from every brand. If you're looking to buy right now, I strongly suggest this one... for obvious reasons. That card has been extremely good to me - easily the best I've owned.

So, that's a Gigabyte card - I've only had two of those, and while both have been decent, that one is just great. I've had two ASUS cards (a 5850 and an R9 290), both of which had to be returned due to failures, but that's anecdotal at best. Sapphire, though, has made most of my favorite cards. My 7870 XT, which I use in my HTPC, is a Sapphire. It had some coil whine when it was new, but I managed to get rid of it.

In fact, speaking of coil whine, Sapphire is probably the best choice these days, with their Black Diamond Chokes. Their fans are phenomenal as well, but they sometimes skimp on the meat for the heatsink.

In the end, it really depends on what exact model you're considering.
post #752 of 4341
Quote:
Originally Posted by slavovid View Post

I just read that "there is a fan bug on AIB cards" and I'm interested in what it is... I bought an RX 470 Nitro+ OC (1260MHz boost clock).

I am raiding tonight (playing WoW) and the card is running at 53C with the fan at 15% - around 700RPM - and I can't hear it.

It shows the card at 300MHz, though... strange. Nah, that's just from alt-tabbing, I guess.

If it's what I'm thinking of, it's where the fan speed can creep upward if your temperature dives and then rises again. I've been meaning to chart what happens, but it's pretty simple:

Temperature target: 70C

Scenario:
GPU starts to warm up, and passes 70C in-game, fans spin up to 45%.
GPU cools to 65C during a cut-scene; the fan slowly begins to slow down, and at 42% the game resumes... then the cycle repeats:
GPU heats to 70C, fan spins to 55% (quickly)
GPU cools to 64C, fan slows to 52%.
GPU heats to 72C, fan spins to 57%, cooling GPU to 70C.
GPU cools to 64C, fan slows to 54%.
GPU heats to 68C, fan spins to 58%.
GPU cools to 55C, fan slows to 55%.
GPU heats to 65C, fan spins to 60%.

And that cycle continues, with the numbers differing based on how long each temperature was maintained. The easiest way to replicate it is to run the Furmark stress test in a window with a +50% power limit until the fan seems to have leveled out, then minimize Furmark for five seconds or so, then bring it back up. Rinse and repeat.
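For anyone curious, here's a minimal sketch of how that creep can happen. This is not AMD's actual fan control logic - it just assumes the controller ramps up faster than it ramps down, which matches the behavior I'm describing:

Code:
# Minimal sketch (not AMD's actual fan control code) of how asymmetric
# ramp rates make the fan speed creep upward across heat/cool cycles.
TARGET_C  = 70
RAMP_UP   = 1.0   # % fan speed added per step while over target
RAMP_DOWN = 0.5   # % fan speed removed per step while under target

def step(fan_pct, temp_c):
    """One control step: nudge the fan toward the temperature target."""
    if temp_c > TARGET_C:
        fan_pct += RAMP_UP
    elif temp_c < TARGET_C:
        fan_pct -= RAMP_DOWN
    return max(20.0, min(100.0, fan_pct))

fan = 45.0
for cycle in range(5):
    for _ in range(10):   # ten steps of hot gameplay at 72C
        fan = step(fan, 72)
    for _ in range(10):   # ten steps of a cool cut-scene at 64C
        fan = step(fan, 64)
    print(f"after cycle {cycle + 1}: fan at {fan:.0f}%")
# The fan speed climbs every cycle even though the temperatures repeat
# exactly - that's the creep.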

The situation can be easily controlled by setting a reasonably small range for the fan speeds and a proper temperature target (75C or 80C is plenty good enough, unless you have a heat-sensitive overclock). Since you know the fans are going to end up at, say, 1800RPM, set your baseline around 1700RPM and your target around 2300RPM. That's what I do, and it works just fine.
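Put another way, all you're doing is narrowing the band the fan is allowed to wander in. A rough sketch of the idea (generic, not WattMan's actual curve logic - the RPM and temperature numbers are just the ones from my setup above):

Code:
# Generic clamped fan curve - a sketch of the idea, not WattMan's real algorithm.
MIN_RPM  = 1700   # just under where the fan settles anyway
MAX_RPM  = 2300   # a little headroom for worst-case loads
TARGET_C = 75     # 75-80C is plenty unless your overclock is heat-sensitive
IDLE_C   = 45     # below this, stay at the floor

def fan_rpm(temp_c):
    """Map GPU temperature to an RPM inside the chosen band."""
    if temp_c <= IDLE_C:
        return MIN_RPM
    if temp_c >= TARGET_C:
        return MAX_RPM
    span = (temp_c - IDLE_C) / (TARGET_C - IDLE_C)
    return round(MIN_RPM + span * (MAX_RPM - MIN_RPM))

for t in (40, 55, 65, 75, 85):
    print(t, "C ->", fan_rpm(t), "RPM")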

Sapphire and XFX have bad defaults, but they are clearly concerned with a much broader set of usage environments than any of us. My house is nice and cold and my case very well ventilated, so I can get away with 2200RPM at 75C... while running Furmark, which is quite decent.
post #753 of 4341
Quote:
Originally Posted by JackCY View Post

Tahiti 280X - I had it and got rid of it. It was slow for at least a year at 1920x1200 on high settings, not even very high or max. Usable, but way past its time, since it's a very old GCN 1 chip that isn't even being supported anymore - the last GCN generation still supported is GCN 2 (i.e. 1.1).

Not sure what you're talking about; the R9 280X, which is the 7970, is still supported with the latest Crimson drivers. My 7870 XT's driver is 16.7.3... All GCN GPUs are still in the primary support branch; pre-GCN GPUs are in the legacy branch.
Quote:
Originally Posted by JackCY View Post

Where did AMD show some magic 15% that is not being used ATM?



If you do the math, however, the RX 470 and RX 480 are running exactly how you would expect a down-scaled but overclocked R9 290/X to run. So that 15% per-CU improvement is nowhere to be found. The extra caches soften the blow of the reduced bandwidth (as does the memory compression, of course), but that can't fully overcome the penalty. Near-linear scaling with memory bandwidth tells us exactly where the bottleneck lies in extracting that IPC. AMD has historically just thrown a wider bus at the issue, which means they haven't spent an enormous amount of time reducing the bandwidth requirements per CU. That has now changed.
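To put rough numbers on that, using the commonly listed reference specs (so treat it as a back-of-the-envelope sketch, not a measurement):

Code:
# Back-of-the-envelope using reference specs:
# R9 290: 40 CUs, 947MHz boost, 512-bit GDDR5 @ 5Gbps -> 320GB/s
# RX 480: 36 CUs, 1266MHz boost, 256-bit GDDR5 @ 8Gbps -> 256GB/s (8GB card)
r9_290 = {"cus": 40, "clock": 947,  "bw": 320}
rx_480 = {"cus": 36, "clock": 1266, "bw": 256}

def shader_throughput(card):
    # Relative ALU throughput ~ CUs x clock, ignoring any per-CU IPC gains
    return card["cus"] * card["clock"]

ratio_alu = shader_throughput(rx_480) / shader_throughput(r9_290)
ratio_bw  = rx_480["bw"] / r9_290["bw"]
print(f"RX 480 vs R9 290 shader throughput: {ratio_alu:.2f}x")  # ~1.20x
print(f"RX 480 vs R9 290 memory bandwidth:  {ratio_bw:.2f}x")   # ~0.80x
print(f"bandwidth per unit of shader work:  {ratio_bw / ratio_alu:.2f}x")
# ~0.67x - each unit of shader work gets roughly a third less bandwidth,
# which is where that "missing" 15% per-CU improvement goes.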
Quote:
Originally Posted by JackCY View Post

AMD lacks encoding support in many apps and only recently may have updated the SDK so that devs can use the new Polaris features at all. OBS is still a hack job when it comes to using AMD encoding and none other really support it or are worth considering to use.

You mean those apps lack support for proper vendor-neutral APIs for encoding. AMD/ATi has generally been well ahead on the encode front, hardware-wise, but didn't get heavily involved on the software side. It has been said, but I'll say it again: nVidia is a software company that just happens to make GPUs. AMD is a hardware company that is forced to make software to work with their products.
Quote:
Originally Posted by JackCY View Post

3D renderers often have better CUDA implementations, since AMD's OpenCL compiler still seems to be a pile of poo after all these years, and devs don't want to deal with its issues. Sure, one can pay for a good renderer or just pay a farm, but for lower-budget (often AMD) users there isn't much at all.

Adobe has finally learned their lesson with poor OpenCL support, but I haven't tested it as I hate Adobe products with a passion.
Quote:
Originally Posted by JackCY View Post

The pricing isn't awful in EU when they actually ship enough units to the distributors and retailers. When they don't the prices go up and up to the point where a custom 480 costs almost as much as a 1070...
The demand may be high but there is almost no supply at all and that's the problem with Polaris since launch.

It is hard to judge how good or bad the supply is at this point; it's only clear that AMD can't keep up with demand, which has caused prices to increase - and even that hasn't hampered demand enough to slow sales...
Quote:
Originally Posted by JackCY View Post

Almost no cards have the VRAM voltage unlocked, and changing all the various 2D states has not been possible on any AMD card so far - not without some hack override.
Some drivers messed up the temperature target, which you can see in all the early reviews.

Yeah, VRAM voltage is pretty locked down, it seems. I think the voltage listed for the memory in Crimson actually sets a minimum GPU voltage - if you set it higher, the GPU voltage increases... In any event, the memory modules use 1.5V or something, and the Crimson entry is 1000mV, so who knows?

My GTX 560 had horrible problems with its clock frequencies when driving dual monitors, and there are reports galore of people with the GTX 970 having the same sort of problem (stuck clocks). AMD doesn't behave that way; they simply keep the memory clock at full tilt. It just needs an alternative clock for multi-monitor setups when the GPU isn't being strained - 625MHz is the figure I've settled on for dual monitors, though 750MHz may be needed for triple monitors. It's sad that AMD hasn't fixed this directly, but Afterburner and other programs can take care of it for you pretty easily... and tweaking is fun.

Also, the driver isn't messing up the temperature target, AFAIK - it's just AIBs selecting absurdly low temperature targets so they place lower in the charts...
post #754 of 4341
looncraz - Have you seen any evidence of the primitive discard engine existing yet? Other than the card performing like a 390-390X even though those cards have twice the ROPs, I can't really see any evidence of it. Performance should outstrip the 390X in Heaven if high tessellation and 8x MSAA are really where it shines (according to AMD's slide). There was talk of it discarding polygons with an area of zero. There should be tons of those in Heaven at the Extreme preset... heck, any poorly optimized game should have lots of those in any frame.

If it really is doing more than z-buffer occlusion culling, shouldn't performance be more varied, rather than seeming to fall in line with other GCN parts? I'm wondering if they're having trouble getting the function to not impact visual fidelity and it's taking time to enable in the driver. Or am I just crazy and all of this has already been explained? Maybe I'm remembering the initial press packet incorrectly. It just seems odd that they would list a feature that doesn't yet seem to exist. Maybe I'm just being naive about a marketing slide. I've been waiting and waiting for more info on this. Have you found anything yet? It would be like hardware doing the hard part of game optimization for you on the fly, if it ever works like they said.
post #755 of 4341
Quote:
Originally Posted by greytoad View Post

looncraz - Have you seen any evidence of the primitive discard engine existing yet? Other than the card performing like a 390-390X even though those cards have twice the ROPs, I can't really see any evidence of it. Performance should outstrip the 390X in Heaven if high tessellation and 8x MSAA are really where it shines (according to AMD's slide). There was talk of it discarding polygons with an area of zero. There should be tons of those in Heaven at the Extreme preset... heck, any poorly optimized game should have lots of those in any frame.

If it really is doing more than z-buffer occlusion culling, shouldn't performance be more varied, rather than seeming to fall in line with other GCN parts? I'm wondering if they're having trouble getting the function to not impact visual fidelity and it's taking time to enable in the driver. Or am I just crazy and all of this has already been explained? Maybe I'm remembering the initial press packet incorrectly. It just seems odd that they would list a feature that doesn't yet seem to exist. Maybe I'm just being naive about a marketing slide. I've been waiting and waiting for more info on this. Have you found anything yet? It would be like hardware doing the hard part of game optimization for you on the fly, if it ever works like they said.

I have seen some evidence of it, yes, but its impact definitely varies...

In Heaven, the per-clock score difference between the R9 290 and the RX 480 is just 25 points when tessellation is maxed (1.7%). With no tessellation, the R9 290 scores 226 points higher at the same clock speed - or about 10.2%.

Most likely AMD can still tune the discard further, but that's already nearly 10% better overall FPS attributable to the primitive discard.
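Working out the implied figure from those scores (my arithmetic, rounded, using only the numbers above):

Code:
# Per-clock Heaven gaps quoted above (R9 290 lead over the RX 480):
lead_no_tess  = 0.102   # 226 points with tessellation off
lead_max_tess = 0.017   # 25 points with tessellation maxed

# How much of that lead vanishes once heavy tessellation brings the
# primitive discard into play:
recovered = (1 + lead_no_tess) / (1 + lead_max_tess) - 1
print(f"{recovered:.1%}")   # ~8.4%, i.e. the "nearly 10%" referenced above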

In theory, the largest difference should show up in Crysis 2, which I didn't have... but I just bought it for $5, so I'll test it and compare against known R9 290 results (I have already packed my R9 290 up, sadly).
post #756 of 4341
Quote:
Originally Posted by looncraz View Post

Not sure what you're talking about; the R9 280X, which is the 7970, is still supported with the latest Crimson drivers. My 7870 XT's driver is 16.7.3... All GCN GPUs are still in the primary support branch; pre-GCN GPUs are in the legacy branch.

If you do the math, however, the RX 470 and RX 480 are running exactly how you would expect a down-scaled but overclocked R9 290/X to run. So that 15% per-CU improvement is nowhere to be found. The extra caches soften the blow of the reduced bandwidth (as does the memory compression, of course), but that can't fully overcome the penalty. Near-linear scaling with memory bandwidth tells us exactly where the bottleneck lies in extracting that IPC. AMD has historically just thrown a wider bus at the issue, which means they haven't spent an enormous amount of time reducing the bandwidth requirements per CU. That has now changed.
You mean those apps lack support for proper vendor-neutral APIs for encoding. AMD/ATi has generally been well ahead on the encode front, hardware-wise, but didn't get heavily involved on the software side. It has been said, but I'll say it again: nVidia is a software company that just happens to make GPUs. AMD is a hardware company that is forced to make software to work with their products.
Adobe has finally learned their lesson with poor OpenCL support, but I haven't tested it as I hate Adobe products with a passion.
It is hard to judge how good or bad the supply is at this point; it's only clear that AMD can't keep up with demand, which has caused prices to increase - and even that hasn't hampered demand enough to slow sales...
Yeah, VRAM voltage is pretty locked down, it seems. I think the voltage listed for the memory in Crimson actually sets a minimum GPU voltage - if you set it higher, the GPU voltage increases... In any event, the memory modules use 1.5V or something, and the Crimson entry is 1000mV, so who knows?

My GTX 560 had horrible problems with its clock frequencies when driving dual monitors, and there are reports galore of people with the GTX 970 having the same sort of problem (stuck clocks). AMD doesn't behave that way; they simply keep the memory clock at full tilt. It just needs an alternative clock for multi-monitor setups when the GPU isn't being strained - 625MHz is the figure I've settled on for dual monitors, though 750MHz may be needed for triple monitors. It's sad that AMD hasn't fixed this directly, but Afterburner and other programs can take care of it for you pretty easily... and tweaking is fun.

Also, the driver isn't messing up the temperature target, AFAIK - it's just AIBs selecting absurdly low temperature targets so they place lower in the charts...
Check the OCN AMD Q&A - they talk about GCN 1 support ending overall.
Sure, drivers exist, but that's not the only thing that needs to support them. And in my experience the new drivers were a mess on the 280X, though the card was probably faulty in some way.

Some +15% shown in a presentation is more about how they improved something compared to the previous generation - not a hidden 15% of performance.

I have no problem using Adobe products even with an iGPU. Maybe video editing and such uses the GPU more, but photo editing? Nah.

The WattMan memory voltage is probably the GPU memory controller voltage or something - it's probably not the VRAM voltage. Yes, it should be around 1.5V for the VRAM.

Dunno, greytoad, I haven't seen any more info on it anywhere.
post #757 of 4341
I have Crysis 2, but I don't have a tool to average the FPS accurately - just RivaTuner. Would you suggest a way to bench it that would allow me to meaningfully compare it to the 290?

There's no way to know if that 10.2% is because of the primitive discard or other ROP enhancements, but that is pretty strong evidence. Thanks.
post #758 of 4341
Quote:
Originally Posted by JackCY View Post

Check the OCN AMD Q&A - they talk about GCN 1 support ending overall.

Quite literally the only thing I've found talks about pre-GCN support being ended - and that just happened in November. Most GCN cards use the same ABI, so continuing support for GCN 1.0 is basically free. They probably aren't going to bother making many more performance improvements, but GCN 1.0 cards have improved some 30% since their release - you can't really ask for much more than that.
Quote:
Originally Posted by JackCY View Post

Some +15% shown in a presentation is more about how they improved something compared to the previous generation - not a hidden 15% of performance.

Perhaps you aren't understanding - that 15% improvement is nowhere to be found. That's because other factors are preventing it from being realized - factors that will almost certainly be addressed through driver updates. Granted, those gains will often be limited to a few games at a time, as AMD will most likely need to profile each game to find out what is holding back the performance. If you increase the memory bandwidth to more closely match the R9 290/X's, the RX 480 will trounce them at the same clock speed. At the moment, we are seeing the RX 480 lose, per clock, against the R9 290 - sometimes by well more than the ~11% you would expect if the CUs were identical.
Quote:
Originally Posted by JackCY View Post

I have no problem using Adobe products even with an iGPU. Maybe video editing and such uses the GPU more, but photo editing? Nah.

They work fine with OpenCL and AMD cards - I just hate how they are designed. Simple effects in Premiere require two or three other programs to accomplish. By the time you've made a transition into your next scene with Premiere, I can have the whole video done with PowerDirector or most other editors. Of course, Premiere has its place - just not in my workflow.
Quote:
Originally Posted by JackCY View Post

The WattMan memory voltage is probably the GPU memory controller voltage or something - it's probably not the VRAM voltage. Yes, it should be around 1.5V for the VRAM.

No doubt. Considering how well this card scales with memory clocks, I can't wait until someone gets around to doing a volt mod on the memory VRM, pushes a little extra voltage, and gets past the 2.25GHz Crimson limit. I'm not interested in doing it myself, but I want to know what can be accomplished. For my R9 290, going from 1250MHz to 1500MHz was worth 3~5%... totally not worth it. That could be about 20% on the RX 480.
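For a rough sense of why the same overclock means so much more here (assuming the reference memory clocks and the near-linear bandwidth scaling I mentioned earlier):

Code:
# Same ~20% memory overclock, very different payoff (rough sketch):
# R9 290 (512-bit): 1250 -> 1500MHz, yet the observed gain was only ~3-5%,
#                   because the GPU is rarely bandwidth-bound.
# RX 480 (256-bit): 2000 -> 2250MHz, and performance scales nearly linearly
#                   with bandwidth, so ~20% more clock could be worth ~20%.
def bandwidth_gbs(bus_bits, mem_mhz):
    """Effective GDDR5 bandwidth in GB/s (mem_mhz is the base memory clock)."""
    return bus_bits / 8 * mem_mhz * 4 / 1000   # GDDR5 transfers 4x per clock

print("R9 290:", bandwidth_gbs(512, 1250), "->", bandwidth_gbs(512, 1500), "GB/s")
print("RX 480:", bandwidth_gbs(256, 2000), "->", bandwidth_gbs(256, 2250), "GB/s")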
post #759 of 4341
Quote:
Originally Posted by greytoad View Post

I have Crysis 2, but I don't have a tool to average the FPS accurately - just RivaTuner. Would you suggest a way to bench it that would allow me to meaningfully compare it to the 290?

There's no way to know if that 10.2% is because of the primitive discard or other ROP enhancements, but that is pretty strong evidence. Thanks.

FRAPS is the go-to. It has its own built-in "benchmark" (it just creates a log in "C:\FRAPS\Benchmarks"). You have to create a consistent run and set up an in-game shortcut combo (I use Shift+F12 for moving the overlay and Shift+F11 to start the benchmark). When the benchmark begins, the counter will change color and then disappear. I usually do at least two runs per setting.
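If you want a proper average across runs rather than eyeballing the logs, a few lines of Python will do it. This assumes the per-second "fps" CSV files FRAPS drops in the Benchmarks folder are a one-line header followed by one FPS value per line - tweak the parsing if your logs look different:

Code:
# Average the per-second FPS logs FRAPS writes to its Benchmarks folder.
# Assumption: each "* fps.csv" file is a one-line header followed by one
# FPS value per line - adjust the parsing if yours are formatted differently.
import glob

def average_fps(path):
    with open(path) as f:
        values = [float(line.strip()) for line in f.readlines()[1:] if line.strip()]
    return sum(values) / len(values)

for log in glob.glob(r"C:\FRAPS\Benchmarks\* fps.csv"):
    print(f"{log}: {average_fps(log):.1f} FPS average")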

I have been trying to find sound, usable benchmark numbers for Crysis 2... as in, not early benchmarks.

That 10.2% is almost certainly the primitive discard; the same relationship holds true with TessMark. In fact, there's a pretty decent chance that AMD has tweaked the default tessellation profiles for the RX 480, which would hide some of the gain.
post #760 of 4341
Quote:
Originally Posted by JackCY View Post

Check the OCN AMD Q&A - they talk about GCN 1 support ending overall.
Sure, drivers exist, but that's not the only thing that needs to support them. And in my experience the new drivers were a mess on the 280X, though the card was probably faulty in some way.

The new drivers are still not great, even on the RX 480. The noted WattMan crash every third time it's accessed is a little annoying. The next WHQL drivers should be telling - I expect most of these issues to be fixed by then. I'm also, like looncraz, hopeful about additional performance gains as the driver matures.

On the GCN 1.0 support ending: I haven't read about that. You mean optimization support for the 7000 series? As of May they were still planning on adding Vulkan support for Southern Islands to their Linux driver, so I'm not sure. I'll look for what you're saying, but like looncraz I haven't seen anything. I never owned that card. I'm waiting for my 1080p TV to die before moving to 4K - I'm two years past the MTBF, so any time now. Two 480s should be OK, but not great, at 4K for a couple of years or more if I do move to 4K soon. I'd prefer to wait until next year; I just did a bathroom remodel for my dad to make it wheelchair accessible and don't have the money right now. But I'm OK with 30+ FPS at medium to high settings - I only start to get annoyed below 24 FPS.

@looncraz I haven't done any video editing or compositing in many years. I had heard that After Effects didn't have great OpenCL support and that it was hit or miss with plugins. What do you use for compositing/effects now? I've never used PowerDirector. I used to use Premiere/After Effects/Vegas and maybe some Final Cut - we're talking back in the 2007-2008 range, so I'm out of touch. I considered an Nvidia card for better After Effects/Premiere support this generation, but I don't have any projects planned. I don't think JackCY knows how quickly 200+ layers in After Effects can bring a system to its knees without more acceleration than an iGPU.