
41 - 60 of 632 Posts

·
Registered
Joined
·
1,480 Posts
Quote:
Originally Posted by Mookster View Post

It's also a double shrink, at 14nm FF. You can get an AIB 8GB R9 390 for $259 on Newegg right now.

Sure, the TDP. But die shrinks have always meant better TDP, price, and performance in the past. At least they did before we took a five-year hiatus on shrinks. It's a weak release, especially considering it's a long-overdue double shrink.
GloFo/Samsung's 14nm is just 20nm with FinFET.
 

·
Banned
Joined
·
6,565 Posts
Quote:
Originally Posted by Mookster View Post

I don't recall any issues like that. I recall the reference 290 coming with new software to throttle clocks as it approached thermal limits, but never problems with power limits like the 480 appears to have.

I'd prefer the 480, of course, because of the TDP. Nevertheless, it's not hyper-critical to call the 480 a bit of a disappointment for not outperforming the 390 by a large margin considering it's roughly the same price. Die shrinks have always brought improvements to price, performance and TDP. This time around, we're only getting an improvement to TDP. No price improvement, no performance improvement.

I'm an optimist, but that's a downgrade from what we're used to seeing from die shrinks. I'm sorry, it just is.
I have been following the issue closely. All "enthusiast"-grade AMD GPUs have shown this double-edged pattern so far. So I don't object when people have trouble adopting them: you either have to ignore all power restrictions and act as if you had unlimited cooling headroom to let Tahiti/Hawaii GPUs shine, or you stay boxed in by the power-threshold limits the whole time. No wonder they sold so badly; they were never meant to be mainstream components in the first place. Pitcairn, Tonga and Polaris, on the other hand, were.
For starters, you can easily hit 360 watts with a 390 (per Tom's review of the subject), whereas you cannot do the same with the 480: its power delivery is rated for roughly three times its overall power usage. You would literally have to overclock it past its air cooler's potential to reach the phase limits (the 1.35 V voltage setting isn't even officially available), and even that isn't the total of what the card can give.
 

·
Registered
Joined
·
9 Posts
I don't think this has been brought up yet, but this is from the FAQ on the Steam forums:
Quote:
Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run. We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon.
Hopefully we do get some new drivers from Nvidia soon. I also found on my 1080 that G-Sync is no longer changing my screen's refresh rate, so I'll just stick to OpenGL for now.
 

·
PC Gamer
Joined
·
1,163 Posts
In breaking news at 9pm: AMD sees a performance increase going from an API they suck at to an API they paid to develop, nothing more.
 

·
Premium Member
Joined
·
2,813 Posts
Quote:
Originally Posted by boot318 View Post

GloFo/Samsung's 14nm is just 20nm with FinFET.
For all intents and purposes, a double shrink. It's not as if FinFET should be negated when you're comparing the last node to the new one.

It's pointless to treat this as anything other than a double shrink, which is why everyone has conceded to calling these nodes 14nm/16nm instead of 20nm.
Quote:
Originally Posted by mtcn77 View Post

I have been following the issue a lot. All "enthusiast" grade AMD gpus have demonstrated this double-edged pattern as of yet. Thus, I don't object when people have trouble adopting the gpus because you either have to ignore all power restrictions and act as if you had unrestricted cooling potential to let Tahiti/Hawaii gpus shine through, or you would rest on your laurels being restricted by the power threshold limitations all the time. No wonder they sold so bad since they never were meant as mainstream components in the first place. Pitcairn, Tonga and Polaris were, on the other hand.
For starters, you can easily hit 360 watts with a 390(from "Tom's Review of the subject") however you cannot do the same with 480 - it has three folds more power delivery than its overall power usage. You literally have to overclock it past its air cooler's potential in order to reach phase limitation levels(1.35v voltage potential isn't even officially available) and that still isn't the total of what the card can give.
AMD prefers to pack more transistors into smaller dies than NV more often than not, so it's not surprising to see a steeper power-consumption/temperature curve at the same time. People make the mistake of attributing this to the cooler or the power delivery, but it's really just the transistor density. More power radiates out as heat instead of going to its desired purpose (computing), and that heat causes even more leakage, because that's how it affects the transistors. The cooler's job is to prevent this runaway, but the closer you pack your transistors, the less any conventional cooler can keep up. AMD sits in the part of the spectrum where fine tuning is required, which is why you feel like you're always limited by either heat or power.

The impulse is to blame inadequate cooling or inadequate power for a lack of overclocking headroom, but the reality is that AMD is tailoring their coolers and power phase to match the designed limits of their extra-dense chips.

They know what they're doing. They responded confidently that extra power does absolutely nothing to help achieve better clocks with P10 because of the steep temperature rise at higher clocks, and I'm sure they knew that would be the result of those tightly packed transistors long before they started determining how tightly they should pack them.

It creates a bit of a "meh" situation for overclockers, but it does seem to be a more intelligent way of designing chips overall.
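The feedback loop described above can be put into a toy numerical sketch (all numbers here are assumed and purely illustrative, not measured silicon data): leakage power grows with temperature, the cooler turns total power back into temperature, and the loop either settles or runs away depending on the cooler's thermal resistance.

```python
# Toy model of the leakage/temperature feedback loop.
# All constants are assumed, illustrative values -- not real silicon data.

def settle_temperature(p_dynamic, theta, t_ambient=25.0,
                       p_leak_ref=15.0, t_ref=25.0, alpha=0.02,
                       iters=200):
    """Iterate die temperature until it settles or runs away.

    p_dynamic : switching power in watts (fixed by the workload)
    theta     : cooler thermal resistance in C/W (lower = better cooler)
    alpha     : fractional leakage growth per degree C (assumed exponential)
    Returns the settled temperature, or None on runaway (>150 C).
    """
    t = t_ambient
    for _ in range(iters):
        # Leakage rises with temperature (the "heat causes more leakage" step)
        p_leak = p_leak_ref * (1 + alpha) ** (t - t_ref)
        # The cooler maps total dissipated power back into a die temperature
        t_new = t_ambient + theta * (p_dynamic + p_leak)
        if t_new > 150.0:
            return None  # thermal runaway: leakage outpaces the cooler
        if abs(t_new - t) < 1e-6:
            return t_new  # feedback converged to a stable operating point
        t = t_new
    return t

# A capable cooler converges; a weak one lets the loop run away.
print(settle_temperature(p_dynamic=120.0, theta=0.20))  # stable temperature
print(settle_temperature(p_dynamic=120.0, theta=0.45))  # None (runaway)
```

With these toy numbers the better cooler settles in a handful of iterations, while the weaker one climbs until leakage growth exceeds what the cooler can remove, which is the "runaway" the post describes.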
 

·
Consumerism 101
Joined
·
4,098 Posts
Quote:
Originally Posted by rcfc89 View Post

We need competition to bring these ridiculous prices down.
$400 Fury X?

Also, I wonder how much of this performance would have been realized with better OpenGL drivers.
 

·
GPU Enthusiast
Joined
·
2,561 Posts
Quote:
Originally Posted by Mookster View Post

It's also a double shrink, at 14nm FF. You can get an AIB 8GB R9 390 for $259 on Newegg right now.

Sure, the TDP. But die shrinks have always meant better TDP, price, and performance in the past. At least they did before we took a five-year hiatus on shrinks. It's a weak release, especially considering it's a long-overdue double shrink.
AFAIK, both TSMC's 16nm and GloFo/Samsung's 14nm are 20nm with FinFET. Neither is a true 14nm or 16nm; both are larger, so you can't really call it a double shrink. There was a tech slide in another one of the news threads comparing Intel's 14nm to TSMC's 16nm and GloFo/Samsung's 14nm, showing the size differences.
 

·
Premium Member
Joined
·
11,034 Posts
id Software has done a fantastic job with Doom running Vulkan. The massive performance boost is due to three specific features running on top of Vulkan (which itself brings perf improvements by reducing CPU driver overhead):
1. Async compute
2. Shader intrinsics
3. Frame flip optimizations.

http://radeon.com/doom-vulkan/
https://community.bethesda.net/thread/54585?start=0&tstart=0

The performance of the RX 480 is amazing at 90% of a GTX 1070. Kudos to id Software and AMD.
 

·
Banned
Joined
·
6,565 Posts
Quote:
Originally Posted by Mookster View Post

For all intents and purposes, a double shrink. It's not as if FinFET should be negated when you're comparing the last node to the new one.

It's pointless to treat this as anything other than a double shrink, which is why everyone has conceded to calling these nodes 14nm/16nm instead of 20nm.
AMD prefers to pack more transistors into smaller dies than NV more often than not, so it's not surprising to see a steeper power-consumption/temperature curve at the same time. People make the mistake of attributing this to the cooler or the power delivery, but it's really just the transistor density. More power radiates out as heat instead of going to its desired purpose (computing), and that heat causes even more leakage, because that's how it affects the transistors. The cooler's job is to prevent this runaway, but the closer you pack your transistors, the less any conventional cooler can keep up. AMD sits in the part of the spectrum where fine tuning is required, which is why you feel like you're always limited by either heat or power.

The impulse is to blame inadequate cooling or inadequate power for a lack of overclocking headroom, but the reality is that AMD is tailoring their coolers and power phase to match the designed limits of their extra-dense chips.

They know what they're doing. They responded confidently that extra power does absolutely nothing to help achieve better clocks with P10 because of the steep temperature rise at higher clocks, and I'm sure they knew that would be the result of those tightly packed transistors long before they started determining how tightly they should pack them.

It creates a bit of a "meh" situation for overclockers, but it does seem to be a more intelligent way of designing chips overall.
Being able to push the limits of the cooler is a good thing, in my opinion, since the TDP limit works against GPU clocks, so an absence of peripheral units that hold back the GPU frequency sounds like an ideal solution.
For the 390, 360 W was a hard limit, per observations by credible members, so there just wasn't enough redundancy built into that card's power delivery. Risk taking isn't my strong suit, and I know how hot the HD 4890 got during sustained loads. Once the temperature becomes critical, the cooling deficit only grows: you are removing the same heat at the same temperature gradient while the semiconductor components get more and more leaky (the Poole-Frenkel effect). Suddenly removing twice the heat becomes a necessity at twice the fan speed, and the GPU's alter ego turns up like a bad penny. Vroom, vroom!
 

·
Registered
Joined
·
781 Posts
290x/390x is 1.40x faster than the 970 and 290/390 is 1.275x faster than the 970
290x vs 780ti / 980?
They were finally able to utilize all the cores.
 

·
Registered
Joined
·
2,880 Posts
Quote:
Originally Posted by provost View Post

I think this is a good example of the performance gains that can be achieved by GPUs, a one-trick pony, compared to, for example, a CPU. However, I wouldn't be surprised if Nvidia (and/or AMD?) start to "incentivize" developers not to give away "free performance", since it inherently poses a threat to their business model of yesterday... Lol
Well, it's not like this has happened before... oh wait.
Quote:
We have been following a brewing controversy over the PC version of Assassin's Creed and its support for AMD Radeon graphics cards with DirectX 10.1 for some time now. The folks at Rage3D first broke this story by noting some major performance gains in the game on a Radeon HD 3870 X2 with antialiasing enabled after Vista Service Pack 1 is installed: gains of up to 20%. Vista SP1, of course, adds support for DirectX version 10.1, among other things.
http://techreport.com/news/14707/ubisoft-comments-on-assassin-creed-dx10-1-controversy-updated
 

·
Registered
Joined
·
2,371 Posts
Quote:
Originally Posted by Slomo4shO View Post

$400 Fury X?

Also, I wonder how much of this performance would have been realized with better OpenGL drivers.
rolleyes.gif
Is this guy serious? The Fury X is garbage outside of Doom; it gets destroyed in all other benchmarks. Suddenly AMD gets good performance in a two-month-old game on the API they developed, and red loyalists are going bananas. It's quite funny, to be honest. It's why I love this forum.
 

·
Irrigator of Souls
Joined
·
1,173 Posts
Quote:
Originally Posted by rcfc89 View Post

Is this guy serious? The Fury X is garbage outside of Doom; it gets destroyed in all other benchmarks. Suddenly AMD gets good performance in a two-month-old game on the API they developed, and red loyalists are going bananas. It's quite funny, to be honest. It's why I love this forum.
It's a little something called a "big picture". It involves APIs, game developers, and GPU architectures, and not so much a single game.

Oh, and no surprise: fortunes change over time. It's kind of how capitalism and competition should work.
 

·
Registered
Joined
·
803 Posts
An example of what's to come? It seems likely that AMD's architecture is just better suited and prepared for these new-gen games with more advanced APIs, considering how similar DX12 and Vulkan are to Mantle.
 

·
Banned
Joined
·
1,148 Posts
The Fury X and other Fiji GPUs gained so much because Fiji has a massive front-end bottleneck that was significantly ameliorated by Vulkan. Look at how much the Fury X gains compared to Polaris and Hawaii.
 

·
Irrigator of Souls
Joined
·
1,173 Posts
Quote:
Originally Posted by NuclearPeace View Post

The Fury X and other Fiji GPUs gained so much because Fiji has a massive front-end bottleneck. Look at how much the Fury X gains compared to Polaris and Hawaii.
Check the other thread. I went from 60-80 FPS in OpenGL to 120 - 140 FPS in Vulkan on this launch-day Hawaii at stock clocks. That went to 130 - 160 when I bumped it to 1100/1500. At 1080p mind you, but still.

If I look up at the skybox or at a wall it pegs at 200 frames per second, all on Ultra. Not bad for a 2013 chip.
 

·
Professional Proletariat
Joined
·
5,671 Posts
Quote:
Originally Posted by rcfc89 View Post

Is this guy serious? The Fury X is garbage outside of Doom; it gets destroyed in all other benchmarks. Suddenly AMD gets good performance in a two-month-old game on the API they developed, and red loyalists are going bananas. It's quite funny, to be honest. It's why I love this forum.
So AMD developed Vulkan? Now where are all the guys that swore up and down that AMD had nothing to do with DX12 or Vulkan?

Also, I wouldn't claim "all benchmarks" if I were you, since all it takes is one counterexample to refute the argument, and there are already quite a few.
 

·
Banned
Joined
·
6,565 Posts
Quote:
Originally Posted by infranoia View Post

Check the other thread. I went from 60-80 FPS in OpenGL to 120 - 140 FPS in Vulkan on this launch-day Hawaii at stock clocks. That went to 130 - 160 when I bumped it to 1100/1500. At 1080p mind you, but still.
Same settings as the Pascal launch demonstration. Somebody tell me what is wrong:
 

·
Irrigator of Souls
Joined
·
1,173 Posts
Quote:
Originally Posted by mtcn77 View Post

Same settings as the Pascal launch demonstration. Somebody tell me what is wrong:
No async shader render path is enabled on Pascal, that's what's wrong. I'm getting those frames easily on this OC 290x.
 