Overclock.net › Forums › Industry News › Hardware News › [VideoCardz] AMD Radeon RX 480 to cost 199 USD

[VideoCardz] AMD Radeon RX 480 to cost 199 USD - Page 71  

post #701 of 1250
Quote:
Originally Posted by NightAntilli View Post

Ugh. This has just turned into a storm of nonsense... Just throwing a few things out.

TDP =/= power consumption
Pascal = software async compute that uses CPU, GCN = hardware accelerated concurrent async compute + graphics.
RX 480 = GTX 980 equivalent costing $200 for 4GB and $230 for 8GB. Anyone who downplays this value is either trolling or has an irrational bias.
FL12_1 was an unnecessary feature level to cater to nVidia so that they can pretend their Maxwell cards had superior DX12 support compared to GCN while they clearly didn't.

TDP roughly equals average power consumption in value; will you argue with that?

Pascal = hardware async compute (dynamic load balancing/on-the-fly queue distribution), Maxwell = software async compute (static load balancing/context-switching stalls), GCN = an entirely different hardware implementation of async compute (admittedly the one that benefits from it the most). But otherwise, it's overrated on PC.

RX 480 = R9 290 until proven otherwise. The value is unprecedented, however; that's undeniable.

FL12_1 was present. If you write it off as "insignificant", it just means you have an AMD bias, because it's about as significant as this whole async compute deal. And I wish I were kidding.
Quote:
Originally Posted by airfathaaaaa View Post

It doesn't use the CPU at all; otherwise the impact would be significant and it would have shown in the benches on site...
Pascal is brute-forcing it, nothing more, nothing less. If it were a software solution, Maxwell would have seen at least a bit of a win, since it's practically the same card.

Pascal to Maxwell is about as much of a change as Polaris to Tonga/Fiji so far. Actually, I am willing to claim Pascal has more actual changes compared to Maxwell than Polaris has compared to Tonga, until proven wrong.
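The async scheduling distinction drawn above (dynamic load balancing vs. a static split) can be illustrated with a deliberately simplified toy model. This is my own sketch with made-up numbers, not how any real GPU scheduler works: with a static partition, units reserved for one queue sit idle once that queue drains, while idealized dynamic balancing lets any idle unit pick up remaining work.

```python
# Toy model of static vs. dynamic work distribution across execution units.
# All numbers are made up for illustration; real GPU scheduling is far more complex.

def static_makespan(gfx_work, compute_work, gfx_units, compute_units):
    """Fixed partition: each queue only ever uses its own units."""
    return max(gfx_work / gfx_units, compute_work / compute_units)

def dynamic_makespan(gfx_work, compute_work, total_units):
    """Idealized dynamic balancing: idle units immediately pick up any remaining work."""
    return (gfx_work + compute_work) / total_units

# 100 units of graphics work, 20 of compute, 10 execution units total.
static_t = static_makespan(100, 20, gfx_units=8, compute_units=2)   # 12.5
dynamic_t = dynamic_makespan(100, 20, total_units=10)               # 12.0
print(static_t, dynamic_t)
```

In this toy case the static split loses because the two compute units finish early and idle, while the dynamic scheduler keeps everything busy; how large that gap is in practice depends entirely on the workload mix.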
post #702 of 1250
Quote:
Originally Posted by Phixit View Post

Personal attacks won't make you look any smarter, just saying.

He's a journalist; he should be able to take it.

Removed a couple of sentences - they seemed too harsh.

I'm just bitter - I always see the journalists give Nvidia a free pass with respect to Maxwell. It wasn't until members here found issues with Maxwell and multi-engine that they started to acknowledge the problem.

Bringing up AMD's lack of Conservative Rasterization to somehow muddy the waters doesn't make sense. Did AMD promise hardware support in prior versions of GCN?

Anyhow, Nvidia claimed that they'd release a driver sometime last year, and not a single site has done a follow-up article. Do the millions of Maxwell users a service and investigate the matter.
Edited by MikeDuffy - 6/3/16 at 1:53pm
post #703 of 1250
Quote:
Originally Posted by lolfail9001 View Post

TDP roughly equals average power consumption in value; will you argue with that?

Pascal = hardware async compute (dynamic load balancing/on-the-fly queue distribution), Maxwell = software async compute (static load balancing/context-switching stalls), GCN = an entirely different hardware implementation of async compute (admittedly the one that benefits from it the most). But otherwise, it's overrated on PC.

RX 480 = R9 290 until proven otherwise. The value is unprecedented, however; that's undeniable.

FL12_1 was present. If you write it off as "insignificant", it just means you have an AMD bias, because it's about as significant as this whole async compute deal. And I wish I were kidding.
Pascal to Maxwell is about as much of a change as Polaris to Tonga/Fiji so far. Actually, I am willing to claim Pascal has more actual changes compared to Maxwell than Polaris has compared to Tonga, until proven wrong.

Average? Why, because it suits Nvidia's way of "forgetting" about the PCIe power supply every time?

The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by the CPU that the cooling system in a computer is required to dissipate in typical operation. Rather than specifying the CPU's real power dissipation, TDP serves as the nominal value for designing CPU cooling systems.
https://en.wikipedia.org/wiki/Thermal_design_power

So what you are saying is that GCN 3 to 4 is the same as Maxwell 2 to Pascal? lol. You clearly haven't read the P100 whitepaper. Do you know the changes are so few and minor that it is literally a die shrink?
post #704 of 1250
Quote:
Originally Posted by airfathaaaaa View Post

Average? Why, because it suits Nvidia's way of "forgetting" about the PCIe power supply every time?

The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by the CPU that the cooling system in a computer is required to dissipate in typical operation. Rather than specifying the CPU's real power dissipation, TDP serves as the nominal value for designing CPU cooling systems.
https://en.wikipedia.org/wiki/Thermal_design_power

So what you are saying is that GCN 3 to 4 is the same as Maxwell 2 to Pascal? lol. You clearly haven't read the P100 whitepaper. Do you know the changes are so few and minor that it is literally a die shrink?

You forgot to bold the other part, which is actually more important. I did it for you. Now do you see how it correlates with typical average power consumption? If you don't, you are advised to repeat your Physics 101 course.

And yes, I will claim that GCN 3 to 4 is about the same as Maxwell 2 to Pascal until I see extensive whitepapers for both.

Because if you actually make a Pascal version of the "all new" Polaris slide, you will actually get about the same number of "new" blocks, if not more.
post #705 of 1250
Quote:
Originally Posted by lolfail9001 View Post

You forgot to bold the other part, which is actually more important. I did it for you. Now do you see how it correlates with typical average power consumption? If you don't, you are advised to repeat your Physics 101 course.

And yes, I will claim that GCN 3 to 4 is about the same as Maxwell 2 to Pascal until I see extensive whitepapers for both.

Because if you actually make a Pascal version of the "all new" Polaris slide, you will actually get about the same number of "new" blocks, if not more.
So the article says maximum on a typical system, yet you are saying it's the average on a typical system. Somehow I feel you are trying way too hard to justify something we have already known for years...
post #706 of 1250
Quote:
Originally Posted by SuperZan View Post

Most definitely. A company hurting for market-share choosing to address the largest percentage of the market is so stupid. What a terrible business move. And of course, logically, releasing this card is proof that AMD's engineers are pants-on-head retarded and this is the absolute best they can do. Terrible company, terrible products, may they leave us in peace sooner rather than later.

I for one welcome our new incremental-improvement overlords with open wallet and blank cheque.

I think the problem is that the performance is going to prevent a lot of people from upgrading, which is going to limit its market-share gains.

If this chip offers R9 390 performance, it limits purchasers to people who own cards slower than a GTX 970, which significantly limits its potential market-share gains. This card is a lot slower than a lot of people anticipated, and the price reflects this.

I don't think anyone anticipated that it would take a dual-card version of this card to challenge Nvidia's top end. This is slower than any of AMD's past cards versus their Nvidia competition.

As an engineering effort, from what we have seen so far, Polaris 10 is underwhelming. 150 W for R9 390 performance isn't that great, as it's way too close to a GTX 980 as far as performance per watt goes. Also, considering the likely transistor count, it doesn't appear AMD improved that much with regard to its relative position against Nvidia's products.

However, what is a big improvement compared to AMD's past efforts is the marketing. Market segmentation and positioning are turning this disappointing chip into an exciting endeavor. If this chip were priced at 300/250 instead of 230/200, the reception would have been drastically different for the performance.

Although this is a marketing success in the public's eyes, AMD stockholders are not happy. This is simply too cheap a price for a mid-sized 14nm FinFET design. From the public reception, you would think the stock would be going up, but investors are not happy with the pricing. The stock has dropped 11% since the announcement, because market-share gains need decent profit margins to be successful. Nvidia's stock has not been affected in the slightest.

The marketing for AMD is much better this time around and is turning what is a disappointing GPU into a good one. The opposite can be said for Nvidia: what is actually a good chip versus its predecessor is being undermined by poor marketing in the form of positioning and market segmentation.

If I were a company making GPU products, I would take GP104, but I would take AMD's marketing this time around. Nvidia screwed the pooch on its marketing, and this could hurt them.
post #707 of 1250
Quote:
Originally Posted by tajoh111 View Post

This is slower than any of AMD's past cards versus their Nvidia competition.

Its competition is $200 to $250 GPUs. End of story, really.
post #708 of 1250
Quote:
Originally Posted by airfathaaaaa View Post

So the article says maximum on a typical system, yet you are saying it's the average on a typical system. Somehow I feel you are trying way too hard to justify something we have already known for years...

If you can't grasp the difference between
Quote:
maximum amount of heat generated

and
Quote:
maximum amount of power consumed

We have little to talk about.

And yes, I am not in the mood to get down and dirty with the physics of thermal transfer et cetera. The point is that a 600 W power consumption spike for 1 ms with an otherwise perfectly stable 150 W power consumption rate is not going to change the TDP from 150 W to 600 W.
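As a quick sanity check of that spike claim (my own back-of-the-envelope arithmetic, with hypothetical numbers, not figures from any datasheet): time-averaging a 600 W, 1 ms spike over a second of otherwise steady 150 W draw barely moves the average, even though the peak is four times higher.

```python
# Hypothetical power trace: 999 ms at a steady 150 W plus a single 1 ms spike at 600 W.
samples_w = [150.0] * 999 + [600.0]  # one sample per millisecond

peak_w = max(samples_w)
average_w = sum(samples_w) / len(samples_w)

print(peak_w)               # 600.0
print(round(average_w, 2))  # 150.45 -- the spike shifts the average by under half a watt
```

This is why a cooler sized for the average heat output can ride out brief transients; the thermal mass of the heatsink absorbs millisecond-scale spikes.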

As for tajoh's post... +rep.
post #709 of 1250
Quote:
@Roy_techhwood @ryanshrout @AMDRadeon Async compute is definitely a super useful feature in DX12.

— Dan Baker (@dankbaker) June 2, 2016
Quote:
@dankbaker @AndrewLauritzen @ryanshrout @Roy_techhwood @AMDRadeon Async is awesome, we gained about 3ms up to 5ms on consoles(huge for 60hz)

— Tiago Sousa (@idSoftwareTiago) June 2, 2016

Quote:
@dankbaker @AndrewLauritzen @ryanshrout @Roy_techhwood @AMDRadeon Async/ AMD GPU intrinsics were key for hitting perf on consoles <3

— Tiago Sousa (@idSoftwareTiago) June 2, 2016
Quote:
@dankbaker @AndrewLauritzen @ryanshrout @Roy_techhwood @AMDRadeon Plus great profilling tools on PS4/Xb1 – pc has much to improve on tools

— Tiago Sousa (@idSoftwareTiago) June 2, 2016
Quote:
@dankbaker @AndrewLauritzen @ryanshrout @Roy_techhwood @AMDRadeon we were able to fit gpu particles / tex transcoding / most post-processes

— Tiago Sousa (@idSoftwareTiago) June 2, 2016
Quote:
@idSoftwareTiago @dankbaker @AndrewLauritzen @ryanshrout @Roy_techhwood @AMDRadeon Yep, same sort of gains here. Use it!

— James McLaren (@selfresonating) June 2, 2016

Quote:
@mickaelgilabert @idSoftwareTiago @dankbaker @AndrewLauritzen @ryanshrout @Roy_techhwood @AMDRadeon Think we're actually at about 6~7ms@30hz

— James McLaren (@selfresonating) June 2, 2016
Quote:
@selfresonating @idSoftwareTiago @dankbaker @AndrewLauritzen @ryanshrout @Roy_techhwood @AMDRadeon Same on FC Primal ~2.5/3ms. Async FTW

— Mickael Gilabert (@mickaelgilabert) June 2, 2016
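To put those quoted figures in frame-budget terms (simple arithmetic of mine, not numbers from the developers themselves): a frame at 60 Hz has roughly 16.7 ms and a frame at 30 Hz roughly 33.3 ms, so the reported savings are a sizable slice of the budget.

```python
def frame_budget_ms(hz):
    """Milliseconds available per frame at a given refresh rate."""
    return 1000.0 / hz

def saving_pct(saved_ms, hz):
    """A reported time saving expressed as a percentage of the frame budget."""
    return 100.0 * saved_ms / frame_budget_ms(hz)

print(round(frame_budget_ms(60), 2))  # 16.67
print(round(saving_pct(3, 60)))       # 18 -> the "3 ms at 60 Hz" figure is ~18% of a frame
print(round(saving_pct(5, 60)))       # 30
print(round(saving_pct(7, 30)))       # 21 -> the "6~7 ms @ 30 Hz" figure is ~18-21%
```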

I think most games are going to be favouring AMD cards, and it's probably a good idea to go AMD.
Edited by badrapper - 6/3/16 at 3:20pm
post #710 of 1250
Quote:
Originally Posted by SKYMTL View Post

I'm pretty passionate about the conversation since it's completely marketing-focused and doesn't benefit end users in any way. It's actually causing folks to ignore a few really key developments within the API that benefit all users far more than async, from APUs all the way up to leading-class GPUs.

Anyway, I'm going to stay out of the async conversation from here on out, since it's so full of FUD that trying to sift through it is impossible at this point. We are way too far down the rabbit hole...

http://wccftech.com/async-compute-praised-by-several-devs-was-key-to-hitting-performance-target-in-doom-on-consoles/

It's not benefiting "end users" on PC because no games on PC have taken advantage of it yet, and the market-share leader hasn't had support for it, so that hasn't exactly sped things up...

Look at what console devs are achieving with async. ND have pushed it further than anyone; afaik they are putting something like 30% of their engine in compute with Uncharted 4. While I don't expect devs to get anywhere near them on PC, I would hope that some try to support the future of rendering techniques. Otherwise, what is the point of buying the best hardware if most of its potential is wasted? Not to be overly callous, but saying async is only marketing-focused and has no benefit is completely ridiculous. Surprised that statement comes from a "journalist". But these days I guess I shouldn't be that surprised, unfortunately...

It's laughable that the "under-powered" PS4 has put out a game (UC4) that looks nearly as good as the best PC games have to offer, and it does it while costing $300 for the entire system. At this point I'm seriously considering picking up a PS Neo instead, rather than spending huge amounts of money on expensive GPUs while developers stick to an ancient API and a bunch of people downplay the future just because their preferred brand doesn't currently offer anything like the competition. This is not a short-term thing; it's not going to make or break Nvidia. Volta will support it - I wonder what your stance on it will be then?

Asynchronous shading (or whatever Nvidia's equivalent would be called) isn't exclusive to AMD; it's something devs have wanted for years, and it was one of the biggest design decisions Sony made with the PS4. But devs have also wanted something like Mantle/Vulkan for years. Now they have all of them on PC, and very few have put their money where their mouths are.