Overclock.net banner
81 - 100 of 173 Posts
Quote:
Originally Posted by Offler View Post

It is quite simple to calculate the real power of any GPU: count its ROPs, compute cores, and similar GPU core units, and multiply by frequency.

I have been telling people for a few years now that AMD's GCN is ahead of its time, but people were still looking at partial data rather than the whole picture. Comparing the R9 290X with the GTX 980 Ti has to take into consideration that Nvidia's VRAM runs at 7000MHz stock (not sure about OC headroom) and it has 96 ROPs (render output units), while its compute core delivers 5600 GFLOPS of single-precision performance.

The R9 290X has just 64 ROPs, 5000MHz VRAM, and 5600 GFLOPS single precision as well. And both cards are being compared, while Nvidia is considered slightly better?

No way, guys, when you need 32 more ROPs and a lot more VRAM bandwidth to achieve a similar score.
Uhm, you do realise that the 290X has 320GB/s of bandwidth while the 980 Ti has 336GB/s? So the 980 Ti has more bandwidth than the 290X. And AFAIK, the ROPs on the 290X were never fully fed, as there simply wasn't enough bandwidth available for them; hence the Fury X has 512GB/s but still only 64 ROPs. I find it funny that both the 290X and 980 Ti have exactly 5632 GFLOPS.

I love to praise AMD, but come on, stick to the facts!
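For reference, the 5632 figure both cards share falls straight out of the standard peak-throughput formula. A quick sketch — the shader counts and 1000MHz clocks are my assumed reference-spec numbers, not taken from this thread:

```python
def sp_gflops(shaders: int, clock_mhz: int) -> float:
    """Peak single-precision GFLOPS: each shader retires one FMA (2 FLOPs) per cycle."""
    return shaders * 2 * clock_mhz / 1000

# R9 290X: 2816 stream processors at 1000 MHz
# GTX 980 Ti: 2816 CUDA cores at 1000 MHz base
print(sp_gflops(2816, 1000))  # 5632.0 for both cards
```

Same shader count, same clock, so the identical theoretical FLOPS is no coincidence — the cards differ in how well they keep those shaders fed.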
Quote:
Originally Posted by iSlayer View Post

From what I saw of Hawaii/Tahiti vs. Tonga, Tonga was actually slightly less efficient somehow.
AMD was only doing small die with Tahiti because of the remaining fallout from the HD 2900 XT. With Tahiti to Hawaii to now Fiji, the small die strategy has ended. Nvidia can't go small die to match AMD's high end then release a big die to take an entire segment of the market.

AMD and Nvidia are both going to be chasing to be first out the door with a big die.
Do you really think AMD or Nvidia would realistically start 14/16nm with a massive 400mm2+ die? They both know they are stuck on those nodes for at least two generations, perhaps three. And big dies are expensive in and of themselves, and 14/16nm isn't particularly cheap. Considering how far ahead AMD really is in terms of packaging, we could see the first multi-die GPUs sooner rather than later. Four 100mm2 dies are less expensive to make and yield better than a single 400mm2 die. This has been AMD's plan for years and years, and the 7000 series had the tech in place to launch it with an "HBM-like memory". AMD is aiming for a future where they make just a single GPU die and simply link copies together on one package to make larger ones. Imagine Fiji being 4 chips, each with 1024 GCN cores, 64 TMUs, 16 ROPs and a 1024-bit MC.
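The yield argument can be made concrete with the simple Poisson die-yield model. The defect density below is purely illustrative, not a real 14/16nm figure:

```python
import math

def die_yield(defects_per_cm2: float, area_mm2: float) -> float:
    """Poisson yield model: fraction of good dies, Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

D = 0.5  # illustrative defect density, defects/cm^2
y_big = die_yield(D, 400)    # ~0.135 of 400 mm^2 dies come out good
y_small = die_yield(D, 100)  # ~0.607 of 100 mm^2 dies come out good

# Silicon spent per good GPU: one big die vs. four small dies
# (good small dies can be binned from anywhere on the wafer).
cost_big = 400 / y_big          # ~2957 mm^2 of wafer per good big-die GPU
cost_small = 4 * 100 / y_small  # ~659 mm^2 of wafer per four good small dies
print(cost_small < cost_big)    # True
```

The key point is that a defect only kills the 100mm2 die it lands on, not the whole 400mm2 product, so the small-die approach wastes far less wafer per sellable GPU.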
 
Quote:
Originally Posted by Dupl3xxx View Post

I find it funny that both the 290x and 980ti have exactly 5632 GFLOPS.
Depending on the workload, Nvidia achieves higher real-world FLOPS than AMD, despite AMD cards traditionally having higher theoretical FLOPS.
Quote:
Originally Posted by Dupl3xxx View Post

Do you really think AMD or nvidia would realistically start 14/16nm with a massive 400mm2+ die? They both know they are stuck on those nodes for at least two generations, perhaps three. And big dies are expensive in and of themselves, and 14/16nm isn't particularly cheap. Considering how far ahead AMD really is in terms of packaging, we could see the first multi-die GPUs sooner rather than later. 4 100mm2 dies are less expensive to make and yields better than a single 400mm2 die. This has been AMD's plan for years and years, and the 7000 series had the tech in place to launch it with a "HBM-like memory". AMD are aiming for a future where they make just a single GPU die, and simply link them together on chip to make larger ones. Imagine Fiji being 4 chips, each with 1024 GCN cores, 64 TMUs, 16 ROPs and a 1024bit MC.
This is basically a form of SLI/XFire, so I hope they have their drivers ready for this, because both SLI and XFire are pretty bad compared with single-card setups, regardless of scaling and benchmark numbers.
 
Quote:
Originally Posted by CynicalUnicorn View Post

Fiji was pretty okay. I'm not sure why anybody expected it to totally destroy the Titan X given its specs and what we learned about GCN 1.2 previously (Tonga). But it wasn't bad. It traded 2GB for 175GB/s of bandwidth, and it traded a little performance for an integrated liquid cooler.
I never really expected it to destroy the Titan X, but it should have destroyed the GTX980 and it didn't. That is why I find it unexciting, disappointing and not sitting in my rig right now.
 
Quote:
Originally Posted by Liranan View Post

This is basically a form of SLI/XFire, so I hope they have their drivers ready for this, because both SLI and XFire are pretty bad compared with single-card setups, regardless of scaling and benchmark numbers.
Getting dramatically better though. Remember the Fury X crossfire results with almost perfect scaling?

Nvidia is working towards this also, NVLink is essentially doing the same thing for enterprise. If Nvidia doesn't have R&D burning the midnight oil to scale that down for a single AIB solution..... Well..... Yeah....
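For what it's worth, "almost perfect scaling" has a simple measure. The frame rates below are made-up illustrative numbers, not actual Fury X benchmarks:

```python
def scaling_efficiency(fps_single: float, fps_multi: float, gpus: int) -> float:
    """Multi-GPU scaling efficiency: 1.0 means perfect linear scaling."""
    return fps_multi / (fps_single * gpus)

# Illustrative: one card at 60 fps, two cards at 115 fps
print(scaling_efficiency(60.0, 115.0, 2))  # ~0.96, i.e. 96% scaling
```

Of course, average-FPS scaling is exactly the metric the post above says to be wary of — frame pacing and driver support matter just as much.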
 
Quote:
Originally Posted by Volvo View Post

Until you realise a 290X equals a 980Ti in DX12.

Not so smashy smashy now, huh?

Besides, NVIDIA doesn't just smash AMD, right now they're strangling 700 series users as well because your fancy 780Ti flounders just like AMD cards when it comes to Gameworks-based games.

So NVIDIA looking cool smashing AMD? Not so.
More so looking like a bunch of jerks ruining games for people by creating exclusivity, and still looking worse at it when DX12 rolls around and you realise your top-end 980Ti is going to get crapped on by a 2 year old card.
Hopefully AMD does not go bankrupt before games start using DX12. I'd also like to see your source on the 2 fps difference thing.
 
Quote:
Originally Posted by CynicalUnicorn View Post

Don't be ridiculous! The R5 series is sure to see some rebrands like Oland and Cape Verde! :upsidedwn

But you're almost certainly right. Worst case, they do what Nvidia did during the 8/9/200 series - twice - and rebrand a node-shrunken flagship. That's reasonable improvements regardless, I suppose. The Xbox 360 went from a two-die, 90nm system with a 203W power brick at launch to a single-die, 45nm system with a 120W power brick eight years later. So... that's somethin'.
Oland wasn't a rebrand and neither was the recent M370X. AMD's lack of marketing lets them down.

A reworked Fury at 300-400mm2 with about a 1.2GHz stock clock would land them right back in the game if they get there a few months ahead of Nvidia's new line.
 
Quote:
Originally Posted by Mygaffer View Post

People get so caught up in "their" team that they ignore what would be best for them as a consumer. It is one of my biggest pet peeves, people who put corporate interests above their own.
Many of them are shills.
Quote:
Originally Posted by SpeedyVT View Post

The market share issue is that Nvidia has a much more aggressive advertising campaign. Personally, I find green a far more attractive color than red, probably because our eyes can see more shades of green than of any other color.
I thought it was red on green.

Nvidia I don't really see aggressive advertising from so much as better branding.
Quote:
Originally Posted by Kpjoslee View Post

Maybe they are stockholders
Yup
Quote:
Originally Posted by SpeedyVT View Post

Definitely, what's good for the consumer is never good for the stock holders. The stock market is a terrible system, at least the way we've devised it.
Eh sometimes.
Quote:
Originally Posted by mtcn77 View Post

Right on topic SJW.
What
Quote:
Originally Posted by Offler View Post

It is quite simple to calculate the real power of any GPU: count its ROPs, compute cores, and similar GPU core units, and multiply by frequency.

I have been telling people for a few years now that AMD's GCN is ahead of its time, but people were still looking at partial data rather than the whole picture. Comparing the R9 290X with the GTX 980 Ti has to take into consideration that Nvidia's VRAM runs at 7000MHz stock (not sure about OC headroom) and it has 96 ROPs (render output units), while its compute core delivers 5600 GFLOPS of single-precision performance.

The R9 290X has just 64 ROPs, 5000MHz VRAM, and 5600 GFLOPS single precision as well. And both cards are being compared, while Nvidia is considered slightly better?

No way, guys, when you need 32 more ROPs and a lot more VRAM bandwidth to achieve a similar score.
VRAM frequency != VRAM bandwidth.

And AMD actually offers more physical bandwidth.

The 980 is 4.6 TFLOPS.
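To spell out why memory frequency alone misleads: bandwidth is bus width times effective data rate. A quick sketch using the two cards' reference specs:

```python
def bandwidth_gbps(bus_width_bits: int, effective_mhz: int) -> float:
    """Memory bandwidth in GB/s = bus width in bytes x effective data rate in MT/s / 1000."""
    return bus_width_bits / 8 * effective_mhz / 1000

print(bandwidth_gbps(512, 5000))  # R9 290X: 512-bit bus at 5000MHz -> 320.0 GB/s
print(bandwidth_gbps(384, 7000))  # GTX 980 Ti: 384-bit bus at 7000MHz -> 336.0 GB/s
```

The 290X's slower memory clock is more than offset by its wider bus, which is why comparing MHz alone says nothing about actual bandwidth.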
Quote:
Originally Posted by Dupl3xxx View Post

Uhm, you do realise that the 290X has 320GB/s of bandwidth while the 980 Ti has 336GB/s? So the 980 Ti has more bandwidth than the 290X. And AFAIK, the ROPs on the 290X were never fully fed, as there simply wasn't enough bandwidth available for them; hence the Fury X has 512GB/s but still only 64 ROPs. I find it funny that both the 290X and 980 Ti have exactly 5632 GFLOPS.

I love to praise AMD, but come on, stick to the facts!
Do you really think AMD or Nvidia would realistically start 14/16nm with a massive 400mm2+ die? They both know they are stuck on those nodes for at least two generations, perhaps three. And big dies are expensive in and of themselves, and 14/16nm isn't particularly cheap. Considering how far ahead AMD really is in terms of packaging, we could see the first multi-die GPUs sooner rather than later. Four 100mm2 dies are less expensive to make and yield better than a single 400mm2 die. This has been AMD's plan for years and years, and the 7000 series had the tech in place to launch it with an "HBM-like memory". AMD is aiming for a future where they make just a single GPU die and simply link copies together on one package to make larger ones. Imagine Fiji being 4 chips, each with 1024 GCN cores, 64 TMUs, 16 ROPs and a 1024-bit MC.
Sorry, I was a bit vague. I expect small dies first, but a race there too, same as with the big dies.

I am highly skeptical of the scaling, logistics and viability of such a thing.
Quote:
Originally Posted by Mrzev View Post

Hopefully AMD does not go bankrupt before games start using DX12. I'd also like to see your source on the 2 fps difference thing.
Delusions and fantasy
 
Quote:
Originally Posted by iSlayer View Post

Delusions and fantasy
Well, considering how their stock has been crashing and they are losing a lot of market share... yeah, they need to do something. They also have until 2019 to pay off most of their debts, so they have roughly a couple of years to become profitable again. If not, they will need to declare bankruptcy. So yeah, DX12 games will be out by then.
 
Quote:
Originally Posted by error-id10t View Post

So Nvidia will bring out new cards early next year and AMD will release these mid next year, that's what I'm thinking. They have caught up with DX12 apparently which looks great and they have HBM bedded in with the first Gen done and dusted now.

I think we may have competition, the nay-sayers will still say AMD sucks because I just can't see them being able to deliver these before Nvidia next Gen cards... but then I'm wondering what happens to those as doesn't AMD still have HBM vRAM rights (first dibs to Gen2)?
HBM2 isn't scheduled to be in volume production until Q2, last I read. So the earliest I'd expect even a halo node-shrink card is April or May, from either GPU company. I guess we may see Nvidia do a stopgap GDDR5 node-shrink GPU ahead of AMD's first HBM2 launch, or a replay of the 28nm introduction, where Nvidia turned its internet hounds loose with the "wait until you see what we have" mantra, to try to minimize market-share shift from being late with HBM2.
 
Quote:
Originally Posted by raghu78 View Post

More like the HD 4870. AMD needs a product which brings back serious competition to the GPU market. AMD also needs to address the gap in perf/watt, perf/sq mm and perf/transistor against Nvidia GPUs. Maxwell is massively ahead in terms of perf/watt and slightly ahead in terms of perf/sq mm and perf/transistor. AMD needs to get back to AT LEAST parity, if not an advantage, in perf/sq mm and perf/watt. Nvidia is in a very strong position after dominating for the last year. By the time FinFET GPUs launch, Nvidia will have enjoyed a monopoly situation for more than a year. It's very difficult to recover market share unless you have a better product than the competition when you are coming from so far behind. I hope AMD does not disappoint. AMD will surely die if they don't take back market share in CPUs and GPUs. The current situation is really bad and won't go on for long. One way or the other, we will either see AMD perform or perish.
Nope, I agree with him: the 5870 was pretty brutal. It pushed Nvidia nearly off the cliff; their GTX 480 was barely faster than the 5870 while coming with many more disadvantages.
 
Quote:
Originally Posted by Vesku View Post

HBM2 isn't scheduled to be in volume production until Q2, last I read. So the earliest I'd expect even a halo node-shrink card is April or May, from either GPU company. I guess we may see Nvidia do a stopgap GDDR5 node-shrink GPU ahead of AMD's first HBM2 launch, or a replay of the 28nm introduction, where Nvidia turned its internet hounds loose with the "wait until you see what we have" mantra, to try to minimize market-share shift from being late with HBM2.
I can see it now:

Nvidia launches Pascal with 4-8 stacks of HBM1 in February-March 2016 and is the first one out with a new architecture.
AMD idles, waits for HBM2 availability, and comes dragging behind with HBM2 in June 2016.

That's the "strategic" AMD I know...
 
Quote:
Originally Posted by iLeakStuff View Post

Nvidia launches Pascal with 4-8 stacks of HBM1 in February-March 2016 [...]
Interesting idea, but Nvidia isn't even close to being at the same level as AMD when it comes to packaging. The chance of them being able to pack 4, let alone 8, stacks of HBM around a GPU is at this time unimaginable. What might happen is that Nvidia goes with two stacks of HBM2, giving them a bandwidth of 512GB/s. Considering what they are able to do with 336GB/s (GM200), that should be plenty for the first generation of Nvidia cards on 14/16nm. As for AMD, they have already proven they know how to do this, so expect them to have 4 stacks on whatever their next top-end GPU ends up being. I also wouldn't be surprised at all to see HBM2 on most of AMD's next generation of cards. Less power, a less complex PCB, and less PCB altogether all help save costs. And now that their tech isn't just a working demo but has actually launched in a real product, the next iteration will be better and cheaper to make. Remember how the 4770 was AMD's first product on TSMC's 40nm node? How did AMD's launch of the 5000 series go? Near perfect. The Fiji GPUs are, for better or worse, a "test run" for a lot of new tech. Let's just hope this "test" was worth it.

TL;DR: Nvidia won't be doing any GPU with 8 stacks of HBM anytime soon; they don't have the tech yet.
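The stack math behind those bandwidth figures, assuming the commonly published per-stack rates (128GB/s for first-generation HBM, 256GB/s for HBM2):

```python
def hbm_bandwidth_gbps(stacks: int, gbps_per_stack: float) -> float:
    """Total memory bandwidth from n HBM stacks, each with its own 1024-bit interface."""
    return stacks * gbps_per_stack

print(hbm_bandwidth_gbps(4, 128))  # Fury X, 4 stacks of HBM1: 512.0 GB/s
print(hbm_bandwidth_gbps(2, 256))  # hypothetical 2-stack HBM2 card: 512.0 GB/s
```

So two HBM2 stacks match the Fury X's bandwidth with half the interposer routing and packaging complexity, which is why a conservative two-stack design is plausible for a first HBM part.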
 
Quote:
Originally Posted by iLeakStuff View Post

I can see it now:

Nvidia launches Pascal with 4-8 stacks of HBM1 in February-March 2016 and is the first one out with a new architecture.
AMD idles, waits for HBM2 availability, and comes dragging behind with HBM2 in June 2016.

That's the "strategic" AMD I know...
Have you ever considered that AMD has been working on HBM for a long time, probably as long as GCN has been out? That's probably why they moved from VLIW4/5 to GCN: initially GCN was slower until you considered the ACEs. AMD just played the long game. I'm hoping next-generation GCN provides better shader core performance.
 
Quote:
Originally Posted by Clocknut View Post

Nope, I agree with him: the 5870 was pretty brutal. It pushed Nvidia nearly off the cliff; their GTX 480 was barely faster than the 5870 while coming with many more disadvantages.
Not true at all about it being "barely faster". I had both cards, and after a couple of driver releases I gladly sold the 5870, as it was no match for the GTX 480, especially when overclocked, seeing that Fermi gained a lot more from overclocking than its competitor did. I think most would agree that the 580 was a good bit faster than the 5870, while at the same time the 580 was only a little faster than the 480. The minimum FPS was also a very big improvement with the 480 over the 5870.

As long as you had a good cooler on the 480, temps could be managed well enough. I eventually put an Arctic Accelero 3 cooler on my 480 and had no trouble reaching 920MHz core stable with 72°C load temps.
 
Quote:
Originally Posted by Jdjordan View Post

Not true at all about it being "barely faster". I had both cards, and after a couple of driver releases I gladly sold the 5870, as it was no match for the GTX 480, especially when overclocked, seeing that Fermi gained a lot more from overclocking than its competitor did. I think most would agree that the 580 was a good bit faster than the 5870, while at the same time the 580 was only a little faster than the 480. The minimum FPS was also a very big improvement with the 480 over the 5870.

As long as you had a good cooler on the 480, temps could be managed well enough. I eventually put an Arctic Accelero 3 cooler on my 480 and had no trouble reaching 920MHz core stable with 72°C load temps.
Not sure if you're thinking of the 580, because the 400 series was basically the HD 4000 series in temperature output. A friend's 470 once melted the plastic around the card.
 