Originally Posted by matty0610
Originally Posted by CCast88
I'll leave AMD for the rest of my life if Fud is actually right.
I thought 30% faster than last year's flagship was actually a good thing. Am I missing something here?
Every generation but the last one, the new single-GPU flagship has roughly matched the previous dual-GPU card. (And the last one was more of a half-step than a full step anyhow.)
x1950XTX CFX < HD3870/HD2900XT (iirc anyway?)
HD3870x2 = HD4870
HD4870x2 = HD5870
Originally Posted by Neroh
Won't be long before we know for sure how good it is. Every leak contradicts the others, so I'm pretty skeptical.
They're using pre-production drivers and comparing with 3DMark, with the caveat that "in-game performance is better". I'm willing to bet the original claim of HD 6990 performance in a single GPU is about right.
Originally Posted by Dmac73
Originally Posted by iamgaming
Look at the improvement from the HD 4000 series to the HD 5000 series, HD 4890 to HD 5870 (the highest single GPUs from each series). The HD 7970 should be more than 40% faster because it's not a revised, polished version, it's a completely new architecture. Though speaking of new architectures, nVidia's Fermi was a failure when it was introduced with the GTX 400 series.
Same number of ROPs, and GCN is more of a rearrangement than a brand new architecture; it's still borrowing a LOT from its predecessor processor-wise, as far as I've seen and read.
More stream processors is all it's really bringing to the table, as memory bandwidth wasn't a major limiting factor on Cayman. 30% seems accurate. Time will tell.
It won't touch a 1024C/512bit GTX 780.
Link to the architectural details? I wasn't aware that AMD had released any; I thought they'd still be under NDA.
And that GTX 780 won't be out for 6+ months after this, more than enough time for AMD to refine like crazy. That said... I doubt 1024 shaders could easily be done, even on 28nm; nVidia's shaders are a lot bigger than AMD's, after all. I wonder where WCCF got their numbers from.
Even so, nVidia will have slightly higher performance... It's shaping up to be a cycle now: AMD releases new GPUs, nVidia releases faster GPUs a few months later, and both release revisions on the same node later that year.
Originally Posted by OmegaNemesis28
about 30 percent faster in 3DMark
when compared to the Radeon HD 6970.
It already basically says "not true for real world performance"
Exactly. They don't even say which 3DMark. If it's 3DMark 11, then that would mean not much better tessellation performance (or the drivers aren't optimised for the new chips yet), but if it's Vantage then it could just be because 3DMark tends to suck at showing real-world performance these days.
Originally Posted by Phil~
Being a new architecture, the yield will naturally be low (think Fermi here) before the optimizations kick in. Not surprising.
Nah, 40nm was a bad node. AMD only (mainly) bypassed its issues because they made the HD 4770 on it first to work out the problems before releasing flagship cards on it. The new-architecture thing may or may not end up giving it bad yields.
Originally Posted by Polymerabbit
Originally Posted by Squirrel
Is there some kind of law that states a new card needs to be at least 60% faster or it will still be considered bad? Why not 59% or 61%?
Unfortunately the English language lacks a single word for "greater than or approximately equal to".
If the 7970 has approximately 60% more performance, then a $450-500 price tag is fairly justified and it should also fare well against the GTX 680 (or 780 if nVidia decides to skip numbers again). 30% more performance would not cut it in that regard.
Then type it out programming-style like I would. And >=50% more performance is the minimum you'd be looking for with launch drivers (equal to the last generation's dual-GPU card).
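Something like this, if we really have to spell it out. Just a throwaway sketch of "greater than or approximately equal to"; the 60% target and the 5% wiggle room are only the numbers being argued about in this thread, nothing official:

# "Greater than or approximately equal to", spelled out programming-style.
# The 60% target and 5% wiggle room are just the figures from this thread.
def at_least_about(gain, target=0.60, wiggle=0.05):
    """True if the performance gain meets the target, give or take the wiggle room."""
    return gain >= target * (1.0 - wiggle)

print(at_least_about(0.59))  # True  -> 59% is close enough to 60%
print(at_least_about(0.61))  # True
print(at_least_about(0.30))  # False -> 30% wouldn't cut it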
Originally Posted by dkL33t
HD 6970 average FPS on Crysis Warhead at max settings on 1920x1200 is 30.2, then
30.2 x 1.3 = only 39.3 average fps?
If that's true, then I'm waiting for nVidia.
Read the actual article: 30% is for 3DMark, with better performance in games.
Originally Posted by dkL33t
I think I would be happy with a 10fps increase, and I think I could squeeze another 10 out of a nice overclock... that's damn good IMO.
I'm expecting at least 20fps better than the HD 6970 in Crysis Warhead, and THEN to OC it to make it 30fps better. This card, if the information is true, will not get my money; it'll probably go to a 580 instead.
I doubt any new generation in the near future will get a 30fps gain after overclocking in Crysis... nVidia or AMD. I wouldn't be surprised if this was, say, 15fps extra on average and about 20 after overclocking, with nVidia being 20-25. (Assuming a 1024-shader beast can actually clock high... I can seriously see it being another Fermi.)
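To put rough numbers on that, here's the same back-of-envelope maths. The 30.2fps HD 6970 baseline is dkL33t's figure from above; the uplift percentages are just the ones floating around this thread, so treat the outputs as guesses, not benchmarks:

# Back-of-envelope fps projections from a rumoured percentage uplift.
def projected_fps(baseline, uplift_pct):
    """Scale a baseline average fps by a percentage improvement."""
    return baseline * (1.0 + uplift_pct / 100.0)

baseline = 30.2  # HD 6970 Crysis Warhead average at 1920x1200, max settings
for pct in (30, 50, 60):
    new_avg = projected_fps(baseline, pct)
    print(f"+{pct}%: {new_avg:.1f} fps ({new_avg - baseline:.1f} fps gain)")
# +30%: 39.3 fps (9.1 fps gain)
# +50%: 45.3 fps (15.1 fps gain)
# +60%: 48.3 fps (18.1 fps gain)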
Originally Posted by Nowyn
I just love how everyone thinks the GCN architecture will be superior.
Until we have reliable benchmarks it's impossible to say. Past VLIW architectures were gaming-focused; they sucked at GPGPU compared to nVidia's offerings. If you read any GCN architecture analysis, it's clear that the goal of GCN is to push GPGPU to catch up with nVidia, hence additional core elements and caches that don't help gaming much while increasing core size, making GCN dies bigger than the 69x0's even on 28nm. With that said, GCN could be faster, the same, or even slightly slower in gaming when compared to the 6-series VLIW cards. GPGPU tweaks can help with gaming performance, but there can also be trade-offs.
So unless we have reliable hard data, it's just pointless to predict how good GCN cards will be.
The VLIW architecture was good for GPGPU (theoretically superior to nVidia's architecture), but programming for something that wide sucked, as did the tools for using the GPGPU side of it.
Originally Posted by drbaltazar
If your computer is doing everything at 24p and only the screen ups it to 60p internally after the computer is done rendering everything at 24p, GPU power becomes highly overrated. I wonder when Intel will optimise for email@example.com soon for gamers on a budget. Oh, it won't be perfect, but only elitists will be able to tell the difference once Intel is done.
24fps is only smooth if it never differs from 24fps, no dips or highs. 30fps minimum is really what's required for 3D because you can't have it at a steady fps like a movie.
As for 1080i vs 1080p... that's up to monitor manufacturers, and when I had an Xbox 360, 1080i looked like crap compared to 1080p just going by movies.
Originally Posted by LBGreenthumb
I don't put any stock in these pre-release benchmarks, mostly because we all know AMD always has issues with their drivers, which can hold a card back 5-10% (possibly more) in overall performance.
Indeed, all new GPUs tend to have sucky drivers at first. nVidia got a 10-15% performance increase from the 256-series drivers for Fermi-based cards, IIRC.
That said, 30% improvement is great for 3DMark.
Originally Posted by Majin SSJ Eric
Originally Posted by Nautilus
What happened to the statement: "7970's performance comparable to 6990?"
I'll be disappointed if it doesn't outperform the 6990.
I think you'll probably be disappointed then. We should know soon enough...
No, I doubt s/he will. It's a new generation and will probably equal the last-gen dual-GPU card; 30% in 3DMark would actually put it up there.