Originally Posted by CAHOP240
Well, I've been going crazy trying to find news on the 6970 and how it might stack up and couldn't find anything....so........I decided to do some calculations on my own.
The 6870 has 1120 shaders. So far the 6970 is rumored to have 1536 shaders. Feel free to check me if I'm wrong but I figured it would go like this:
1120 shaders = 100%, which works out to 11.2 shaders per percentage point
1536 / 11.2 = ~137%, or a ~37% increase in performance (if core speeds and everything else is the same)
Looking at the Guru3D Review
I extrapolated these numbers:
A 6870 scores 46fps
in BC2 all maxed out a 1920x1200
That means a 6970 should score about (46 * 1.37) = ~63.1fps
A 6870 scores 28fps
in Crysis:WH (enthusiast) at 1920x1200
That means a 6970 should score about (28 * 1.37) = ~38.4fps
And so on and so forth. The math checks out when you look at 6850 performance:
The 6850 has 960 shaders = ~86% of a 6870 (960 / 1120)
46fps * 0.857 = ~39.4fps. The 6850 scored 38fps in the review. (BC2 maxed out at 1920x1200)
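That linear scaling rule is easy to sanity check in code. Here's a minimal sketch (Python) of the arithmetic above; it assumes performance scales linearly with shader count at equal clocks, which is the whole premise of this speculation, so treat the outputs as estimates, not benchmarks:

```python
# Speculative: project fps by linear shader-count scaling at equal clocks.
def scaled_fps(base_fps, base_shaders, target_shaders):
    """Scale a measured fps figure by the ratio of shader counts."""
    return base_fps * target_shaders / base_shaders

# 6870 baseline: 1120 shaders, 46fps in BC2 at 1920x1200 (Guru3D review)
print(round(scaled_fps(46, 1120, 1536), 1))  # projected 6970 in BC2, ~63.1
print(round(scaled_fps(28, 1120, 1536), 1))  # projected 6970 in Crysis:WH, ~38.4
print(round(scaled_fps(46, 1120, 960), 1))   # sanity check: 6850, ~39.4 (measured 38)
```

The 6850 line is the sanity check: the projection lands within about 1.5fps of the reviewed number, which is what makes the rest of the extrapolation look plausible.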
So if my math holds up, a 6970 should be about 37% faster than a 6870. That would put it within reach of a 5970 in some titles, give or take a few frames per second. If priced right this will DEFINITELY be my next upgrade!
Let me know what you guys think.
Good one! I also did some speculative math a few days ago on Tom's Hardware, in the article about the possible delay of the 6970, and I've taken the time to rewrite parts of it now that we have the GTX 580.
It goes like this:
1. If the 1536 core number is correct, there's no plausible explanation for a manufacturing problem, given that the 5870 has 1600 cores and probably a bigger die. If AMD (unlike Nvidia) got it right the first time, why would they screw up on this one?
Now, if we take into account the optimizations that Barts brought, where a 960 core GPU is equivalent to something like a 1340 core Cypress (a little slower than the 1440 core HD 5850), and a 1120 core GPU is equivalent to a 1500 core Cypress (slightly slower than the 1600 core HD 5870), then I could say that, on average, AMD has "gained" around 380 cores with its optimizations.
In that case, a 1536 core Cayman should perform roughly like a hypothetical last generation ~1920 core Cypress (1536 + 380). And this makes sense, since a number in that range was circulating around the web for a while. Those rumors just didn't account for the optimizations that let AMD do more with fewer cores.
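Purely as a sketch of that equivalence arithmetic (Python; the Barts-to-Cypress figures are the speculative estimates from the paragraph above, not measured data):

```python
# Speculative Barts -> Cypress core equivalence, taken from the estimates above.
barts_to_cypress = {960: 1340, 1120: 1500}  # Barts cores -> rough Cypress-equivalent cores

# Average "free" cores gained from the Barts optimizations (~380 by these numbers)
gains = [equiv - cores for cores, equiv in barts_to_cypress.items()]
avg_gain = sum(gains) / len(gains)
print(avg_gain)  # 380.0

cayman_cores = 1536
print(cayman_cores + avg_gain)  # Cypress-equivalent of a 1536 core Cayman: 1916.0
```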
Taking this into account, let's make a small (yet again speculative) comparison:
The 5850 has a 160 core difference to the 5870 (1440 -> 1600)
The GTX 470 has a 32 core difference to the GTX 480 (448 -> 480) and again another 32 core difference to the full 512 core GTX 580.
Now, the GTX 480 is faster than the 5870, and the GTX 470 is faster than the 5850, but slower than the 5870.
So, given the core-count differences between these parts, and assuming each GPU maker adjusts GPU and RAM clocks so that every core increase yields a worthwhile gain in performance, one could say that:
- 448 Nvidia cores are faster than 1440 AMD cores, but an increase of 160 cores makes the HD 5870 faster (with correspondingly faster GPU and RAM clocks, as I said above, of course).
If you were to give the 5870 another 160 cores, you would probably get GTX 480 performance or slightly better.
So, say 1600+160= 1760 cores.
But now that Nvidia has released the GTX 580 with the full 512 cores, the gap stays the same. Add another 160 cores to the AMD part: 1600 + 160 + 160 = 1920, which would put them competing on the same level.
But 160 + 160 = 320, and AMD has gained around 380 cores through optimization, so Cayman could end up slightly ahead of Nvidia.
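The core-ladder comparison above can be written out the same way (Python; again purely speculative — the "each 160 AMD cores matches one Nvidia tier" rule and the ~380 core gain are this thread's estimates, nothing official):

```python
# Hypothetical core ladder: each 160 core AMD step roughly matches one
# Nvidia tier (GTX 470 -> GTX 480 -> GTX 580), clocks adjusted accordingly.
hd5870_cores = 1600
gtx480_equiv = hd5870_cores + 160   # ~1760 Cypress cores to match a GTX 480
gtx580_equiv = gtx480_equiv + 160   # ~1920 to match the full 512 core GTX 580

extra_needed = gtx580_equiv - hd5870_cores  # 320 extra cores needed for GTX 580 parity
barts_style_gain = 380                      # estimated optimization gain from above
print(extra_needed, barts_style_gain - extra_needed)  # 320, 60 cores of headroom
```

By this back-of-the-envelope math, Cayman's ~380 gained cores cover the 320 core gap with about 60 to spare, which is where the "possibly slightly better than Nvidia" conclusion comes from.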
Now if you factor in the fact that Nvidia made a few improvements to their own architecture, the math becomes more complicated.
In my opinion, it might all be down to effective GPU and RAM clockspeed.
This is why AMD was probably trying to figure out how Nvidia was going to market the GTX 580, and why very little information is known about the specs of these chips.
In fact, it's no wonder: both companies had plans for 32nm and had to rewrite them and adapt another generation to the same node. And the improvements that can be made are not limitless. Now that AMD knows what the Nvidia card can do, they can finalize the BIOS, GPU and RAM clockspeeds, and make some last minute driver optimizations.
I think this will be closer than we might think.

Edited by tpi2007 - 11/10/10 at 9:45am