Originally Posted by raghu78
Tahiti was the first chip to launch on TSMC 28nm. It must have taped out in late Q1 2011 and it released on January 9, 2012, so AMD's design must have been very conservative, as it was on an immature process. Secondly, the HD 7970 GHz Edition is the same silicon with increased voltage and a different BIOS. It loses efficiency because its voltage is raised to 1.25 V from the 1.175 V of the original HD 7970. The reference HD 7970s easily overclocked to 1100 MHz at 1.175 V and were more power efficient than the HD 7970 GHz Edition.
Pitcairn (212 sq mm) performs very well on perf/watt and perf/sq mm versus its Kepler counterpart GK106 (214 sq mm). Also, with the HD 7790 (Bonaire), AMD fit 30% extra performance within the same 85 W TDP as the HD 7770. The newer power management let AMD deliver that 30% extra performance for just 37 sq mm of extra die area. So Kepler is not an inherently more efficient architecture than GCN.
1920 x 1200 perf/watt (relative):
HD 7970 GHz - 77
HD 7970 - 91
GTX 680 - 95
GTX 660 - 100
MSI GTX 650 Ti boost - 100
HD 7870 - 107
HD 7790 - 107
HD 7850 - 108
By increasing the front end to 4 ACEs, 4 geometry engines, 4 raster engines, and 48 ROPs, AMD can extract better per-SP performance; even with 2048 SPs a 10% improvement is very likely. Add to that 25% more stream processors and some aggressive chip binning, with core clocks of 1 GHz at 1.175-1.2 V, and there is no doubt AMD can increase performance by 30% at the same TDP with a die size of 450 sq mm. On a mature TSMC 28nm process that should not be a problem. Moreover, if Nvidia can get the 7-billion-transistor GK110 running at close to 1 GHz max boost within a 250 W TDP, AMD can definitely fit a 5.5-billion-transistor HD 9970 running at 1 GHz.
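As a sanity check on the arithmetic in that post, here's a quick sketch. The 10% per-SP gain is the poster's assumption, and the 2560-SP count is a hypothetical part implied by "25% more stream processors" on top of 2048, not a confirmed spec:

```python
# Back-of-envelope check of the claimed scaling, using the poster's assumptions.
per_sp_gain = 1.10          # assumed ~10% per-SP gain from the wider front end
sp_scaling = 2560 / 2048    # hypothetical part with 25% more SPs than Tahiti's 2048

projected_speedup = per_sp_gain * sp_scaling
print(f"projected speedup: {projected_speedup:.3f}x")  # ~1.375x, above the claimed +30%
```

So the two factors multiply out to roughly +37%, which is why the post treats +30% as comfortably achievable, clock scaling and binning aside.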
A huge amount of the extra performance with Bonaire comes from the increased memory bandwidth. The memory on the 7790 is clocked 33% higher than on the 7770, and memory was, and is, one of the bottlenecks on these mid-range cards with their very conservative 128-bit memory bus. Yes, the chip is more efficient than the older 7770, but it still has the same number of ROPs and the same memory bus; only the shader count has been increased.
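The 33% figure checks out against the reference spec-sheet memory clocks (4.5 Gbps effective GDDR5 on the HD 7770 vs. 6.0 Gbps on the HD 7790, both on a 128-bit bus):

```python
# Rough bandwidth comparison from the reference cards' spec-sheet numbers.
bus_width_bits = 128

gbps_7770 = 4.5  # HD 7770: 1125 MHz GDDR5, 4.5 Gbps effective
gbps_7790 = 6.0  # HD 7790: 1500 MHz GDDR5, 6.0 Gbps effective

bw_7770 = gbps_7770 * bus_width_bits / 8  # 72 GB/s
bw_7790 = gbps_7790 * bus_width_bits / 8  # 96 GB/s
print(f"{bw_7770} GB/s -> {bw_7790} GB/s, +{bw_7790 / bw_7770 - 1:.0%}")
```

That one-third jump in bandwidth lines up suspiciously well with the performance gap between the two cards, which is the point being made here.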
Also, AMD has made GPUs on immature processes before, and it didn't mean big performance improvements later on the same process; 5870 to 6970 springs to mind. The 5870 was a very early 40nm design, and even the architecturally improved (VLIW4 instead of VLIW5) 6970 didn't really bring much extra performance to the table, while lower-end designs did. You can't just compare low-end designs (especially ones so dependent on memory speeds) to high-end designs with big dies.
And while we're on the subject of comparing: AMD clocks =/= Nvidia clocks. Kepler designs simply run at higher clock speeds relative to their die size. Nvidia did take a pretty big clock-speed hit when you compare the 770 to the 780/Titan: around the same voltage range, a huge drop in clock speeds. The same thing would happen to AMD. If they were able to release GCN chips that clocked as well as comparable Kepler designs, then why haven't we seen Tahiti/Pitcairn designs that boost to 1300 MHz, or close to it, at stock like some 770 designs do?
But anyway, the final point is this: you yourself said that an improvement that big might take a 450mm^2 die. Is there any proof that AMD has had any intention of making a big die? What are the chances of them suddenly abandoning their small-die strategy after only a couple of years? What incentive would AMD have had, maybe two years ago, to start designing a 450mm^2 28nm GPU?
I mean, the problem with predicting that this can and will be done is that some very questionable and unlikely things need to come true first:
- AMD needs to abandon their "sweet spot" strategy
- AMD has to be the first to deliver 30% more perf (which still doesn't beat the Titan, or any OC'd GK110 chip) without adding power consumption
- AMD has to be able to clock their big die as high as their smaller die and do it without extra power consumption
- The differences we see with 7770 -> 7790 can't have much to do with memory bandwidth
I just don't see it. 30% extra performance from a much bigger chip clocked at the same frequency with no extra power consumption? I'm sorry, but do you actually see that as a plausible outcome?
Originally Posted by Redwoodz
I think you need to look at past results a little more objectively. Kepler was not a bigger jump forward than GCN; in fact, it was the complete opposite. Kepler was a move BACKWARD for compute performance. Where are the dollars at? Compute, not gaming. Clearly AMD's current line-up is superior to Nvidia's in compute performance, as the last review I read matched the 7770 to the 680 in compute performance. The game is no longer the same. AMD's console wins, along with HSA and OpenCL performance, will mean a shift from previous releases. There is no doubt AMD's 9000 series will crush the Titan in these areas, and that will be a win for AMD.
I'm talking about pure gaming performance. And yes, Kepler was a bigger leap ahead in pure performance than GCN was.
And when it comes to compute: GeForce GK104 =/= Kepler. Nvidia intentionally restricts FP64 on most of their GPUs; that does not mean Kepler is a bad GPGPU design or something. It just means that Nvidia doesn't sell GeForce GPUs on their compute abilities. Aside from the Titan, that is.
Kepler is a huge leap ahead in GPGPU as well. Why else would GK110 be such a sought-after item in the supercomputer market? Might it be because it completely obliterates Fermi in perf/watt and outright performance?
And to add to this: NV still holds around 90% of the GPGPU market. CUDA isn't going anywhere even if OpenCL gets more support.

Edited by Alatar - 7/21/13 at 9:20am