The price of a chip doesn't scale linearly with its area; cost per mm² grows roughly exponentially as dies get bigger, so your theory doesn't apply: that 502mm² die could easily be triple or quadruple the cost of a GK104.
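A rough way to see why big dies cost disproportionately more: yield drops roughly exponentially with die area, so the cost per *good* die climbs much faster than the area does. A minimal sketch, assuming a simple Poisson yield model, a 300mm wafer, and a made-up defect density of 0.3 defects/cm² (all of these numbers are hypothetical, chosen only to illustrate the shape of the curve):

```python
import math

def good_dies_per_wafer(die_area_mm2, wafer_diam_mm=300.0, defects_per_cm2=0.3):
    """Estimate yielded dies per wafer: gross dies (with an edge-loss
    correction) times a Poisson yield of exp(-D * A)."""
    wafer_area = math.pi * (wafer_diam_mm / 2) ** 2
    # Common gross-die approximation: area ratio minus an edge-loss term
    gross = wafer_area / die_area_mm2 - math.pi * wafer_diam_mm / math.sqrt(2 * die_area_mm2)
    yield_frac = math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)  # area in cm^2
    return gross * yield_frac

# Cost per good die is wafer_cost / good_dies, so for a fixed wafer cost
# the cost ratio between the two chips is just the inverse dies ratio.
gk104 = good_dies_per_wafer(294)   # GK104 is ~294 mm^2
big   = good_dies_per_wafer(502)   # the 502 mm^2 die from the post
print(f"cost per good die ratio: {gk104 / big:.2f}x")
```

With these assumed numbers the ratio lands in the 3x-4x range, which is the point of the post; real defect densities and wafer prices differ, so treat this purely as an illustration of why cost grows superlinearly with die size.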
Originally Posted by Cloudfire777
Let's see here:
GTX 680, TDP 195W
7970GHz, TDP 250W
These two more or less match each other in performance, with the 7970GHz averaging about 5% ahead.
Why is it hard to believe that an Nvidia GPU with 55W more headroom, matching the 7970GHz's TDP, would beat it? Throw Kepler's architectural improvements into the mix and it's not difficult to see, in my eyes...
Because it can't be done. GK104 was a chip designed for rendering, and thus for games, only. It's a flop in compute performance, but on every other front it's a BEAST. Now you're telling me that you plan to use a chip that wasn't optimized for rendering (aka gaming), and you claim that, without moving to a better lithography node, it will be more efficient than Kepler? Really? At 300W TDP I'd bet Titan could be 50% more powerful than Kepler... but at 30% more TDP I don't expect more than 40%, and even that would count as a miracle.
And yes, miracles do not exist. Normally, performance per watt on any integration node stays stable throughout its life. Sure, there are always products that do horribly in that respect (the GTX 480, for instance), but go compare the 40nm node, the 65nm node and all the others, and you'll see that within each node a company's performance/power is roughly constant. And keep in mind that GK110 was never designed for gaming but for compute, so there is no reason it should wreck Kepler in terms of efficiency.
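The constant performance-per-watt argument above is just arithmetic: if efficiency within a node is fixed, performance can only grow as fast as TDP does. A quick sketch using the post's own numbers (195W GTX 680 as the baseline; the +40% at +30% TDP figure is exactly what would need the efficiency "miracle"):

```python
GTX680_TDP = 195.0  # watts, the post's baseline

def perf_gain_at_constant_eff(new_tdp_w):
    """Performance gain implied by constant performance/watt on one node."""
    return new_tdp_w / GTX680_TDP - 1.0

# At a 300W ceiling, constant efficiency allows roughly +54%
print(f"+{perf_gain_at_constant_eff(300) * 100:.0f}%")
# At 30% more TDP (~254W, i.e. the 7970GHz's 250W class),
# constant efficiency allows only ~+30%; getting +40% there would
# require roughly an 8% jump in perf/watt (1.4 / 1.3) on the same node.
print(f"+{perf_gain_at_constant_eff(GTX680_TDP * 1.3) * 100:.0f}%")
```

So the post's skepticism is the claim that perf/watt won't move much within 28nm; anything above the TDP ratio has to come from architectural efficiency, not power headroom.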