Originally Posted by fateswarm
The chip is tiny. Plain and simple. You can't take what is practically a phone/tablet chip and make assumptions about GPUs of the calibre of the Titan Black.
The 750Ti is nowhere near the same league as a phone or tablet chip. It's at least an order of magnitude more powerful in almost every measurable category.
Originally Posted by fateswarm
It's like saying "wow, my mobile plays YouTube, imagine how powerful it'd be if it were only 100 times heavier". It's not that simple.
In a low-stress application such as YouTube playback, yes, you would be correct: a GTX 780Ti and a GTX 550Ti will play YouTube at rates identical to the viewer. That doesn't mean the 780Ti isn't massively more powerful than the 550Ti; it means the test doesn't reveal the difference.
GPU performance generally scales more or less linearly with added stream processors/CUDA cores. GPUs, and the graphics APIs that rely on them, depend almost entirely on massively parallel calculations: every pixel or vertex can be processed independently of the others, so the very nature of the workload makes a larger number of execution units profitable.
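To make that concrete, here's a minimal back-of-the-envelope sketch in Python. All of the per-pixel and per-core numbers are invented for illustration; the point is just that when every pixel is independent, frame time falls almost in direct proportion to the number of execution units.

    # Toy model of an embarrassingly parallel GPU workload (illustrative numbers only).
    # Each pixel is shaded independently, so time ~ total work / execution units.
    def frame_time_ms(pixels, ops_per_pixel, cores, ops_per_core_per_ms):
        total_ops = pixels * ops_per_pixel
        return total_ops / (cores * ops_per_core_per_ms)

    PIXELS = 1920 * 1080          # one 1080p frame
    OPS_PER_PIXEL = 500           # hypothetical shading cost
    OPS_PER_CORE_PER_MS = 1.4e6   # hypothetical per-core throughput

    for cores in (336, 480):      # GTX 460 vs GTX 480 core counts
        t = frame_time_ms(PIXELS, OPS_PER_PIXEL, cores, OPS_PER_CORE_PER_MS)
        print(f"{cores} cores -> {t:.2f} ms/frame")

Run it and the 480-core configuration comes out roughly 1.4x faster, the inverse of the 70% core-count ratio in the example below.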
To take a past example, the GTX 460 has 336 CUDA cores while the GTX 480 has 480 CUDA cores, exactly a 70% ratio. Benchmarks show the 460 is in fact, clock for clock, right around 60-70% of the 480's gaming performance (though other factors such as clock speed and raster units affect this greatly).
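For anyone who wants to check the arithmetic, the same toy Python:

    # Core-count ratio from the example above: GTX 460 vs GTX 480.
    gtx460_cores, gtx480_cores = 336, 480
    ratio = gtx460_cores / gtx480_cores
    # Prints "70%", in line with the ~60-70% clock-for-clock benchmark gap.
    print(f"{ratio:.0%}")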
Originally Posted by fateswarm
And again, consider AMD, whose products are practically within a 1 to 5% margin of NVIDIA's. Do you seriously think they are incapable of making an "incredibly superior" 28nm chip while NVIDIA can? Wake up.
I don't believe prava said anything of the sort.