This has got to be the best-written article I've ever come across on Tom's Hardware ('cause to be honest, most of them suck...). It's not a short article, but give it a read, it's definitely worth it!
Let's take a trip back in time, way back to 2003, when Intel and AMD became locked in a fierce struggle to offer ever more powerful processors. In just a few years, clock speeds rose quickly as a result of that competition, especially with Intel's release of the Pentium 4.
But the clock speed race would soon hit a wall. After riding a wave of sustained clock speed boosts (between 2001 and 2003, the Pentium 4's clock speed doubled from 1.5 to 3 GHz), users now had to settle for the few measly megahertz the chip makers still managed to squeeze out (between 2003 and 2005, clock speeds increased from only 3 to 3.8 GHz).
Even architectures optimized for high clock speeds, like the Prescott, ran afoul of the problem, and for good reason: this time the challenge wasn't simply an industrial one. The chip makers had come up against the laws of physics. Some observers were even prophesying the end of Moore's Law. But that was far from the case. While its original meaning has often been misinterpreted, the real subject of Moore's Law is the number of transistors on a given surface area of silicon. For a long time, the increase in the number of transistors in a CPU was accompanied by a concomitant increase in performance, which no doubt explains the confusion. But then the situation became complicated. CPU architects had come up against the law of diminishing returns: the number of transistors that had to be added to achieve a given gain in performance was growing ever larger, and was headed for a dead end.
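To make the distinction concrete, Moore's Law is usually stated as transistor count doubling roughly every two years, an exponential in time rather than any claim about clock speed or performance. Here is a minimal sketch of that growth curve; the two-year doubling period and the starting count (the original Pentium 4 shipped with roughly 42 million transistors) are illustrative assumptions, not figures from the article:

```python
# Moore's Law as commonly stated: transistor count doubles roughly
# every two years. The period and starting count below are
# illustrative assumptions for the sketch, not measured data.
DOUBLING_PERIOD_YEARS = 2.0

def projected_transistors(initial_count: float, years_elapsed: float) -> float:
    """Project transistor count after a number of years of doubling."""
    return initial_count * 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)

# Starting from ~42 million transistors in 2001 (original Pentium 4):
for years in (0, 2, 4):
    count = projected_transistors(42e6, years)
    print(f"{2001 + years}: ~{count / 1e6:.0f}M transistors")
```

The point of the sketch is that the transistor budget kept compounding on schedule even while clock speeds stalled at around 3.8 GHz; the question the article turns to is what architects could spend those extra transistors on instead.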