Clock speed is a marketing ploy; it does not tell you how much performance a chip has.
It's like the megapixel race in cameras, especially in point-and-shoots.
Intel has been forced to adopt more and more RISC-like strategies, so their current processors, which appear to be CISC to a programmer because they expose a CISC instruction set, are actually RISC-like internally: the frontend cracks complex x86 instructions into simpler micro-operations before execution. Not that it matters to most developers, since the vast majority never do any low-level work anyway; they program in high- or mid-level languages, quite often through API interfaces rather than against the actual hardware.
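To make the idea concrete, here is a toy sketch of that "cracking" step. None of this is Intel's actual decode logic; the instruction and micro-op names are invented purely for illustration:

```python
# Toy illustration of CISC-to-micro-op "cracking". The instruction
# and micro-op formats here are made up, not real x86 internals.

def decode(instruction: str) -> list[str]:
    """Crack one CISC-style instruction into RISC-like micro-ops."""
    op, operands = instruction.split(" ", 1)
    dest, src = [s.strip() for s in operands.split(",")]
    if op == "ADD" and dest.startswith("["):
        # A read-modify-write memory ADD becomes three simple micro-ops:
        # a load, a register-to-register add, and a store.
        return [
            f"uLOAD  tmp0, {dest}",   # load the memory operand
            f"uADD   tmp0, {src}",    # do the actual arithmetic
            f"uSTORE {dest}, tmp0",   # write the result back
        ]
    # Register-to-register instructions map straight to one micro-op.
    return [f"u{op} {dest}, {src}"]

print(decode("ADD [counter], eax"))
# ['uLOAD  tmp0, [counter]', 'uADD   tmp0, eax', 'uSTORE [counter], tmp0']
print(decode("ADD ebx, eax"))
# ['uADD ebx, eax']
```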
The numbers do not mean much for general-purpose use. For the most part, the vast majority of users will never actually see any difference between, say, a Phenom II and a Sandy Bridge. Most users run lightweight tasks that do little computation, tasks that are speed-limited not by the processor but by peripherals. What difference does it make in printing speed when USB is the bottleneck, or when the printer can only digest so much at a time? Or in web browsing, where network latency makes all of the difference? The only exception might be gamers, since they push the envelope and the processor difference may add an FPS or so, or people doing heavy-duty rendering for animation or CAD, which is a small segment of the market as well.
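A quick, hedged illustration of why that is: the sketch below uses time.sleep() as a stand-in for peripheral latency (a network round trip, a printer handshake), and the figures are invented. The point is only that shrinking the CPU slice barely moves the total:

```python
import time

def fetch_page() -> float:
    """Simulate an I/O-bound task: a little computing, a lot of waiting."""
    start = time.perf_counter()

    time.sleep(0.200)  # stand-in for ~200 ms of network or peripheral latency

    cpu_start = time.perf_counter()
    total = sum(i * i for i in range(200_000))  # stand-in for parsing/layout work
    cpu_time = time.perf_counter() - cpu_start

    elapsed = time.perf_counter() - start
    print(f"total: {elapsed * 1000:6.1f} ms, of which CPU work: {cpu_time * 1000:5.1f} ms")
    return elapsed

# Even a CPU twice as fast only halves the small CPU slice;
# the 200 ms of waiting is untouched, so the user sees no difference.
fetch_page()
```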
It is much like cameras: megapixels mean little, since most people are not creating billboard-sized signs out of their pictures. Most people will view their pictures on their computers, on displays limited to roughly 75 DPI. They may print their pictures, for which 300 DPI is entirely sufficient, because higher DPIs are possible but not discernible by the eye. In fact, most people will end up borking their high-megapixel pictures anyway, with bad settings that overcompress their JPEGs, rather than going high quality with RAW (or liberal JPEG settings that do not dither the colours into submission).
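The arithmetic backs this up. A minimal sketch, assuming typical print and monitor sizes (the specific dimensions are just examples):

```python
# How many megapixels do common viewing scenarios actually need?
# The sizes and DPI figures below are typical examples, not hard rules.

def megapixels(width_in: float, height_in: float, dpi: int) -> float:
    """Pixels required to fill a given physical size at a given DPI."""
    return (width_in * dpi) * (height_in * dpi) / 1e6

print(f"6x4 inch print at 300 DPI: {megapixels(6, 4, 300):.1f} MP")          # ~2.2 MP
print(f"21-inch 4:3 monitor at 75 DPI: {megapixels(17, 12.75, 75):.1f} MP")  # ~1.2 MP

# Even a modest 3 MP sensor already exceeds both targets;
# the extra megapixels mostly feed the spec sheet.
```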
This has long been a game in the computer industry, pushing out meaningless numbers in the everlasting contest of overkill.
As for something like the i5: internal microarchitecture improvements that retire more instructions per clock cycle, coupled with more efficient superscalar execution, make up the performance difference. But an i5 is still borked by having to interface with slow DRAM and slow hard drive interfaces, and by the constant need to play nursemaid to all of the other interfaces, like USB, which really eats a lot of CPU.
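The usual back-of-the-envelope model makes the point: throughput is roughly clock speed times instructions per cycle (IPC). The two chips and their figures below are hypothetical, chosen only to show a lower-clocked part winning:

```python
# Back-of-the-envelope throughput model: instructions/sec = clock * IPC.
# Both "chips" here are hypothetical; the numbers are illustrative only.

def throughput_gips(clock_ghz: float, ipc: float) -> float:
    """Billions of instructions retired per second."""
    return clock_ghz * ipc

chips = [
    ("Chip A: 3.4 GHz, IPC 1.5", throughput_gips(3.4, 1.5)),  # older, high-clock design
    ("Chip B: 2.8 GHz, IPC 2.3", throughput_gips(2.8, 2.3)),  # newer, wider superscalar core
]

for name, gips in chips:
    print(f"{name}: ~{gips:.1f} billion instructions/sec")
# Chip B comes out ahead (~6.4 vs ~5.1) despite the lower clock,
# which is why clock speed alone tells you so little.
```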