Originally Posted by EniGma1987
Potential is pretty limitless IMO. We will hit a wall in silicon process nodes soon, which will limit processor design potential due to size and complexity at whichever node we stop at. However, work is being done on new materials that will get the ball rolling again on design progress. I am no processor engineer, obviously, but if you could do as much as you wanted, you could just keep adding branch prediction, prefetch, decode, dispatch, etc. units to keep scaling up instructions per clock (IPC). With process maturity and design experience come higher clock speeds and better efficiency, so we could basically keep going for as long as it takes, until quantum computing takes over completely and x86 is obsolete.
The problem is still x86 legacy. The big performance boosts x86 has seen in recent years have come mostly from new vector instructions. When x86 added FMA (fused multiply-add) instructions a few years ago, it was considered a huge advance. That must have been amusing to the people who designed the POWER ISA, since it has had equivalent instructions for decades.
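A quick way to see what the "fused" part buys: a fused multiply-add computes a*b+c with a single rounding at the end, while a separate multiply and add round twice. Here is a minimal Python sketch that emulates the fused operation with exact rationals (the `fused_mul_add` helper is my own illustration, not a standard-library call); the inputs are chosen so the intermediate rounding of the plain version destroys the answer:

```python
from fractions import Fraction

def fused_mul_add(a, b, c):
    """Emulate FMA: compute a*b + c exactly, then round once to a float."""
    return float(Fraction(a) * Fraction(b) + Fraction(c))

a = b = float(2**27 + 1)        # exactly representable in a 64-bit double
c = -float(2**54 + 2**28)       # minus the *rounded* double product of a*b

plain = a * b + c               # two roundings: the product drops the low "+1"
fused = fused_mul_add(a, b, c)  # one rounding: the low bit survives

print(plain)  # 0.0
print(fused)  # 1.0
```

The exact product is 2**54 + 2**28 + 1, which needs 55 bits of mantissa; the plain version rounds it to 2**54 + 2**28 before the add, so the trailing 1 is lost.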
The biggest x86 problem is, and must always be, that instructions are of variable length. Just finding where one instruction ends and the next begins requires extensive pre-decode logic that POWER, with its fixed-length instructions, simply doesn't need for the same level of performance. Whatever the maximum potential of a given architecture, there will always be architectures with a higher ceiling, and POWER's ceiling is higher than x86's.
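To make the variable-length point concrete, here is a sketch in Python of just the length computation for a tiny, simplified subset of 32-bit x86 (one prefix, NOP, and two ModRM-form opcodes; everything else is omitted and the rules shown are a deliberate simplification). Even deciding where the instruction ends requires walking prefix, opcode, ModRM, SIB, and displacement bytes, while a fixed-length ISA gets instruction boundaries for free:

```python
def x86_length(code, i=0):
    """Byte length of one instruction from a toy 32-bit x86 subset.

    Handles: optional 0x66 prefix, NOP (0x90), MOV r/m32,r32 (0x89),
    ADD r/m32,r32 (0x01) -- just enough to show that length depends on
    prefixes, the opcode, ModRM, SIB, and displacement bytes.
    """
    start = i
    if code[i] == 0x66:            # operand-size prefix
        i += 1
    op = code[i]
    i += 1
    if op == 0x90:                 # NOP: opcode only
        return i - start
    if op in (0x89, 0x01):         # opcode + ModRM [+ SIB] [+ disp]
        modrm = code[i]; i += 1
        mod, rm = modrm >> 6, modrm & 7
        if mod != 3 and rm == 4:   # SIB byte present
            sib = code[i]; i += 1
            if mod == 0 and (sib & 7) == 5:
                i += 4             # disp32, no base register
        elif mod == 0 and rm == 5:
            i += 4                 # disp32, absolute address
        if mod == 1:
            i += 1                 # disp8
        elif mod == 2:
            i += 4                 # disp32
        return i - start
    raise ValueError("opcode outside this toy subset")

def mips_length(code, i=0):
    """Every MIPS instruction is exactly 4 bytes: no decoding needed."""
    return 4

print(x86_length(bytes([0x90])))              # 1
print(x86_length(bytes([0x89, 0xD8])))        # 2: mov eax, ebx
print(x86_length(bytes([0x89, 0x45, 0x08])))  # 3: mov [ebp+8], eax
```

And this subset ignores most of the real encoding space (two-byte opcodes, immediates, the other prefixes); each addition grows the length logic further, which is exactly the complaint above.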
For anyone who doubts that x86 decode is expensive: take the basic x86 integer instructions and implement a decoder for them in Verilog. After that, implement a decoder for something like MIPS (the best documented and most discussed architecture on the planet, thanks to its popularity in college courses). Once you've done this, you'll realize two things. First, logic design is far different from what you thought (as is almost anything you've invested thousands of hours learning). Second, the x86 decoder is HUGE compared to the RISC decoder, and it doesn't get better as you add more instructions.

Intel realized the x86 problem years ago but picked the wrong solution (VLIW) and stuck with x86 out of inertia, even though they owned the Alpha ISA and sold StrongARM after making it, at the time, the best ARM design on the market. AMD breathed life into x86 because they had no other choice: they didn't have the money to compete with the big boys, so they stuck it out with x86 and forced Intel to go along (or lose big). AMD now seems to be betting on ARM. They want to use ARM instead of x86 despite the cost of building a new microarchitecture. They must believe something is wrong with x86, or they wouldn't invest so much in another ISA.
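For contrast, decoding a MIPS word is little more than fixed bit slices, which is why the Verilog version stays tiny. A minimal sketch in Python (field names follow the standard MIPS R-type layout; the example word is `add $s0, $s1, $s2`):

```python
def decode_mips(word):
    """Split a 32-bit MIPS instruction into its fixed-position fields."""
    return {
        "opcode": (word >> 26) & 0x3F,  # bits 31..26
        "rs":     (word >> 21) & 0x1F,  # bits 25..21
        "rt":     (word >> 16) & 0x1F,  # bits 20..16
        "rd":     (word >> 11) & 0x1F,  # bits 15..11
        "shamt":  (word >> 6)  & 0x1F,  # bits 10..6
        "funct":  word & 0x3F,          # bits 5..0
    }

# add $s0, $s1, $s2  ->  R-type: opcode 0, funct 0x20
f = decode_mips(0x02328020)
print(f["opcode"], f["rs"], f["rt"], f["rd"], f["funct"])  # 0 17 18 16 32
```

In hardware those slices are just wires, no logic at all; the only real gates go into the opcode/funct lookup. That is the asymmetry the Verilog exercise makes obvious.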