Eh, I see it as bound to happen eventually, especially if we hit the wall on how small our chips can go before we have another semiconductor material to use. At that point, I think software developers would use more assembly and hardware engineers more hand design to wring every tiny bit of performance out of a given design.
The original P4 (Willamette) was a pile of junk unless its new features were used; Northwood fixed its problems and competed well... It was Prescott that proved the P4 architecture was crap.
There technically is a way to go back: they could have used the CMT design on an Athlon64-style core (which, despite what Chew* was saying, could be optimized further with a better IMC, better instruction decoding, and a slightly different pipeline, for example) rather than the entirely new architecture that BD is. That said, most of the issues seem to come from the cache and front-end, both of which could quite realistically be fixed in Piledriver.
I personally think AMD bit off too much at once. They went for the Athlon64 approach of hitting two major new milestones in one chip (the IMC and 64-bit for the Athlon64; CMT and the new architecture for BD), but this time it was a bit too much. CMT on a slightly more optimized 32nm Stars core first, then a new architecture, would have worked out better IMO.
But then again, if Piledriver solves BD's problems and is a good chip, not many people will think about BD. I mean, how many people point out the problems with GF100 anymore?
To be fair, the Pentium 4 was good until the Athlon64 came out, and Northwood competed with what it was meant to (the Athlon XP). Prescott was the failure, the one where Intel paid off Dell, HP, etc. (and that was mostly Socket 775).
And no, AMD's marketing department realized Zambezi is a sucky chip and tried to market the hell out of it. It's not a design with inherent problems in the entire idea of it (e.g. Netburst, not that anyone knew until Prescott pushed clock speeds past 3GHz), at least not that we know of; so far it seems like just bad execution.
For example, no one said the entire Fermi architecture was bad just because GF100 is so leaky, and GF110 proved it isn't.
Only in the enthusiast market, if that.
You buy an AMD laptop, you'll probably get an AMD GPU.
It's why Intel has the most GPUs overall: nearly everyone uses IGPs.
Theoretically it is the future, and so far the execution seems to be the issue, not the idea itself.
I wouldn't be surprised if AMD viewed Piledriver as its main server/enthusiast CPU and just released BD as-is to get product out before 2012 and to get more engineers on Piledriver.
It does, under certain circumstances.
In other news, the 286, 386, P5, P6, Netburst and K5 (all entirely new architectures, like BD) had the exact same problems at first: either their new technology wasn't utilized (386, P5, P6, Netburst), their clock speeds were too low to begin with (286, 386, P5, and Netburst considering its low IPC), and/or they used too much power to really be worth it (P5, Netburst).
All of them had chips come out later that fixed the problems (286-12, 386SX-33, Pentium-75/90/100, Pentium II, Northwood), and I can see the same thing happening with Piledriver.
That said, I'm not waiting until Q1 to find out; I'm upgrading to Socket 1155 as soon as I can.
Once you include IGPs (the biggest market), that number changes dramatically: most AMD laptops sell with an AMD IGP, and quite a few Intel laptops have AMD GPUs in them because AMD's low end is updated more often than nVidia's. (For example, I have an HD545v in my Samsung R540.)