Oh, I know that not having to go through an API like DirectX or OpenGL will add to performance significantly. At the same time, if you look at a modern game on consoles (say, Skyrim - one of the most popular games on PCs/consoles to date), you currently need something along the lines of a 5770 to achieve console-level performance on a PC - so the Xbox's GPU (which is a tiny bit weaker than an X1900 XT, I believe? Same architecture with a few tweaks, but fewer transistors) is effectively delivering what a 5770 delivers on PC.
Here are some benchmarks with the X1900 XT in them, on PC: Half-Life 2 and Far Cry.
You will notice that in both of the above benchmarks it is within about 5% of an Nvidia 7800 GTX (512MB) - i.e. if we adopt a 5% margin of error (which is not unreasonable in one-off tests), we can call them approximately identical in performance.
Here is a benchmark showing multiple graphics cards. Not the X1900, but its bigger brother, the X1950:
You will see something around a 50-80% increase in score going from the X1950 to the Nvidia 8800 GTX - a card practically identical to the 9800 series that would later replace it.
Here is a benchmark including a 9800 GT and the 5670:
You will notice a pretty sizeable difference in performance between the 9800 GT and the 5670 - somewhere between 5% and 20% throughout the review.
You will also notice the 5750 - a card significantly slower than a 5770 - sitting at the top of the list, regularly achieving somewhere between 25% and 100% higher framerates (depending on whether you look at minimum or average results).
The 5770 is not significantly more expensive than the 5670, and while I know that a console will get more out of it than a PC ever could, that does not change the fact that, for a slight increase in power consumption and cost, they would approximately double the graphical power - which would lengthen the console's lifespan significantly. If the performance required of a chip increases at anything resembling Moore's Law's pace, doubling the power will lengthen the console's life by 1.5-2 years at the very least (see the rough sketch below) - significantly offsetting research and development costs, meaning the end user ought to get a similar price to what they would pay were it using the 5670, right?
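To show where that 1.5-2 year figure comes from, here is the back-of-the-envelope maths written out as a tiny script. The doubling period and the 2x headroom are my own assumptions, not measured numbers:

    import math

    # Assumption: the GPU power needed to stay "current" doubles roughly
    # every 18-24 months (a Moore's-Law-like pace).
    # Assumption: a 5770 gives roughly 2x the graphical power of a 5670.
    headroom_factor = 2.0

    for doubling_period_months in (18, 24):
        extra_life_years = math.log2(headroom_factor) * doubling_period_months / 12
        print(f"doubling every {doubling_period_months} months -> "
              f"~{extra_life_years:.1f} extra years of relevance")

    # Prints ~1.5 and ~2.0 - hence "1.5-2 years, at the very least".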
Even if that is not the case, can you see how this is *not* a huge increase over the current graphics in the Xbox 360? It's been nearly a decade since it came out, and will be over a decade by the time its successor hits the shelves. Graphical performance ought to increase by more than double in that time. The fact here is that if it uses something akin to a 6670, it will not.
Pre-posting edit: My mistake. I thought the 6670 was just the 5670 rebadged, in the same way the 6770 is a rebadged 5770. It is not. Here is a comparison:
Here you will see that the 6670 is about 10-20% faster than the 5670. That makes it a slightly better upgrade than I thought it was, but the majority of my point still stands.
The 6670 is not powerful enough to provide a large increase in graphics performance for the length of time required of a next-gen console. Even with the boost that not having to go through a fat API gives, it might perform about as well as a PC card with four times its power (much as the X1900 is performing like a 5770 now - a card only two to three times as powerful).
The 6870 is approximately three times as powerful as the 6670 (give or take - two 5770s in CrossFire are about the same as a 6870, and a single 5770 is about one and a half times the performance of a 6670).
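Chaining those rough ratios together, plus my (generous) guess at the no-API advantage - again, these are my ballpark figures rather than measured numbers:

    # 5770 ~= 1.5x a 6670; 6870 ~= 5770 CrossFire ~= 2x a single 5770
    ratio_5770_to_6670 = 1.5
    ratio_6870_to_5770 = 2.0
    ratio_6870_to_6670 = ratio_5770_to_6670 * ratio_6870_to_5770
    print(ratio_6870_to_6670)  # ~3.0: a 6870 is roughly three 6670s

    # My generous guess: a console squeezes ~4x out of a 6670 versus a PC
    console_effective_factor = 4.0
    print(console_effective_factor / ratio_6870_to_6670)  # ~1.33: barely past a stock 6870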
That means that, using it, at best we'd be looking at an overclocked 6870's level of performance. Seems fine now, right? Except in five years' time the 6870 will be about as powerful as integrated graphics (if APUs continue their upward trend). Tablets will be almost as graphically impressive, and phones will be performing better thanks to having fewer pixels to push.
I cannot imagine that is a situation any console manufacturer wants to be in, which is why I cannot imagine them doing this.