What part of the 50% performance increase is even in question? The architecture is a known and tested one (GCN), and the exact compute power available has been given. The scalability of the architecture is also a known quantity (from chips much smaller to chips much larger). Similarly, the CPU, while not a known quantity itself, is known to be the same core as the PS4's, running at a higher clockspeed.
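To put a rough number on that 50%, here's a quick back-of-the-envelope sketch. The CU counts (18 vs 12) and the ~800 MHz GPU clock are the commonly reported figures, not anything I can verify, so treat them as assumptions:

```python
# Rough theoretical FLOPS comparison, assuming the reported GCN CU counts
# and an ~800 MHz GPU clock for both consoles (assumptions, not confirmed specs).
def gcn_tflops(compute_units, clock_mhz):
    # Each GCN CU has 64 shader ALUs, each doing one FMA (2 flops) per cycle.
    return compute_units * 64 * 2 * clock_mhz * 1e6 / 1e12

ps4 = gcn_tflops(18, 800)   # ~1.84 TFLOPS
xb1 = gcn_tflops(12, 800)   # ~1.23 TFLOPS
print(ps4, xb1, ps4 / xb1)  # ratio comes out to ~1.5, i.e. the 50% figure
```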
The only real unknowns are the memory subsystems (8GB DDR3 2133 plus eDRAM vs. 8GB GDDR5 5500) and the uncore of the CPU.
Since most people know NOTHING about the real specs of GDDR5, here is a link to some Hynix datasheets from 2009 (the specs have undoubtedly improved in several areas since then):
Most people here don't seem to realize it, but the GDDR5 spec is almost entirely based on the DDR3 spec (as in, large parts of the GDDR5 spec were copy/pasted from the DDR3 spec). Commercially available DDR3 2133 has a CAS between 9 and 11; the GDDR5 5500 from that datasheet has a CAS of 16 to 18. At worst, that means the CAS is twice as many cycles. Even if we assume the overall cycle-count latency of GDDR5 is twice that of DDR3, the real (wall-clock) latency of the GDDR5 comes out LESS, because the GDDR5 takes 2x the cycles but runs at 2.6x the transfer rate, which makes the TIME used lower overall (that's 9 / 2133 vs 18 / 5500 if you want to crunch the math for CAS; see the snippet below, and the rest of the latencies are on you to look up). When a CPU sends a fetch request to RAM, it doesn't care how many RAM cycles the access takes; it only cares whether the data is delivered before x nanoseconds have passed and it has to stall the pipeline to wait.
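If you want that CAS math spelled out, here's a tiny snippet using the same shortcut as above (CAS cycles divided by effective transfer rate). It's a rough illustration only; the actual command clocks run at a fraction of the data rate, so don't read these as datasheet-accurate latencies:

```python
# CAS latency in nanoseconds, using the post's shortcut of dividing CAS cycles
# by the effective transfer rate (MT/s). Rough illustration, not datasheet math.
def cas_time_ns(cas_cycles, transfer_rate_mts):
    return cas_cycles / transfer_rate_mts * 1000  # cycles / (MT/s) -> ns

print(cas_time_ns(9, 2133))    # DDR3 2133, CAS 9   -> ~4.2 ns
print(cas_time_ns(18, 5500))   # GDDR5 5500, CAS 18 -> ~3.3 ns
```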
Now, the latency comparison I wrote above isn't quite true: overall, GDDR5 still lags DDR3 in real time, though design and manufacturing-process improvements keep narrowing the gap. Why don't CPU manufacturers use GDDR then? Because GDDR5 is more expensive, and DDR4 is almost here, bringing improvements to the general memory architecture and, more importantly, reducing power consumption. We may still see some unified architectures from AMD that use GDDR5 though, as it would probably speed up their HSA plans.
If Sony added a custom L3, then I would guess that the impact of the GDDR5 is zero. If not, the impact is still negligible, as games aren't extremely sensitive to RAM latency anyway (on either the GPU or CPU side).
What I find interesting is MS's interest in eDRAM. They loved it enough to spend 1.6 Billion transistors on it. I don't doubt that they were forced to reduce the GPU size in order to keep costs and heat dissipation down.
I suspect that general programs won't have access to much of the eDRAM though, due to Kinect (and here's why). MS has been pushing to get input latency on tablets down into the single-digit-ms range. With Kinect, response time is also critical (anyone who has used it can tell you that the lag on the current Kinect is extremely noticeable). What are the main bottlenecks? Kinect gets input from the cameras, which goes to the northbridge, then to the CPU, which stores it in main memory. After this, it is pulled back out (a piece at a time) and sent to the CPU for processing by the facial recognition software (I guess it's "body recognition" or something), and the result is then applied to the next frame. (camera -> northbridge -> CPU -> RAM -> CPU -> next frame)
Of all these parts, which can be reduced? The biggest reduction in response time comes from cutting the memory access times (eDRAM is perfect here). Next, you have Kinect bypass the northbridge and deposit the frames directly into eDRAM (a small ARM core would be great here, and they probably already have other ARM cores handling the DRM hypervisor anyway). Since the frames are now HD, it's probably better to do the facial recognition on the GPU. Now the trip is camera -> eDRAM -> GPU -> eDRAM -> CPU (to process inputs) -> next frame (or camera -> eDRAM -> CPU -> next frame), which should be a big improvement in response time. A toy comparison of the two paths is sketched below.
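For illustration only, here's a toy path-cost sketch of the two routes. Every per-hop number is a made-up placeholder (I have no real figures for either path); the only takeaway is the structural difference between them:

```python
# Toy comparison of the two Kinect data paths described above. All per-hop
# costs are arbitrary placeholders (NOT measured numbers); the point is only
# that the eDRAM path has fewer / cheaper memory hops than the main-RAM path.
hop_cost = {                      # arbitrary latency units, purely illustrative
    "camera->northbridge": 3,
    "northbridge->cpu":    2,
    "cpu->ram":            4,     # write to main memory
    "ram->cpu":            4,     # pull it back out for processing
    "camera->edram":       2,     # hypothetical direct deposit into eDRAM
    "edram->gpu":          1,
    "gpu->edram":          1,
    "edram->cpu":          1,
}

def total(path):
    return sum(hop_cost[hop] for hop in path)

old_path = ["camera->northbridge", "northbridge->cpu", "cpu->ram", "ram->cpu"]
new_path = ["camera->edram", "edram->gpu", "gpu->edram", "edram->cpu"]

print(total(old_path), total(new_path))  # old path costs more with these placeholders
```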
For what it's worth, John Carmack approves of the PS4 design (and, off-topic: he seems to have been bitten by the Lisp bug).
Are you suggesting that the XBONE is almost as fast as the PS4?!!