Seriously, it's so annoying seeing everyone speculate about Piledriver and desktop Trinity. This article is about the 17W Trinity for netbooks/ultraportables.
Originally Posted by hajile
Intel dropped TDP by dropping clocks. Why can't AMD?
This isn't Bobcat-based (28nm Bobcats are DOA). It also isn't 28nm (all Bulldozer and Phenom chips are full-node parts, so the next shrink is 22nm in a year or two).
If Piledriver can fix the problems with Bulldozer, then this is very possible. Fixing branch prediction (just bringing it up to par with Phenom II) should net a 5-15% performance improvement. Widening the decoder (probably 5-6 units instead of 4) would net another 5-20% (depending on how instruction-starved the core is). A more mature process could net a 5-30% increase by reducing cache latency, and could net even more through higher turbo clocks. I'm not that optimistic, but I am interested.
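For a rough sense of how those ranges stack up, here's a quick back-of-envelope calc (my own compounding of the guesses above, nothing official):

```python
# Compound the low and high ends of each speculated Piledriver
# gain. Purely illustrative numbers from the paragraph above.
gains = {
    "branch prediction": (0.05, 0.15),
    "wider decode":      (0.05, 0.20),
    "process maturity":  (0.05, 0.30),
}

low = high = 1.0
for name, (lo, hi) in gains.items():
    low  *= 1 + lo
    high *= 1 + hi

print(f"compounded gain: {low - 1:.0%} to {high - 1:.0%}")
# -> compounded gain: 16% to 79%
```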
A more efficient die layout that borrows some cues from GCN, combined with further 32nm full-node optimizations, could net huge power savings for the GPU. Switching to VLIW4 gives a 10+% improvement per shader (see the sketch below). If the fabs allow a higher clock speed, then keeping the 400 shaders could be possible.
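To put the VLIW4 figure in concrete terms (using the ~10% per-shader number above, which is my own reading of the Cayman-era argument that game shaders rarely fill all five VLIW5 slots):

```python
# Effective throughput if 400 shaders each gain ~10% from the
# VLIW5 -> VLIW4 repack. The 10% is the speculated figure above.
vliw5_equivalent = 400 * 1.10
print(f"400 VLIW4 SPs ~ {vliw5_equivalent:.0f} VLIW5-equivalent SPs")
# -> 400 VLIW4 SPs ~ 440 VLIW5-equivalent SPs
```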
So let's just assume it's a tweaked Llano that fits in 17W, instead of comparing it to Brazos/Bobcat.
First, Llano is 32nm and Trinity is 32nm. The current 35W Llano uses a 400 SP GPU. A 17W Trinity might be around 240 SPs at best. There's no way you're squeezing 400 SPs into it; at minimum you'd need a 50% GPU gain over the 35W Llano, and that's assuming a 20% gain from faster RAM, 10% from VLIW4, and 30% of magic pulled from thin air.
Realistically, the CPU gain over Llano will be pretty small, not that anyone cares. What you guys are assuming is that Trinity will have a GPU with over twice the performance per watt on the same process. That's just ridiculous.
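For anyone who wants the arithmetic behind that, here's a back-of-envelope sketch (my numbers, treating TDP as a rough proxy for GPU power budget, which it isn't exactly since TDP covers the CPU too):

```python
# Sanity check on the "twice the perf/W" point: what would a
# 17W part need to match a 35W Llano's 400 SP GPU outright?
llano_tdp, trinity_tdp = 35, 17
print(f"required perf/W gain: {llano_tdp / trinity_tdp:.2f}x")  # 2.06x

# Even compounding every speculated gain falls short of that:
speculated = 1.20 * 1.10 * 1.30   # faster RAM * VLIW4 * "magic"
print(f"speculated gains compounded: {speculated:.2f}x")        # 1.72x
```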