Temperature has almost nothing to do with dissipated power. Case in point: when a stock HSF is upgraded to an aftermarket cooler, the chip's temperature drops significantly, but the dissipated power does not change appreciably unless the user also changes voltage, clocks, or load. Furthermore, the temperatures reported by different chips can't be compared directly, since they often derive their readings by different methods. Further still, shrinking a chip's physical die area reduces the contact area available to conduct heat into the HSF, raising its thermal resistance.
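The cooler point can be made concrete with the standard steady-state model T_junction = T_ambient + P × θ_JA, where θ_JA is the total die-to-ambient thermal resistance in °C/W. A rough sketch (all numbers below are illustrative, not measured values for any real cooler):

```python
# Steady-state junction-temperature model: T_j = T_ambient + P * theta_ja.
# theta_ja is the die-to-ambient thermal resistance in degrees C per watt.
# All figures are hypothetical, chosen only to illustrate the relationship.

def junction_temp(power_w, theta_ja, ambient_c=25.0):
    """Junction temperature for a given power and thermal resistance."""
    return ambient_c + power_w * theta_ja

power = 95.0                              # same dissipated power in both cases
stock = junction_temp(power, 0.60)        # hypothetical stock HSF: 0.60 C/W
aftermarket = junction_temp(power, 0.35)  # hypothetical tower cooler: 0.35 C/W

print(f"stock cooler: {stock:.1f} C")        # 82.0 C
print(f"aftermarket:  {aftermarket:.1f} C")  # 58.2 C
```

Same 95 W in both cases; only θ_JA changed, so the temperature moved while the power didn't.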
Even Intel's Sandy Bridge (32 nm process) produced, and still produces, greater compute efficiency than modern AMD 32 nm offerings. Sandy Bridge was a very mature, well-refined design right out of the gate, while Bulldozer was, to put it kindly, extraordinarily unpolished at release. The AMD architecture still incurs much higher "miss" rates internally: wasted cycles, high cache latencies, and poorer scheduling and branch prediction.
That said, an FX-8320 at ~$150 could undoubtedly be "tweaked" by an end user (something Intel doesn't offer until you're over $200 in CPU) with a lower voltage and a mild under-clock to achieve compute efficiency competitive with Intel chips at stock settings.
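Why undervolting plus a mild under-clock improves perf/watt falls out of the CMOS dynamic-power relation P ∝ C·V²·f: power scales with the square of voltage but performance only scales (at best) linearly with frequency. A back-of-envelope sketch, using hypothetical stock and tweaked operating points (not validated FX-8320 settings):

```python
# Rough CMOS dynamic-power scaling: P is proportional to V^2 * f.
# Stock point (1.35 V @ 3.5 GHz) and tweaked point (1.20 V @ 3.2 GHz)
# are hypothetical; only the ratios matter for the argument.

def relative_power(v, f, v0=1.35, f0=3.5):
    """Power relative to stock, assuming P ∝ V^2 * f."""
    return (v / v0) ** 2 * (f / f0)

def relative_efficiency(v, f, v0=1.35, f0=3.5):
    """Perf-per-watt relative to stock, assuming performance ∝ f."""
    return (f / f0) / relative_power(v, f, v0, f0)

p = relative_power(1.20, 3.2)
e = relative_efficiency(1.20, 3.2)
print(f"power:     {p:.0%} of stock")   # 72% of stock
print(f"perf/watt: {e:.2f}x stock")     # 1.27x stock
```

An ~11% voltage drop plus a ~9% clock drop cuts power by roughly a quarter while giving up far less performance, which is exactly the "competitive efficiency through tweaking" claim above.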