Originally Posted by Fyrwulf
Processors are composed of transistors. Transistors function by inducing electrons to flow in directions they don't want to. Using energy to force such an action is the very definition of work.
No, that is not how transistors work. At all. They don't "force electrons to flow in directions they don't want to". You apply a voltage to the gate of a FET, and its channel resistance drops from near infinite to near zero: it turns from an insulator into a near-perfect conductor. The electrons flow not because they're being forced to, but because there is now almost no resistance where there once was.
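To put that switch behavior in numbers, here's a minimal Python sketch treating the FET channel as a two-state, voltage-controlled resistor. The threshold voltage and resistance values are assumptions for illustration, not from any real datasheet:

# Toy model of an enhancement-mode FET as a voltage-controlled resistor.
# All numbers are illustrative assumptions, not datasheet values.

V_THRESHOLD = 2.0  # gate voltage needed to turn on (V), assumed
R_OFF = 1e12       # channel resistance when off (ohms): effectively an insulator
R_ON = 0.01        # channel resistance when on (ohms): near-perfect conductor

def channel_resistance(v_gate: float) -> float:
    """Crude two-state model of drain-source resistance vs. gate voltage."""
    return R_ON if v_gate >= V_THRESHOLD else R_OFF

V_SUPPLY = 5.0  # drain supply voltage (V)
for v_gate in (0.0, 5.0):
    r = channel_resistance(v_gate)
    i = V_SUPPLY / r  # Ohm's law: current flows because R dropped, not because of added force
    print(f"Vgate={v_gate:.1f} V -> R={r:.2e} ohm, I={i:.2e} A")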
You can hook a capacitor up to the gate of a FET, charge it up, and that FET will stay conducting more or less forever, as long as the capacitor has almost no self-discharge. If the transistor were doing work to make the electrons flow, that would not be possible.
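A rough back-of-the-envelope shows how long "more or less forever" can be; the capacitance, threshold, and leakage figures below are assumed purely for illustration:

# How long does a charged gate capacitor keep a FET conducting?
# All values are assumptions; real gate/capacitor leakage varies wildly by device.

C_GATE = 1e-6       # capacitor on the gate (F)
V_INITIAL = 5.0     # initial gate voltage (V)
V_THRESHOLD = 2.0   # FET stops conducting below this (V), assumed
I_LEAK = 1e-12      # combined gate + capacitor leakage current (A), assumed

# Q = C * V; the gate draws essentially no steady current, so only
# leakage drains the capacitor.
charge_to_lose = C_GATE * (V_INITIAL - V_THRESHOLD)
hold_time_s = charge_to_lose / I_LEAK

print(f"FET stays on for ~{hold_time_s:.1e} s (~{hold_time_s / 86400:.0f} days)")
# ~3e6 s, about 35 days, with zero energy delivered through the gate the whole time.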
Originally Posted by looncraz
Not 100%, actually. I did a *lot* of real-world testing back in the Pentium 3 era with peltiers, heatsinks, heat sources, power consumption, and heat output. CPUs were quite efficient at producing heat from their input power, but were not nearly as efficient as a high-impedance coil of wire.
It has been many, many years (obviously), but I think the 'best' number I had for heat generation from a CPU was about 70%: overclocked, overvolted, and under multiple types of loads concurrently to fully tax the CPU.
The rest of the energy, I feel, went into outputs or out the grounds or was transformed into another form of radiation that did not interact with the CPU or heatsink to produce heat.
There are a few obvious flaws in your testing, including one glaringly huge one. A lot of the heat generated by the processor will be conducted through the pins, into the CPU socket, and into the huge plane of copper that is the motherboard. Another major one is that it's highly unlikely you measured power consumption after the VRM. To do this accurately, you would have to isolate the entire CPU from the motherboard using some kind of crazy high-speed data cable and encase it in a calorimeter or similar device. Then you'd have to measure the VRM output current and voltage (~1.1 V @ 60-90 A).
Unless you did all of that, your testing is highly flawed.
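As a sanity check on those figures, the power actually delivered to the CPU at the VRM output is just P = V × I:

# Power delivered at the VRM output, using the rail figures quoted above.
V_CORE = 1.1             # VRM output voltage (V)
for i_core in (60, 90):  # VRM output current range (A)
    p = V_CORE * i_core  # P = V * I
    print(f"{V_CORE} V @ {i_core} A -> {p:.0f} W delivered to the CPU")
# 66-99 W. Measuring at the wall instead also captures VRM and PSU losses,
# which inflate the "consumed" number without heating the CPU itself.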
Originally Posted by looncraz
A CPU can 'consume' 100W and put out only 50W of heat, which is precisely why we have to create TDP figures - and why TDP figures from AMD and Intel aren't comparable. AMD CPUs 'leak' more of their current to the ground, which doesn't generate much heat. Intel CPUs put that current through more circuitry, generating more heat, but accomplishing more at the same time - making for more energy efficiency in terms of processing power.
You have a severe misunderstanding of basic electrical theory. Any current that leaks to ground WILL generate heat within the chip! Semiconductor leakage behaves like a virtual resistor to ground, and any current flowing through that leakage path dissipates its power as heat inside the die.
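A quick worked example makes the point; the leakage current below is assumed purely for illustration:

# Leakage current dissipates power inside the die just like any resistor.
# Example numbers are assumptions, not measurements.

V_RAIL = 1.1   # core voltage (V)
I_LEAK = 20.0  # total leakage current to ground (A), assumed

# The leakage path acts as a virtual resistor from the rail to ground:
r_virtual = V_RAIL / I_LEAK  # R = V / I
p_leak = V_RAIL * I_LEAK     # P = V * I, dissipated entirely on-die as heat

print(f"Virtual resistance: {r_virtual * 1000:.0f} milliohm")
print(f"Heat from leakage alone: {p_leak:.1f} W")
# 22 W of heat from leakage alone. Leaked current is not "free" current.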
What throws your whole argument out is that AMD literally defines TDP as the maximum power the processor can draw, and states that the thermal solution should be capable of dissipating all of that power.
Originally Posted by AMD
“TDP. Thermal Design Power. The thermal design power is the maximum power a processor can draw
for a thermally significant period while running commercially useful software. The constraining
conditions for TDP are specified in the notes in the thermal and power tables.”
- TDP is measured under the conditions of all cores operating at CPU COF, Tcase Max, and VDD at
the voltage requested by the processor. TDP includes all power dissipated on-die from VDD,
VDDNB, VDDIO, VLDT, VTT and VDDA.
- The processor thermal solution should be designed to accommodate thermal design power (TDP) at […]
AMD themselves literally say that they measure TDP by measuring all the power consumed by the various core voltage supply rails. This means that, as far as AMD is concerned, all the power consumed by the CPU turns into heat.
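In code form, that definition is just a sum over the rails named in AMD's quote; the per-rail wattages below are invented for illustration, not real measurements:

# Per AMD's definition above, TDP covers power dissipated on-die from every supply rail.
# Rail names come from the quoted datasheet text; wattages are made-up examples.

rail_power_w = {
    "VDD":   80.0,  # CPU cores
    "VDDNB":  9.0,  # northbridge
    "VDDIO":  2.5,  # memory I/O
    "VLDT":   1.5,  # HyperTransport links
    "VTT":    1.0,  # termination
    "VDDA":   1.0,  # analog/PLL
}

tdp_w = sum(rail_power_w.values())
print(f"TDP = sum of all on-die rail power = {tdp_w:.1f} W")
# Every watt drawn across these rails ends up as heat the cooler must remove.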