By Margery Conner
[Source: EDN (Electronics Design, Strategy, News)]
Most of the publicity in the last day or two about the industry initiative Google and Intel are spearheading to increase computing power efficiency has stated that the goal is at least 90% efficiency for power supplies, but there hasn’t been much (any?) explanation of how that goal is to be met. Well, at last year’s IDF, Google Lab engineers presented their simple plan for increasing efficiency. This is the gist of it:
Current computers and servers descended from the original IBM PC of 1981 carry power supply specifications that mandate multiple output voltages: +/-12V, 5V, and 3.3V, amounting to four distinct rails. All you really need is a single 12V supply; voltage regulator modules (VRMs) sprinkled around the motherboard can then create the lower voltages as needed.
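To see why consolidating onto a 12V bus can win even though it adds a VRM conversion stage, here is a rough model. All of the numbers in it (the per-rail loads, the 70% legacy supply efficiency, the 88% single-rail supply efficiency, the 93% VRM efficiency) are my own illustrative assumptions, not figures from Google's presentation:

```python
# Hypothetical rail loads for a motherboard, in watts (illustrative only).
RAIL_LOADS_W = {"+12V": 120.0, "-12V": 0.5, "+5V": 25.0, "+3.3V": 15.0}

def input_power_legacy(loads, supply_eff):
    """Legacy design: one multi-output supply converts AC to every rail."""
    return sum(loads.values()) / supply_eff

def input_power_single_rail(loads, supply_eff, vrm_eff):
    """Proposed design: AC -> 12V supply; on-board VRMs derive the other rails."""
    p_12v = loads["+12V"]
    p_other = sum(w for rail, w in loads.items() if rail != "+12V")
    # Non-12V loads pass through a VRM stage before reaching the 12V supply.
    return (p_12v + p_other / vrm_eff) / supply_eff

# Assumed efficiencies: ~70% for a typical multi-output supply,
# ~88% for a single-output 12V supply, ~93% for buck VRMs.
legacy = input_power_legacy(RAIL_LOADS_W, 0.70)
single = input_power_single_rail(RAIL_LOADS_W, 0.88, 0.93)
print(f"legacy input power:      {legacy:.1f} W")
print(f"single-rail input power: {single:.1f} W")
```

The point of the sketch: a single well-optimized AC-to-12V conversion followed by efficient point-of-load bucks beats one supply trying to regulate four rails at once, even before counting the cross-load regulation compromises a multi-output supply must make.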
Google claims, and it’s easily believable, that this change increases power efficiency to 85% at virtually no cost. By using “higher-quality components,” the efficiency goes to 90%.
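The 85% and 90% figures come from Google; to get a feel for what they mean in energy terms, here is a quick calculation against an assumed 70% baseline and an assumed 250W continuous load (both numbers are mine, for illustration, since the article doesn't give a baseline or a load):

```python
# Energy wasted per year as heat by a power supply at a given efficiency.
LOAD_W = 250.0            # assumed continuous DC load delivered to the system
HOURS_PER_YEAR = 24 * 365

def wasted_kwh_per_year(load_w, efficiency):
    # The supply draws load/efficiency from the wall; the excess is waste heat.
    input_w = load_w / efficiency
    return (input_w - load_w) * HOURS_PER_YEAR / 1000.0

for eff in (0.70, 0.85, 0.90):
    print(f"{eff:.0%} efficient: {wasted_kwh_per_year(LOAD_W, eff):.0f} kWh wasted/year")
```

Under these assumptions, going from 70% to 85% cuts the annual waste by more than half per machine, which is why the "virtually no cost" claim matters so much at data-center scale.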
Google says the change is at “virtually no cost,” but I’d guess that it’s actually a cost savings. Look at the picture of a typical 4-in-1 power supply here:
…versus a single-voltage power supply here (this is the 85%-efficiency version):