
Premium Member · 16,520 Posts · Discussion Starter · #1
There's been a lot of talk recently about power phases and VRMs, in regards to motherboards and power supplies, and there's a lot of misinformation and misconceptions going around. I'd like to clear the air on this subject a little bit.

First, what is a VRM? A VRM is a voltage regulator module, which is a fancy term for a DC-DC power supply (that is, a PSU that converts one DC voltage into another DC voltage). These VRMs convert the +12V, +5V, and +3.3V rails from your system PSU into the lower voltages (0.75-2.0V) that the chips in your system (the CPU, GPU, northbridge, southbridge, RAM, etc.) need.

These VRMs, as I said, are power supplies, just like the monolithic boxes we're all familiar with under that name. Specifically, they're switch mode power supplies, or SMPS, which use transistors that switch on and off to convert the power. DC-DC SMPS can be based on over a dozen different designs, but nearly all are derived from three basic "topologies": boost, buck, and buck-boost. Boost increases voltage, buck decreases voltage, and buck-boost can output any voltage.
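To make those three topologies concrete, here's a minimal sketch (idealized, lossless formulas only; real converters lose a few percent to switching and resistive losses) of how duty cycle and input voltage set the output voltage in each case:

Code:
# Idealized (lossless) DC-DC conversion ratios - a rough sketch, not a design tool.
# D is the duty cycle: the fraction of each switching period the main transistor is on (0 < D < 1).

def buck(v_in, duty):
    """Buck (step-down): Vout = D * Vin, so the output is always below the input."""
    return duty * v_in

def boost(v_in, duty):
    """Boost (step-up): Vout = Vin / (1 - D), so the output is always above the input."""
    return v_in / (1.0 - duty)

def buck_boost(v_in, duty):
    """Buck-boost: |Vout| = Vin * D / (1 - D); can land above or below the input."""
    return v_in * duty / (1.0 - duty)

# Example: stepping 12V down to ~1.2V for a CPU only needs about a 10% duty cycle.
print(buck(12.0, 0.10))        # 1.2
print(boost(12.0, 0.50))       # 24.0
print(buck_boost(12.0, 0.50))  # 12.0 (magnitude; the real circuit inverts polarity)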

The VRMs on your motherboard are multiphase synchronous buck converters. This is important, but it's a little complicated to explain, so I'll gloss over the details. SMPS have three important characteristics that need to be addressed: efficiency, voltage regulation, and ripple.

Efficiency is the ratio of power output by the PSU to power drawn at its input, expressed as a percentage. Basically, a certain amount of energy is wasted as heat during the conversion process. Higher efficiency means less wasted energy.
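A quick back-of-the-envelope example (illustrative numbers, not measurements from any particular board):

Code:
# Efficiency = P_out / P_in; whatever doesn't come out is dissipated as heat in the VRM.
# All numbers here are made up for illustration.
v_out, i_out = 1.2, 100.0        # CPU rail: 1.2V at 100A
p_out = v_out * i_out            # 120W delivered to the CPU
p_in = 133.3                     # power drawn from the +12V input

efficiency = p_out / p_in        # ~0.90, i.e. about 90% efficient
heat = p_in - p_out              # ~13.3W wasted as heat in the mosfets and inductors
print(round(efficiency, 3), round(heat, 1))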

Voltage regulation is the ability of the PSU to maintain a constant output voltage despite variations in load. With a lower load SMPS tend to output a higher voltage; with a higher load they tend to output a lower voltage. Swings in load can cause unstable fluctuations in voltage. The closer the PSU can keep the output voltage to its nominal value, the better.

Ripple is 'noise' in the electrical output of the PSU, and is present in all SMPS. It looks like random spikes and dips in the output voltage, on the order of several millivolts, happening thousands or millions of times every second. Most major chips in your computer cannot tolerate ripple with a magnitude greater than 5-15mV, so ripple must be kept low at all costs.
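To put regulation and ripple in concrete terms, here's a toy check (hypothetical samples and a hypothetical 15mV budget) of a measured rail against its nominal value:

Code:
# Toy check of voltage regulation and ripple against a budget. Numbers are hypothetical.
nominal = 1.200                  # target rail voltage in volts
samples = [1.204, 1.196, 1.201, 1.195, 1.205]   # fast scope samples of the rail

regulation_error = max(abs(v - nominal) for v in samples)   # worst deviation from nominal
ripple_pk_pk = max(samples) - min(samples)                  # peak-to-peak ripple

print(f"worst-case deviation: {regulation_error*1000:.1f} mV")
print(f"ripple (pk-pk): {ripple_pk_pk*1000:.1f} mV")
print("within 15 mV ripple budget:", ripple_pk_pk <= 0.015)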

The last consideration is cost of parts and manufacturing, which the manufacturer needs to keep low to make a profit.

As I said, the VRMs on your motherboard are multiphase synchronous buck converters. Buck means that the output voltage is lower than the input; synchronous means that it uses two transistors, instead of a transistor and a diode. Multiphase means that it uses multiple sets of transistors, switched out of step with one another.

Here is an example of a single phase synchronous buck converter:
[Schematic: single-phase synchronous buck converter]


Here is an example of the multiphase circuit used in motherboard VRMs (this is a three phase converter):
[Schematic: three-phase (multiphase) buck converter]


Using multiple phases has several benefits. It leads to higher efficiency, lower ripple, and better voltage regulation in response to load swings. It also allows manufacturers to use multiple cheap transistors, instead of two massive, expensive ones. However, this comes at the cost of increased complexity of the circuit and design.
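Roughly speaking, each phase carries an equal share of the load, and because the phases switch out of step with one another their ripple partially cancels. A simple sketch of that sharing (illustrative numbers; real designs never share perfectly evenly):

Code:
# Simple sketch of current sharing across N interleaved phases.
def per_phase_current(total_current, phases):
    # Each phase carries roughly an equal slice of the total load current.
    return total_current / phases

def effective_ripple_frequency(switching_freq_khz, phases):
    # Interleaving makes the combined output ripple repeat N times per switching cycle,
    # so it is smaller in amplitude and easier to filter.
    return switching_freq_khz * phases

print(per_phase_current(120.0, 4))         # 30.0 A per phase with 4 phases
print(per_phase_current(120.0, 8))         # 15.0 A per phase with 8 phases
print(effective_ripple_frequency(300, 4))  # 1200 kHz effective ripple frequency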

Having a VRM with more phases leads to cleaner and more efficient power delivered to your CPU or GPU. However, it DOES NOT necessarily mean that more power can be delivered.

Consider an ATX power supply, like the Corsair TXv2 650W. It's a nice little power supply with a good double forward implementation (derived from a buck converter). But consider the TXv2 850W. The 850W uses the same design and PCB, and many of the same components, as the 650W. But it is capable of delivering 200W more. Why? Because its transistors (both primary and secondary) have a higher current rating, its capacitors have higher capacitance ratings, and its controllers are tuned for higher wattage operation.

Similarly, one motherboard VRM may be able to supply more or less power than another, regardless of how many phases each has. Higher rated transistors allow higher output current and power. More phases are not always necessary.

An 8 phase VRM with transistors rated for 4A each may be able to supply 32A of current to your CPU. But a 4 phase VRM with transistors rated for 10A each would be able to supply 40A of current to your CPU. So the 4 phase VRM in that case is more powerful, but the 8 phase VRM would be more efficient and have less ripple.
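The arithmetic from that example, spelled out:

Code:
# Total deliverable current = number of phases * per-phase (transistor) current rating.
phases_a, rating_a = 8, 4.0     # 8 phases, 4A-rated transistors
phases_b, rating_b = 4, 10.0    # 4 phases, 10A-rated transistors

print(phases_a * rating_a)      # 32.0 A -> the 8 phase VRM tops out lower...
print(phases_b * rating_b)      # 40.0 A -> ...than the 4 phase VRM here,
                                #           even though the 8 phase design runs cleaner.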

I've been prattling on for a while, but I hope you see what I mean. More phases in your VRM guarantee cleaner, more efficient power output, but do not always mean the VRM can supply more power.

However, as a practical consideration, many VRMs with more phases can supply more power. Assuming you want to output 64A, it's usually cheaper to use sixteen 8A transistors than four 32A transistors. So more phases usually make it cheaper to build a more powerful VRM, and a VRM with fewer phases will often (but NOT ALWAYS) be less powerful, since making it more powerful is more expensive.

I just want to dispel all the misunderstandings about what more VRM phases really mean and do for you.
 

Premium Member · 16,520 Posts · Discussion Starter · #2
In regards to current events: the GTX590 isn't having VRM failures because it's using 2x4+1 phase VRMs (that's 4 phases for each GPU core, and two phases for VRAM and ancillary circuits). Four phases could be more than adequate for a GF110 core with high clocks. The reason they're blowing up is that the mosfets used were generic and not rated high enough for the job. Assuming a maximum core current consumption of 200A at 1.00V, they'd need eight high quality 50A mosfets. That would have given plenty of clean power and room to overclock. We aren't sure what they actually used, since datasheets are not available, but it's clear they're riding the line.
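To put rough numbers on that (the 200A at 1.00V figure is the assumption above, not a published spec):

Code:
# Rough math for the GTX590 example; the current and voltage are assumptions, not specs.
core_current = 200.0   # amps per GPU core (assumed worst case)
core_voltage = 1.00    # volts
phases = 4             # core phases per GPU on the reference board

core_power = core_current * core_voltage   # ~200W per core
per_phase = core_current / phases          # 50A that each phase must handle;
                                           # each phase is a pair of mosfets in a
                                           # synchronous buck, hence eight 50A parts.
print(core_power, per_phase)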

To stick with a four phase VRM, they needed to use mosfets with unreasonably high ratings (50A is a LOT). Using a five or six phase VRM would have allowed them to use cheaper mosfets, but would have made the card longer, which would remove a marketing point vs. the HD6990. Nvidia elected to use cheaper mosfets and keep the four phase design. They wanted to have their cake and eat it too, at the customer's expense.

And that is why GTX590 VRMs are exploding.
 

Pink Freud · 2,699 Posts
That did clarify a lot for me.

I was thinking that any non-reference 590 model would be as bad as the stock card for overclocking unless they added more phases and made an extremely huge card.

I don't know if this question applies, but how hot can a generic (non-specific) VRM get before it fails/explodes?
I heard it was 130 Celsius on my 5870, for example, and I've seen pictures of cards working normally at 120 Celsius VRM temps.
 

Premium Member · 16,520 Posts · Discussion Starter · #4
Quote:
Originally Posted by EduFurtado;12862348
I don't know if this question applies, but how hot can a generic (non-specific) VRM get before it fails/explodes?
I heard it was 130 Celsius on my 5870, for example, and I've seen pictures of cards working normally at 120 Celsius VRM temps.
Depends on the VRM in question. Usually you have a current rating at 25C (engineering room temperature), at 100C, and occasionally a maximum temperature rating. It varies depending on the mosfets you're talking about.
 

Premium Member · 9,379 Posts
I'll be adding this to my thread soon (+rep)
 

Registered · 2,526 Posts
Quote:
Originally Posted by Phaedrus2129;12862238
In regards to current events: the GTX590 isn't having VRM failures because it's using 2x4+1 phase VRMs (that's 4 phases for each GPU core, and two phases for VRAM and ancillary circuits). Four phases could be more than adequate for a GF110 core with high clocks. The reason they're blowing up is that the mosfets used were generic and not rated high enough for the job. Assuming a maximum core current consumption of 200A at 1.00V, they'd need eight high quality 50A mosfets. That would have given plenty of clean power and room to overclock. We aren't sure what they actually used, since datasheets are not available, but it's clear they're riding the line.

To stick with a four phase VRM, they needed to use mosfets with unreasonably high ratings (50A is a LOT). Using a five or six phase VRM would have allowed them to use cheaper mosfets, but would have made the card longer, which would remove a marketing point vs. the HD6990. Nvidia elected to use cheaper mosfets and keep the four phase design. They wanted to have their cake and eat it too, at the customer's expense.

And that is why GTX590 VRMs are exploding.
In other words, Nvidia effed up the components selected for proper use
:P
 

Premium Member · 16,520 Posts · Discussion Starter · #7
Quote:
Originally Posted by ezveedub;12862393
In other words, Nvidia effed up the components selected for proper use
:P
More or less, yeah.
 

Registered · 35 Posts
So, in a nutshell: get two 6950s, flash them to 6970s, do a mild overclock; ???? and PROFIT.

It'll be cheaper, offer 6990-level performance, do better than a GTX 590, and have more features available, like Eyefinity and more RAM, while running cooler and drawing less power overall.
 

Premium Member · 16,520 Posts · Discussion Starter · #9
Bumped in light of the GTX590 revision.
 

Premium Member · 4,635 Posts
Quote:
Originally Posted by Phaedrus2129;13509579
Bumped in light of the GTX590 revision.
There's a rev. GTX590??

So what's new on the power circuitry??

CHEERS..
 
