
Dreamliner

Discussion starter · #1 ·
I have a Corsair HX1050 that decided to self-destruct a couple nights ago... I was using my PC and poof, it just shut off. Fans wouldn't spin and the MB just beeped. Replaced the PSU with a backup and it fired right up. Corsair is sending me an HX1000i as a replacement (I tried for an HX1200i, arguing that I'd lose 50 watts with the HX1000i, but I don't think I won that debate).

Does it make sense to use multiple power cables even if one would do, or is everything connected together behind the connectors inside anyway?

Also, I don't understand the multi-rail and single-rail stuff.

Finally, is Corsair Link just a gimmick?

https://www.corsair.com/ww/en/Power/hxi-series-config/p/CP-9020074-NA

I need 4 PCIe, 9 SATA and 1 Molex connector. I'm trying to figure out where to plug what to make sure the PSU has an 'even' load, if that's even a thing...
 
Single rail means no load balancing to worry about -- in modern power supplies, all of the voltages are derived from the 12V rail anyway.

For the SATA you're going to need all three cables anyway.

For the PCIe I would use all four cables, one connector per cable. Less heat per cable and cleaner power. JayzTwoCents did a test where using separate PCIe cables enabled a higher overclock.
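
Rough napkin math on the "less heat per cable" point -- a minimal sketch, assuming a 300 W card draw split evenly across cables; the numbers are illustrative, not measurements:

```python
# Rough sketch of current per PCIe cable at 12 V. The 300 W card draw and
# the even split across cables are assumed numbers, not measurements.

def cable_current(watts, volts=12.0):
    """Amps flowing through one cable delivering `watts` at `volts`."""
    return watts / volts

gpu_draw = 300.0  # assumed draw over the card's PCIe cables, in watts

one_cable = cable_current(gpu_draw)        # both plugs daisy-chained: ~25 A
two_cables = cable_current(gpu_draw / 2)   # one plug per cable: ~12.5 A each

# Halving the current per cable cuts each cable's I^2*R heating to a quarter.
print(f"single cable: {one_cable:.1f} A, separate cables: {two_cables:.1f} A each")
```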

Just installed an HX1000i in Heatripper Threadkiller. Never used Corsair Link and don't plan to in the future.
 
Discussion starter · #3 ·
I just saw your build thread. Does the HX1000i come single rail by default? It looks like you can configure it...?

JayzTwoCents is one cool cat and I love his water builds. I'm just too nervous to add water to a PC, and I can't get over the cost hurdle or hassle factor. The stability of air and enough savings to buy better gear stopped me from going further. Heck, as much as I like the look of all the water, LED and crazy mods people are doing now, I think the upkeep would drive me nuts. I like the quiet windowless box under the desk I kick every so often. :)

Now I just need that replacement PSU to get here!
 
MNMadman pretty much summed it up. A single rail is easy peasy; a multi-rail setup can get one rail overloaded while the others are barely being used.

I've helped (or seen others help) more than a handful of folks who had to go trial-and-error to figure out which cable goes to which PSU connection.

E.g., with 6 PCIe connections and 4 rails, the CPU is *supposed to* have its own rail, so that leaves two connections on each of the other 3 rails. It's not always easy to see which pair is matched (on the same rail).

Peripherals such as SATA and Molex connections aren't a concern since those aren't sucking down 100+ watts like the CPU/PCIe cables.
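
To make the "both cards on one rail" trap concrete, here's a toy sketch. The socket-to-rail mapping and the 40 A per-rail limit are made-up assumptions; the real mapping usually isn't documented, which is the whole problem:

```python
# Toy model of multi-rail OCP: sum the load on each 12 V rail and compare
# it to an assumed per-rail trip point. The socket-to-rail mapping and the
# 40 A limit are hypothetical -- real units rarely document the mapping.

RAIL_LIMIT_A = 40.0  # assumed per-rail OCP trip point, in amps

# Hypothetical mapping of the PSU's six PCIe sockets to three rails (pairs):
socket_to_rail = {1: "A", 2: "A", 3: "B", 4: "B", 5: "C", 6: "C"}

# Two ~250 W cards, one cable each, unluckily plugged into sockets 1 and 2:
loads_w = {1: 250.0, 2: 250.0}

rail_amps = {}
for socket, watts in loads_w.items():
    rail = socket_to_rail[socket]
    rail_amps[rail] = rail_amps.get(rail, 0.0) + watts / 12.0

for rail, amps in sorted(rail_amps.items()):
    status = "over the assumed OCP limit" if amps > RAIL_LIMIT_A else "fine"
    print(f"rail {rail}: {amps:.1f} A -> {status}")
```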
 
I have the Corsair RM1000i and I don't really understand the switch to multi-rail. It is single rail by default, and you have to install Corsair Link to switch to multi-rail.
BUT there is no documentation on which rail relates to which plug, so I forgot about it and uninstalled the Link software. It's pretty CPU hungry, consistently using 3-4% running in the background.
The Link software is pretty gimmicky. It's cool to see your current draw and efficiency, but you can see all of that in HWMonitor anyway.
 
Discussion starter · #6 ·
Yeah. I was looking at the connectors on the HX1000i and it got me thinking about which connector should go where. That's why I made this thread. I've been doing some mining, so I was trying to figure out which pairs to use for PCIe to not overstress the PSU. I see the 6 connectors on the PSU, and I'm assuming top and bottom is a pair, left to right. My cards need 2 PCIe power connectors each. I was thinking about getting a third card if I found a good deal, but I'm not certain where I'd get the power from.

Also, I've never really figured out an elegant way to route the PCIe power cables. They just kinda loop over the card and back. The EVGA Power Link seems to be the best way to handle that, but my case is too tight to fit them; I already have to insert the cards at an angle just to get them in the case.
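
Rough napkin math for the third-card question -- the per-card and system wattages below are just guesses for illustration, using the 6 PSU sockets mentioned above:

```python
# Quick connector and wattage budget for a possible third card. All per-item
# wattages are rough guesses; 6 PCIe sockets per the post above.

psu_watts = 1000        # HX1000i rating
pcie_sockets = 6        # PCIe sockets on the PSU side

cards = 3
plugs_per_card = 2      # each card needs 2 PCIe power plugs
watts_per_card = 250    # assumed draw per card while mining (a guess)
other_load = 200        # assumed CPU, drives, fans, etc., in watts (a guess)

plugs_needed = cards * plugs_per_card
total_draw = cards * watts_per_card + other_load

print(f"PCIe plugs: need {plugs_needed}, have {pcie_sockets}")
print(f"Estimated draw: {total_draw} W of {psu_watts} W "
      f"({100 * total_draw / psu_watts:.0f}% load)")
```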
 
Just stay with the (default) single rail and use a cable for each connection.

Problems can occur when using multi-rail AND a single cable for each card: with just two of the six PSU connections in use, it's easy to put both cards on the same rail.

I hope I'm not confusing you.
 
Discussion starter · #8 ·
Makes sense to me. I think Corsair is sending me a refurbished unit. Is there any chance it could be configured as multi-rail out of the gate (perhaps configured that way by the previous owner or something)?
 
I would hope it's at factory defaults, so maybe give Corsair Link a shot just to check it out. But either way, using 4 cables for two cards is practically foolproof.
 
The factory default is single rail; you need to install the Link software to switch it to multi-rail.


I used 1 cable per GPU with no problems. There's just no information on multi-rail use -- it's almost like you can switch to a "virtual" multi-rail, but it makes no sense. I switched to multi-rail mode to see if I could hit OCP and there was still no difference, like the multi-rail doesn't exist.
 
There won't be any problem if you use one cable per card. I'm guessing that 99% of people do it that way simply because it means less cable clutter. I'm not one of those people as I prefer the cleaner power delivery, even if there is only a slight difference. When I ordered the CableMod ModMesh custom cables for my HX1000i I deliberately chose to order one 8-pin PCIe cable and one 6-pin PCIe cable, each with one connector. I could have saved myself $15 by doing it the single-cable way. Just one of my quirks...
 
Discussion starter · #12 ·
I thought those CableMod cables were much more expensive. They sure look good!

I agree with you about running separate power cables. Honestly, I didn't even know they made those double-plug cables. Besides, I bet you like seeing all those pretty CableMod cables. I would. :)
 