
Does this type of riser exist?

post #1 of 7
Thread Starter 
I'm going to guess this will be a simple 'no', since my particular needs are rather unusual, but I'll ask anyway. For those familiar with flexible x1 risers that use a USB 3.0 cable to make the extension: is there an x8 or x16 variant? No ribbon cables. Just a multitude of USB 3.0 cables.

Why? I want a riser about 1.5 meters long, but no ribbon-cable riser I've found comes in lengths beyond 60 cm, and a ribbon that wide can't be routed easily anyway.

Assuming the answer is no, what would be the reason? Lack of demand, or physical impossibility? I've seen x1 risers reach similar lengths using USB cables, so it's not a question of exceeding some maximum length. Could it be that there would be too many USB cables coming out of each riser? An x16 slot has 164 contacts, and with 9 wires inside a USB 3.0 cable, you would be looking at up to 164 / 9 ≈ 19 separate USB cables per riser. That said, x1 has 36 contacts, yet a single USB cable is sufficient, so I'm guessing there are duplicate contacts (grounds and power) that can be combined to reduce the total number of wires required. Alternatively, could other cables with more wires inside be used? HDMI, DisplayPort and Thunderbolt cables all have about 20 contacts, right? I'm not sure whether that translates to 20 individual wires in the cable, or whether signal interference becomes an issue.
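To sanity-check the arithmetic, here's a quick back-of-envelope in Python (the pin counts are the standard PCI-e edge-connector figures; the nine-wire build is the usual USB 3.0 cable construction):
Code:
# Worst-case cable count: one wire per slot contact, nothing combined.
import math

pins = {"x1": 36, "x4": 64, "x8": 98, "x16": 164}  # PCI-e edge-connector contacts
wires_per_usb3_cable = 9  # 4 legacy USB 2.0 wires + 4 SuperSpeed wires + 1 drain

for width, count in pins.items():
    cables = math.ceil(count / wires_per_usb3_cable)
    print(f"{width}: {count} contacts -> up to {cables} cables")
Since one cable handles the 36-contact x1 case in practice, the real number for x16 should be far below 19 once the duplicated grounds and power pins are consolidated.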

At the end of the day, is there any way to have a video card located 1.5 meters from its motherboard? What about those external video card docks? I'm unfamiliar with them, so I have no real information about them, but if they use USB 3.0, they would still be limited to 5 Gbps, a small fraction of what PCI-e 3.0 x16 or x8 can offer.
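For reference, the raw numbers (theoretical maxima, ignoring protocol overhead) look like this:
Code:
# Theoretical link bandwidth: USB 3.0 vs PCI-e 3.0.
usb3_gbps = 5 * 8 / 10           # 5 GT/s with 8b/10b encoding -> 4 Gbps usable
pcie3_lane_gbps = 8 * 128 / 130  # 8 GT/s with 128b/130b encoding -> ~7.9 Gbps

for lanes in (1, 4, 8, 16):
    gbps = pcie3_lane_gbps * lanes
    print(f"PCI-e 3.0 x{lanes}: {gbps:6.1f} Gbps ({gbps / usb3_gbps:.1f}x USB 3.0)")
An x16 slot offers roughly 126 Gbps, around thirty times what a single USB 3.0 link can carry.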
post #2 of 7
I have personally never seen what you're looking for (a whole bunch of USB 3.0 cables for an x16 PCI-E riser), and the best reason I can come up with is that there's very little demand, plus the fact that PCI-E signals, I believe, start to degrade noticeably after about half a meter of cable. Physical contacts aren't the issue, as each USB 3.0 cable could carry a lane and a half of PCI-E, possibly two lanes, with the outer sheath and connector used for grounding.
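A rough pair count backs up that estimate (assuming only the differential pairs matter and grounds ride on the shield):
Code:
# Pair budget: how many PCI-e lanes could one USB 3.0 cable's pairs carry?
pairs_per_usb3_cable = 3  # 2 shielded SuperSpeed pairs + 1 UTP (USB 2.0) pair
pairs_per_pcie_lane = 2   # one TX pair + one RX pair per lane
print(pairs_per_usb3_cable / pairs_per_pcie_lane, "lanes per cable")  # -> 1.5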

The best thing I've seen was a PCI-E x4 to quad-x1 splitter. It sent the signal out from the slot, through a secondary chip (likely a bridge chip of some kind), then out through USB cables to the x1 electrical slots on the external device, maybe half a meter or so away. It was one of those devices where, if you had to ask for the price, it was too expensive.


The best thing I can suggest, if you need an x16 card 1.5 m away from the board, is to get a bunch of ribbon cables and daisy-chain them together to reach the desired length.
post #3 of 7
The signal starts to fall apart after about 30 cm, though most chips are fairly tolerant and let you get away with twice that. For longer distances, the signal needs to be regenerated/amplified, which is expensive. If you look closely at the USB-based PCI-E x1 extenders, they all ship with short USB 3.0 cables.

The external graphics docks, like the Razer Core, use Thunderbolt 3, which is basically a long-range version of PCI-E 3.0 x4. Reviews have put the performance loss at ~10% compared to running the card in a full x16 slot.
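Quick math on that, taking TB3's PCI-e tunnel as roughly a 3.0 x4 link:
Code:
# How much of a native x16 slot's bandwidth does a TB3 eGPU link offer?
pcie3_lane_gbps = 8 * 128 / 130      # ~7.9 Gbps usable per lane
x16_gbps = 16 * pcie3_lane_gbps      # ~126 Gbps
tb3_pcie_gbps = 4 * pcie3_lane_gbps  # ~31.5 Gbps through the tunnel
print(f"TB3 link: ~{tb3_pcie_gbps / x16_gbps:.0%} of a native x16 slot")  # ~25%
Only about a quarter of the raw bandwidth, so the measured ~10% loss mainly shows that typical gaming workloads don't saturate a full x16 link.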
post #4 of 7
Thread Starter 
Quote:
Originally Posted by Cyrious

The best thing I've seen was a PCI-E x4 to quad X1 splitter that sent the signal out from the slot, through a secondary chip (likely a bridge chip of some kind) then out through USB cables to the x1 electrical slots on the external device maybe half a meter or so away, maybe more, and that overall device was one of those things where if you had to ask for the price it was too expensive.

Yeah, I've seen those. Amfeltec sold them for $200 a piece. It was one of the first things I looked at, back when I was under the false impression that GPU distributed computing projects required almost no bandwidth, like cryptocurrency mining. It turns out the PCI-e bandwidth gets saturated very easily: mid-range cards would see performance losses below 2.0 x8, and apparently the GTX 1080 chews through 60% of 3.0 x16.
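To put that 60% figure in absolute terms (rough numbers):
Code:
# What "60% of PCI-e 3.0 x16" works out to.
x16_gbps = 16 * 8 * 128 / 130  # ~126 Gbps theoretical
used_gbps = 0.60 * x16_gbps
print(f"~{used_gbps:.0f} Gbps (~{used_gbps / 8:.1f} GB/s) of host<->GPU traffic")
# -> ~76 Gbps, well over double what a 3.0 x4 link (or a TB3 tunnel) could carry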

At the moment, I'm waiting for a reply from another company about something that might work. Here's an example of one such product. While I'm not entirely sure how it works, from the looks of things you plug one end into the motherboard, and then a cable, offered in lengths of up to 7 meters, connects to a second card that sits on a dual-slot backplane. It sounds ridiculously expensive, and it probably is, but I sent them an email requesting prices anyway, and asking them to confirm whether this would work the way I need it to.

What I'm getting from this is: if cable lengths can reach 7 meters, why are flexible risers limited in length? Could it simply be that they use the thinnest, absolute bare-minimum wire gauge to get the job done, resulting in excessive resistance? If so, would simply using thicker wire allow you to extend the length?
post #5 of 7
Quote:
Originally Posted by hiigaran

What I'm getting from this is that if cable lengths can be 7 meters, why are flexible risers limited in length? Could it simply be that they use the thinnest, absolute bare minimum diameter of wire to get the job done, resulting in excessive resistance? If that's the case, would simply using thicker wire allow you to extend the length?
I think it's because PCI-E is a rather delicate spec in terms of timings and signal transmission, which is why the length of physical PCI-E lanes is limited. For an unshielded cable, the signal starts weakening and degrading due to capacitance and other issues after about half a meter, which is why common long riser ribbons usually top out around that length. There's also the fact that the lanes need a large number of cable pairs, and even more wires for grounds, which makes things more complex.

I know someone several months back did a quick review using a bunch of PCI-E ribbon cables and an x16 video card to see how performance degraded with distance. His review showed a small (~1% or so) but detectable performance loss. Those cables were somewhat shielded (the ribbons were wrapped in foil tape from the factory), so it's possible that an unshielded cable of that size in an electrically noisy environment would suffer much worse.

The device you linked likely gets around the signalling issue by using the chips on the cards as amplifiers and signal cleaners before passing the signals on to the device's backplane/motherboard for final transport to the target device, and it probably handles the actual box-to-box transmission through fiber optics of some kind, if the size of the connector blocks on the cards is anything to go by.
post #6 of 7
Thread Starter 
That's what I was thinking as well. So if additional components are added to the mix, I wonder if the added latency would reduce performance.
post #7 of 7
Quote:
Originally Posted by hiigaran

That's what I was thinking as well. So if additional components are added to the mix, I wonder if the added latency would reduce performance.
There would be some performance hit versus running the card directly in the slot, yes. Bridge chips and lane splitters all impose some latency penalty on the bus, and converting the signal from electrical to optical, transmitting it, converting it back to electrical, then doing it all again on the return trip, all while maintaining signal integrity, would impose quite a bit more.

So yes, there will be a performance hit of some kind.