

557 Posts
Discussion Starter #1

Table of Contents

01. 2013-NOV-13: First Hardware Testing & The Noctua NH-U9DX 1366
02. 2013-NOV-16: Temporary Ghetto Setup, OS Installed
03. 2014-APR-01: PSU Mounting & LSI Controller Ghetto Test
04. 2014-APR-02: The Disk Racks
05. 2014-APR-08: Chipset Cooling & Adventures in Instability
06. 2014-APR-09: Disk Ventilation
07. 2014-APR-11: Fan Unit for Main Compartment Ventilation
08. 2014-APR-12: Storage Topology & Cabling
09. 2014-APR-26: Storage and Networking Performance
10. 2014-MAY-10: Sound Dampening & Final Pics

Wait, What, and Why?

So, yeah, another build. Another server, to be precise. Why? Well, as nice a
system as ZEUS is, it does have two major shortcomings for its use as a server.

When I originally conceived ZEUS, I did not plan on using ZFS (since it was not
yet production-ready on Linux at that point). The plan was to use ZEUS' HDDs as
single disks, backing up the important stuff. In case of a disk failure, the
loss of non-backed up data would have been acceptable, since it's mostly media
files. As long as there's an index of what was on the disk, that data could
easily be reacquired.

But right before ZEUS was done, I found out that ZFS was production-ready on
Linux, having kept a bit of an eye on it since fall 2012 when I dabbled in
FreeBSD and ZFS for the first time. Using FreeBSD on the server was not an
option though since I was nowhere near proficient enough with it to use it for
something that important, so it had to be Linux (that's why I didn't originally
plan on ZFS).

So, I deployed ZFS on ZEUS, and it's been working very nicely so far. However,
that brought with it two major drawbacks: Firstly, I was now missing 5 TB of
space, since I had been tempted by ZFS to use those for redundancy, even for our
media files. Secondly, and more importantly, ZEUS is not an ECC-memory-capable
system. The reason this might be a problem is that when ZFS verifies the data on
the disks, a corrupted bit in your RAM could cause a discrepancy between the
data in memory and the data on disk, in which case ZFS would "correct" the data
on your disk, therefore corrupting it. This is not exactly optimal IMO. How
severe the consequences of this would be in practice is an ongoing debate in
various ZFS threads I've read. Optimists estimate that it would merely corrupt
the file(s) containing the affected bit(s); pessimists are afraid it might
corrupt your entire pool.
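The failure mode described above can be illustrated with a toy model. This is just an illustration of the idea, not how ZFS actually implements its checksumming; the checksum function and the block contents here are made up:

```python
import hashlib

def checksum(data: bytes) -> str:
    # Stand-in for the per-block checksum ZFS keeps (SHA-256 here).
    return hashlib.sha256(data).hexdigest()

# A healthy block: the data on disk matches its recorded checksum.
disk_block = b"media file contents"
recorded = checksum(disk_block)
assert checksum(disk_block) == recorded

# Flip one bit in the in-memory copy, as faulty non-ECC RAM might.
corrupted = bytearray(disk_block)
corrupted[0] ^= 0x01
corrupted = bytes(corrupted)

# A scrub now sees a mismatch between the in-memory data and the
# recorded checksum, and a "repair" based on the corrupted in-memory
# copy would overwrite the good on-disk data:
assert checksum(corrupted) != recorded
```

With ECC RAM the flipped bit would be caught and corrected before the comparison ever happens, which is the whole point of requiring it for this build.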

The main focus of this machine will be:
  • room to install more disks over time
  • ECC-RAM capable
  • not ridiculously expensive
  • low-maintenance, high reliability and availability (within reason, it's still
    a home and small business server)


The component choices as they stand now:
  • M/B: Supermicro X8DT3-LN4F
  • RAM: 12 GB ECC DDR3-1333 (Hynix)
  • CPUs: 2 x Intel L5630 Quad Cores, 40 W TDP each
  • Cooling: 2 x Noctua NH-U9DX 1366 (yes, air cooling!)
  • Cooling: A few nice server double ball bearing San Ace fans will also
    be making an appearance.
  • Case: InWin PP689 (will be modded to fit more HDDs than in stock config)
  • Other: TBD


Instead of some uber-expensive W/C setup, the main part of actually building
this rig will be modifying the PP689 to fit as many HDDs as is halfway
reasonable, as neatly as possible. I have not yet decided if there will be
painting and/or sleeving and/or a window. A window is unlikely, the rest depends
mostly on how much time I'll have in the next few weeks (this is not a long-term
project, aim is to have it done way before HELIOS).

Also, since costs for this build should not spiral out of control, I will be
trying to reuse as many scrap and spare parts as possible from what I have
laying around.


More pics will follow as parts arrive and the build progresses, for now a shot of the

(click image for full res)

That's all for now, thanks for stopping by, and so long.

557 Posts
Discussion Starter #2
First Steps

Hardware Tested

M/B, CPUs and memory have all arrived. The CPUs and M/B seem to be working OK.
One of the memory modules seems to be having a bit of trouble being recognized,
the other five work fine. I'll see if it's really defective or if it's just the
IT gods screwing with me a bit.

The Noctua NH-U9DX 1366

The Noctua NH-U9DX 1366 is a cooler from Noctua's series specifically made for
Xeon sockets. For those who don't know, LGA1366 sockets have an integrated
backplate, just like LGA2011, which makes them much more convenient than their
desktop counterparts. It's quite a nice and sturdy backplate, too, in fact it's
among the most solid backplates I've come across yet. This does, however,
require a slightly different mounting system. You just have four screws which
you bolt directly into the plate.

Aside from that, the cooler is identical to its desktop counterpart as far as I
know. Why the 92 mm version? For one thing, it was in stock, unlike the 120 mm
version of this cooler. Also, the CPUs only produce 40 W TDP each, so there
really is no need for high-end cooling. And as a bonus, I got supplied some
awesome San Ace fans with my case, which also happen to be 92 mm.

The Noctua fans which come with the cooler are just 3-pin fans (the newer models
of this cooler for LGA2011 come with a PWM fan, I think), but the San Ace fans I
got with my case are actually PWM-controlled! Since the M/B has a full set of
PWM headers (8, to be exact, how awesome is that!?) I will try the San Ace fans
and see how they play at lower rpm (they run at 4,800 rpm at full speed). This
does not need to be a super-silent machine since it will be in its own room, and
since I really like the San Ace fans with regards to build quality (and I'm a
total sucker for build quality) I'd love to use them for this. The Noctuas would
admittedly be better suited, but I'll see how things go with the San Aces first.
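For a rough feel of what PWM control buys here, a toy interpolation between the observed idle (~2,200 rpm) and full-speed (4,800 rpm) readings. Real fan response curves aren't linear, so treat this as a ballpark sketch only:

```python
def est_rpm(duty: float, idle_rpm: float = 2200.0, max_rpm: float = 4800.0) -> float:
    """Ballpark fan speed for a PWM duty cycle between 0.0 and 1.0,
    assuming a linear response between the board's observed idle
    speed and the fan's full speed (an assumption, not a datasheet)."""
    duty = min(max(duty, 0.0), 1.0)  # clamp to the valid duty range
    return idle_rpm + duty * (max_rpm - idle_rpm)

print(round(est_rpm(0.0)))  # 2200, roughly what the board idles at
print(round(est_rpm(1.0)))  # 4800, full speed
print(round(est_rpm(0.5)))  # 3500
```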

The Box

Unlike its shiny desktop counterparts, the NH-U9DX comes in a nice and subtle
(but sturdy) cardboard box with a simple sticker on it. I must admit I like this
box more than the shiny ones.

(click image for full res)


How it looks packaged...

(click image for full res)

... and out in the open.

(click image for full res)

Noctua Pr0n

A few glory shots of the cooler itself...

(click image for full res)

(click image for full res)

The San Ace 9G0912P1G09

There is no info about this fan on the web, I'm presuming it's something San Ace
makes specifically for InWin in an OEM deal.

I've hooked it up to a fan controller and got a max reading of 4,800 rpm, and
the Supermicro board turns them down to ~2,200 rpm on idle. They seem to be very
good fans, you can only really hear the sound of the air moving, no bearing or
motor noises so far. Also, they are heavy (~200 g per piece), which is always
nice for a build quality fetishist such as myself.

Note: Hooking such a fan up to a desktop board as its power source would not be
advisable; they are rated for 1.1 A and might burn out the circuits on a desktop
board. Server boards usually have better fan power circuitry since they are
designed with high-performance fans in mind. Just as a side note.
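The quick numbers behind that warning, assuming the usual 12 V fan rail. The ~1 A desktop header limit is a common rule of thumb, not a figure from any specific board's manual:

```python
FAN_RAIL_V = 12.0        # standard fan supply voltage
SAN_ACE_RATED_A = 1.1    # rated current from the fan's label
DESKTOP_HEADER_A = 1.0   # typical desktop header limit (rule of thumb)

fan_power_w = FAN_RAIL_V * SAN_ACE_RATED_A
print(f"{fan_power_w:.1f} W per fan")                          # 13.2 W per fan
print("exceeds header:", SAN_ACE_RATED_A > DESKTOP_HEADER_A)   # True
```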

(click image for full res)

Compared to the Noctua fan which comes with the coolers. I might still go with
the Noctuas, but it's not the plan at the moment.

(click image for full res)

The Noctua NH-U9DX 1366 San Ace Edition

I had to improvise a bit with mounting the San Aces to the tower. The clips
which you'd use with the Noctua fans rely on the fan having open corners, which
the San Aces do not. Ah well, nothing a bit of cotton cord can't fix.

(click image for full res)

And the current config in its full glory:

(click image for full res)

Side note: The coolers were actually more expensive than the CPUs. :lol:

That's it for now, thanks for stopping by.

back at it
4,765 Posts
Subbed for another build from you man...
Great work

557 Posts
Discussion Starter #4
Originally Posted by barkinos98 View Post

Subbed for another build from you man...
Great work
Thanks, I appreciate that!

I got the RAM Thursday and the rig has passed several rounds of memtest by now, so I think
the hardware is OK.

557 Posts
Discussion Starter #5
Up and Running, Ghetto Style

Hardware Validation

I've put the system together temporarily to validate the M/B, CPU and memory, so
far all seems good. A minimal Arch Linux setup has been installed and is
successfully running BOINC at the moment.

I'm not running BOINC as a hardware validation tool, that's not what it's
designed to do. I have (mostly) validated the hardware and am now just running
it on the side.

Just to clarify.


Gotta love low-power CPUs, core temps after about an hour of running BOINC on
all cores are:
31 C, 31 C, 35 C, 30 C,
32 C, 26 C, 29 C, 31 C

(click image for full res)

Feast on the Ghetto-ness!


(click image for full res)

Next Up

I'll need to order some supplies for modding the front part of the case for more
HDDs. Still not sure if I'll paint it. Can't paint it in the apartment, and
temps in my workshop in the basement have dropped significantly since we now
have just a few degrees above freezing outside, so conditions for spray painting
are not optimal at all at the moment.

450 Posts
Would you know any sites/pages where I could find some good info on the whole server topic? I have a very faint idea of the whole thing, but the specifics are something that I'm keen on studying. You seem to know your way around servers, so I wanted to ask where you acquired your knowledge.

Also I'll be having a look at this interesting build too.

557 Posts
Discussion Starter #7
Originally Posted by Jakewat View Post

Would you know any sites/pages where I could find some good info on the whole server topic? I have a very faint idea of the whole thing, but the specifics are something that I'm keen on studying. You seem to know your way around servers, so I wanted to ask where you acquired your knowledge.

Also I'll be having a look at this interesting build too.
Servethehome has some nice articles. They don't just look at the latest and greatest hardware, but also
stuff of interest to the home server enthusiast which you can get on eBay for cheaps (similar to the components
for this build). Also, just googling around for specific questions I might have.

Aside from that I have also found browsing the catalogues of server hardware vendors very helpful (Supermicro,
Tyan, Asus, LSI, Intel). It gives you a general idea of what components are available for what platform, and when
you find something that might be of interest to you, you can do more research on that specific part via Google and
see if you can get it on eBay (it can be kind of tricky to find the right stuff on eBay with generic searches such as
'lga 1366 server board' or something like that, lots of stuff gets missed which you only find with specific searches
such as 'Supermicro X8DT3' etc., you get the idea).

TBH though it's not all that complex, it's more about getting yourself acquainted with product lines which aren't
that known to the normal user (so, for example, finding out what kind of Xeons are available for the LGA1366
socket and which ones might be right for your needs), the procedure is pretty much the same as when a new
desktop product line is launched. The primary caveat is of course that info (reviews etc.) is more difficult to find,
sometimes even downright impossible, so you need to rely much more on spec sheets, manuals and the
occasional tidbit to find out whether or not some component (say, a M/B) is right for you. The nice part about
pro-grade hardware is that the documentation is often pretty good (for example, Supermicro's manual for my
motherboard has been very helpful), so you can often get a good idea of a product's capabilities by doing some
careful reading.

As for the software side of things (which could be argued to be the more important aspect): I've been using
exclusively Linux for quite a few years now (not to sound elitist, it's just a statement of fact), and for most
server-like tasks there are some pretty decent resources available for Linux (I'm assuming for Win Server as
well, but I'm not up-to-date on that front), so usually I just have a look at the Arch Wiki, and if it's not in
there I look around Google
and Youtube to see if there are any tutorials available for what I wish to do (for instance, setting up a DHCP
server on my machine, although I'm not yet sure if I'll actually do that). On a side note: That's also pretty cool
about server-grade hardware: Good Linux support.

Then there remains the topic of networking and security, for which I have found Eli the Computer Guy's channel
very helpful. Although I've only just started to delve deeper into that side of things, so there's still lots to learn.

For example, I recently did some research on Cisco since, for one thing, we have one of their routers, and secondly,
I was thinking about buying a managed switch from them. Then I found out that they'd done backdoor firmware
updates for a few of their router series and have been involved in quite a few controversies (for example, they
seem to have been helping China build its great firewall). There are also rumours that they've been helping
the NSA with their snooping around (say, sending info about their routers' users browsing and downloading
habits to the man?), although I must say that these are just rumours and I haven't been able to find anything
definite on that topic.

Still, I'm paranoid enough that this has motivated me to avoid prebuilt closed-source networking equipment,
and now I want to implement my own solution to make sure my equipment only really does what I actually
tell it to do. However, I'm not yet far enough along in my research (and finances) to implement something
proper in that regard. But yeah, I feel an urge to get rid of my Cisco router/access point and build my own
equipment, for which there remains a significant amount of work to do.

Sorry for the long post, but that's what I do, apparently


450 Posts
The long posts are what separate you from most other forumers, which IMO is a good thing; not many people have the time and patience to do what you do, and it should be appreciated.
Anyway, thanks for the info. I will be sure to do a bit of research on the matter and as I have done to gain my knowledge of desktops, watch, read, and scroll through info

2,704 Posts
On the more casual side of home servers you might want to check out WeGotServed.com - you won't find much detail on larger, more targeted server builds and apps on there (definitely more home-based than enterprise) but if you're looking for a solution for personal cloud, media streaming, backup repository, Intranet, groupware type servers it's a very noob-friendly site. More MS biased than many forums/blogs, but also more accessible than some of the more comprehensive ones - and definitely less of the "if you like a GUI you're a noob and not worth our time" type of thing you can run into on the enterprise-linux targeted sites sometimes.

Not surprisingly, this is also looking like a thorough and unique build/project alpenwasser... subbed!

557 Posts
Discussion Starter #11
Mounting the PSU, Testing LSI Controller


Yeah, it's taking a lot longer to finish this than I'd initially hoped (doesn't
it always with these sorts of projects...). But I've been working on it over the
last few weeks and now finally have something to share.

PSU Fitting Issue

The PSU slides into this case through an opening from behind, and since the case
isn't really made for normal ATX-sized PSUs (but for server PSUs instead), it's
a rather tight fit. To be more specific: the PSU in its stock config does not
fit; the screws for the fan grill and the fan grill itself bump up against the
case. An easy fix though, I just needed to remove the fan grill on the PSU.

(click image for full res)

And voilà:

(click image for full res)

Furthermore, since server PSUs usually blow air along their longitudinal axis,
there is no ventilation hole in the case for a fan on the top of the PSU, which
most of today's PSUs have. Not to worry, I still had an old Aquacomputer rad
grill laying around. A bit of dremeling should be able to fix this problem.
Marking for cutting:

(click image for full res)

And with the grill mounted:
(click image for full res)

Bracket Collision

Another issue with mounting a PSU that wasn't intended for this case: the
bracket for the PSU does not quite line up correctly with the power inlet.
(click image for full res)

The power plug can still be connected, but the PSU sits crooked in the case and
it's a huge pain to mount like this.

(click image for full res)

Again, a little bit of cutting was required:

(click image for full res)

To give you an idea of how the PSU fits into the case:
(click image for full res)

(click image for full res)

Needed to hook up some HDDs to test the LSI controller. Looks very ghetto, worked
like a charm.

(click image for full res)

(click image for full res)

Next Up...

Manufacturing the drive cages, the so-called pièce de résistance...

So long

Starting to become a BOFH
10,471 Posts
Ok... I'm interested.

557 Posts
Discussion Starter #13
Originally Posted by legoman786 View Post

Ok... I'm interested.

It's not going to be super pretty (not worth the time and money, especially since I don't
really have either at the moment), but it will hold 24 3.5" disks in the final config,
which I'm hoping should be enough for the foreseeable future and provide some sort
of eye-candy at least for storage fetishists.

557 Posts
Discussion Starter #14
The Disk Racks

A.k.a. the main part of this undertaking.

As mentioned elsewhere, one of the two main problems of our current server is
that it only has seven HDD slots, and they're already all filled up. The only
way to get more storage is to install larger disks, which isn't really all that
cost-effective.

One of the main points of this build was to have more disk slots. The PP689 only
offers four in its stock form, which you can upgrade to a maximum of thirteen
drives. You would need to buy another four-disk enclosure (which btw. I could
not find anywhere to buy), and a five-disk enclosure for the 5.25" bays. Since
13 drives aren't really that many, and since these enclosures aren't exactly
cheap, I decided to go another route.

It took me a while to figure out how to do it, but in the end this is what I
came up with. I had very generous help from one of my neighbours, who has a mill
and a lathe at his disposal, as well as plenty of time (he's a pensioner).

So off we went:

The Mill

(click image for full res)

First Steps

(click image for full res)

(click image for full res)

The mill can also serve as a drill press. The drill chuck looks ridiculously
huge when you put a small drill bit into it (he said they didn't have the
smaller model in stock when he needed to buy his, so he went with the large
one).

(click image for full res)

Stumbled upon this when going through my pics. My dog's girlfriend, basically
(she's a labrador and belongs to another one of our neighbours). I was
dogsitting her for an evening a few weeks back. She can be a bit hyperactive at
times, but is a very lovely dog.

(click image for full res)

Drilling and Milling

Lots of holes needed to be drilled for the pop rivets that were going to hold it
all together.

(click image for full res)

(click image for full res)

Milling out the slots for the screwheads:

(click image for full res)

Phase I Complete

The side panels of the disk racks completed. Testing with some broken old HDDs
I had laying around to make sure it all fits as it should. It does.

(click image for full res)

(click image for full res)

(click image for full res)

Rail Detail

This is how the construct looks on the side where you slide in the disks. You
can see the pop rivets I used to assemble it, the slots (pictured being milled
above) for the screwheads, and the screws on the disks. You can also see the
recesses into which the screws mounted on the HDDs lock. The system works
very well.

(click image for full res)

(click image for full res)


Obviously, 24 HDDs are going to put out some heat, so some ventilation is
required. I'm using six Papst fans for that. The fans will be bolted onto the
panels with some L profiles. Unfortunately, 120 mm fans have a 105 mm hole
spacing, and HDDs are ~100 mm wide, so it's not possible to fasten the fans on
both sides; only two screws can be used. It's not really a problem though, two
screws tightened down nicely give sufficient stability.

(click image for full res)

(click image for full res)

(click image for full res)

(click image for full res)

Mounting Brackets

The panels are mounted to the bottom and top of the case with screws. To have
some leeway in adjusting things, there are slots instead of round holes in some
places.

(click image for full res)


(click image for full res)


(click image for full res)

Fan Mounting

Since the fan screws need to be tightened rather firmly, they exert quite a bit
of pressure on the fan frames. To prevent the fan frames from being crushed
and/or cracking, we made some brass bushings that take the brunt of the
pressure.

(click image for full res)

And Mounted

And finally the disk racks are mounted inside the case.

(click image for full res)

(click image for full res)

Disk Mounting

The disks just slide into the slots and lock into place in the recesses you can
see above. Since I can't tighten the screws, I'm using Loctite to prevent them
from falling out due to vibration. I tried to get some screws similar to those
Lian Li uses for their HDD mounting, but the only ones I could find were so
expensive that they'd have cost me more than 100 USD. So yeah, nope...

(click image for full res)

There's still lots to do, but that was by far the most work-intensive part of
this build; it took us quite a while to get it done. And no, it won't be painted
or anything, the server will stand in a closed room in our apartment anyway. I'd
have loved to make it all pretty and nice, but at the moment I just don't have
the time.

So long, and until next time.


557 Posts
Discussion Starter #15
Chipset Cooling, Adventures in Instability

As some may be aware, I originally had some issues when trying to
get this machine to run stably. While stress testing with mprime,
it repeatedly and reproducibly crashed after less than an hour,
sometimes even after just a few minutes. Each time after
crashing, it took me several tries and about 10 to 20 minutes to
get the board to POST again.

After some troubleshooting and running a few diagnostics, it
turned out that the 5520 chipset was running really hot. Its
temperature threshold as indicated by the system is 95 degrees
Celsius, and when I was last able to check on it before a crash,
it had already passed 85 deg C, so I suspected that it was bumping
up against the threshold, upon which the board did an emergency
shutoff and mandated a cooldown period until it would run again.
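The kind of check involved here can be sketched in a few lines. The 95 °C threshold is the figure reported by the board; the sensor file path and the 10-degree warning margin below are my own choices for illustration (actual hwmon paths depend on the driver and system):

```python
from pathlib import Path

THRESHOLD_C = 95.0  # threshold reported by the system for the 5520 chipset

def read_temp_c(sensor_file: Path) -> float:
    # Linux hwmon sysfs files report temperatures in millidegrees Celsius.
    return int(sensor_file.read_text().strip()) / 1000.0

def check(sensor_file: Path, margin_c: float = 10.0) -> str:
    """Classify the chipset temperature; 'warning' covers readings like
    the 85+ deg C seen shortly before the crashes."""
    t = read_temp_c(sensor_file)
    if t >= THRESHOLD_C:
        return "critical"
    if t >= THRESHOLD_C - margin_c:
        return "warning"
    return "ok"
```

Usage would look something like `check(Path("/sys/class/hwmon/hwmon0/temp1_input"))`, with the hwmon index depending on the system.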

As an emergency fix, I took the 80 mm San Ace fan that came with
the case and mounted it to the chipset heatsink with some waxed
cotton cord, and voilà somewhere slightly above 70 deg C maximum.

Unfortunately I forgot to take pictures of that rather ghetto
setup before dismantling it again and replacing it with something
more solid, but I have managed to blow up some sections from
another picture that should at least give you an idea of how it
looked.

Some Improvisation

Apologies for the horrid picture quality, as said this is a blowup
from a picture of which this section is only a small part.
(click image for full res)

A More Permanent Solution

The chipset heatsink is just your run-of-the-mill aluminium heatsink,
held on by a spring clamp with some hooks.

(click image for full res)

And the naked chipset after cleaning off the TIM. That stuff was a
***** to get off, it had dried up rather significantly.

(click image for full res)

Since the 80 mm fan is quite a bit larger than the chipset
heatsink itself, I needed to either replace the heatsink or modify
it in order to be able to mount the fan to it. I took a
rather crude, but very effective approach: I took an L piece of
aluminium, drilled two holes across the heatsink, cut some M4
threads on those two holes (which worked despite the holes only
going through the fins and not being continuous), then bolted the
L piece to the heatsink with two M4 screws. Works like a charm.

Don't mind the unclean alu bits from the drilling and cutting on
the heatsink between the fins; it wasn't really possible to
properly clean that off and make the holes as clean as one usually
would.

(click image for full res)

And from the other side...

(click image for full res)

The fan itself is held down by three screws, two in the L piece...

(click image for full res)

... and one in the corner of the heatsink itself. The bent fins
are from drilling and cutting the thread; they got a bit
structurally weak at their edges due to that. Doesn't impair
functionality, so not such a big deal since it won't be visible
anyway.

(click image for full res)

And the whole package:

(click image for full res)

The heatsink unit mounted on the M/B. You need to unmount the fan
to do that. You can again see the bent fins here.

(click image for full res)

And mounted, with the fan:

(click image for full res)

That's it for today, thanks for stopping by.


557 Posts
Discussion Starter #16
Disk Ventilation

Although disks have become quite frugal when it comes to
power consumption these days (at least some of them) and HDD
cooling is not really a huge issue for most people, packing
24 disks as closely together as in this build will cause
heat issues without ventilation. There is no need for 3k rpm
Delta fans though, a whiff of cool air breezing over the
disks should do the job nicely.

For this purpose, as you may have seen in some previous
pics, I have chosen six 120 mm Papst fans, specifically the
4412 GLL model, and am running them at 7 V. The fans draw
air in through a vent area, and it then gets passed through
the M/B compartment and out the back.

Each fan is fixed to a rail riveted to one of the disk rack
panels with two screws.

You've seen this before, but for completeness' sake I'm
adding the pics of the bushings used to prevent the fan
frames from being crushed to this update as well:

(click image for full res)

I exchanged the copper screws for some silver ones, and in
the process added some dampening foam between the mounting
rails and the fan frame.

(click image for full res)

The whole fan panel assembly:

(click image for full res)

While doing some test runs, I noticed that a rather large
amount of air was being expelled through the front of the
case instead of going into the M/B compartment and out the
back (I wasn't really surprised by this seeing as how open
the front was). Obviously, this was not optimal. So I took a
1.5 mm panel of alu and bolted it to the front.

Because the existing front has a few folds in it, I needed
to do some cutting on the case first.

(click image for full res)

(click image for full res)

(click image for full res)

(click image for full res)

After having done that, I turned my attention to the side
panel, making an opening for the ventilation. I thought of
several ways of doing this, but all of them were a bit more
complicated than I'd have liked them to be. Cutting such a
big hole with a dremel isn't really practical, so I
considered doing it with our jigsaw, but after doing a few
test cuts I didn't really like the result as I couldn't get
a straight enough cut. And the cut needed to be clean,
because there's no space to fit a U channel over the edge,
and I don't really like the idea of covering it up on the
inside.

Anyway, the guy just used a nice big angle grinder for the
cut, and since he's a metal worker by trade, it turned out
almost perfectly straight (not 100%, but it's still cut by
hand, after all). After that, I painted the bare edge
with some model paint to not have the blank metal staring at
me.
I thought about painting the mesh, but at the moment I don't
really have the time, plus I kind of like the look of this
bare piece of alu, so I've left it as-is.

(click image for full res)

The mesh doesn't cover the whole fan area (nor is it very
open with those rather narrow slots), but there is no need
for high-power ventilation here, so this is not a big deal.

(click image for full res)

It's fixed to the inside of the panel with some double-sided
adhesive tape.

(click image for full res)

And in its final config:

(click image for full res)

Drive temperatures hover between 28 deg C and 35 deg C at
the moment, ambient is about 23 deg C.
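One way to keep an eye on 24 drives is to pull the temperature attribute out of `smartctl -A` output. The parsing below is only a sketch (SMART attribute layouts vary by vendor and model), and the 25-45 °C comfort band is my own rule of thumb, not a spec:

```python
import re
from typing import List, Optional

def parse_temp(smartctl_output: str) -> Optional[int]:
    # Look for SMART attribute 194 (Temperature_Celsius) and grab the
    # raw value; layouts differ between vendors, so treat this as a sketch.
    m = re.search(r"Temperature_Celsius.*?(\d+)\s*(?:\(|$)", smartctl_output, re.M)
    return int(m.group(1)) if m else None

def out_of_band(temps: List[int], lo: int = 25, hi: int = 45) -> List[int]:
    # Indices of drives outside a comfortable band (thresholds are taste).
    return [i for i, t in enumerate(temps) if not lo <= t <= hi]

line = "194 Temperature_Celsius 0x0022 119 099 000 Old_age Always - 31 (Min/Max 18/41)"
print(parse_temp(line))            # 31
print(out_of_band([28, 31, 35]))   # [] -- all within band
```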

Until next time,

557 Posts
Discussion Starter #17
Triple Fan Unit

As hinted at earlier, the airflow in this build will go from
the front compartment through the middle wall into the M/B
compartment and out the back.

This is pretty much how the stock configuration works,
except in that the air gets in through the front panel, not
through the side panel.

Unfortunately I forgot to take pics of the stock config, but
luckily tweaktown.com did a review on this case and took them
for me.

Source article where I got the image from can be found here.

In the stock config, the 92 mm fans are mounted inside
some plastic fan cages that allow quick and toolless fan
replacement in case of failure.

(click image for full res)

And without the fan cages:

(click image for full res)

Originally I just screwed the fans to two aluminium L
profile bars.

(click image for full res)

(click image for full res)

(click image for full res)

It was fixed to the middle wall with double-sided adhesive
tape. It's very strong stuff, so the fan unit falling off
was not a concern. Additionally, the tape has some thickness
to it, which should provide some dampening between the fan
unit and the middle wall.

(click image for full res)

Unfortunately, due to some bumps on the middle wall getting in
the way, the tape on the rear angle didn't make proper contact
with the wall. It held, but not very well.

Additionally, I noticed that there were rather strong
vibrations on the middle wall. It turned out that the tape
did indeed offer some decoupling, but it also did not
add any strength to the middle wall (i.e. no additional
stiffness), which meant the wall could easily vibrate.

(click image for full res)

So, I took the unit out, and while I was at it, I also cut
out some recesses for the fans which I didn't bother doing
before. I also put some dampening foam between the fans and
the alu angles.

(click image for full res)

Aaand of course I mounted the fans the wrong way
round. Sigh.

(click image for full res)

Disassemble again, reassemble.

(click image for full res)

Also: Foam between the alu angles and the wall itself:
(click image for full res)

This time I bolted it to the wall with some screws. Much
more solid now, no more vibrations.

(click image for full res)

How it looks from the other side:

(click image for full res)


557 Posts
Discussion Starter #18
Storage Topology & Cabling

Storage Topology

In case you can't read the text, the full res version should
be more easily readable.

(click image for full res)

The idea behind the storage topology is based on one concept:
any one of the three LSI controllers can fail, and I still
have all my data available.

You'll see below that I haven't yet gotten around to
installing the Velociraptor.

I use coloured zip ties to mark the cables that go to the
different controllers.

BLUE = controller 0
YELLOW = controller 1
GREEN = controller 2
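The "any one controller can fail" goal can be sanity-checked with a small sketch. The build's exact vdev layout isn't spelled out here, so the mirror pairs below are hypothetical; the point is just that no vdev may lose more members to a single controller than its redundancy allows:

```python
CONTROLLERS = {0: "blue", 1: "yellow", 2: "green"}  # zip-tie colour coding

def survives_controller_loss(vdevs, redundancy):
    """vdevs: list of vdevs, each a list of (disk, controller) pairs.
    redundancy: how many members each vdev can lose without data loss.
    True if the pool survives the failure of any single controller."""
    for failed in CONTROLLERS:
        for members in vdevs:
            lost = sum(1 for _disk, ctrl in members if ctrl == failed)
            if lost > redundancy:
                return False
    return True

# Hypothetical mirror pairs, each split over two different controllers:
mirrors = [[("d0", 0), ("d1", 1)], [("d2", 1), ("d3", 2)], [("d4", 2), ("d5", 0)]]
print(survives_controller_loss(mirrors, redundancy=1))  # True

# Counterexample: both halves of a mirror on the same controller:
print(survives_controller_loss([[("d0", 0), ("d1", 0)]], redundancy=1))  # False
```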


There isn't really any space to hide the cables, so this was
rather tricky and took three attempts until I was satisfied
with the result. In the end I hid the extra cable behind the
triple fan unit; good thing they're 38 mm fans, which makes
the space behind them just about large enough to fit the
extra cable bits.

The power cables for the disks are two cables that came with
the PSU, onto which I just put a lot more connectors after
taking off the stock ones, because those were neither placed
in the correct locations nor facing in the right direction.

Looks harmless, right? Yeah...

(click image for full res)

And the disks:
(click image for full res)

OK then, first try:
(click image for full res)

I soon realized that this wasn't going to work. The problem
was that I had the disks arranged in the same way as they
will be set up in the storage pool layout, so the disks
which go into the same storage pool were also mounted below
each other. Sounds nice in theory, but if you want to
have disks from each pool distributed among the different
controllers, you get quite the cable mess.

(click image for full res)

(click image for full res)

Second Try

Next try; this time I arranged the disks so that the cables
to the controllers could be laid out better. Since I wanted
to set up the cables for all the disk slots, even the ones
that will stay empty for now, I had to shuffle the disks
around while laying out the cables.

(click image for full res)

(click image for full res)

(click image for full res)

Better. But I still wasn't quite happy, mainly because...

(click image for full res)

(click image for full res)

(click image for full res)

... of this:

(click image for full res)

Third Try

This time I made sure the cables stayed tidy on both ends
while hiding the mess (which cannot be avoided since all
cables are the same length but lead to different end points,
obviously) behind the triple fan unit.

(click image for full res)

The loop of extra cable length for the top cable loom:

(click image for full res)

And the cable loom for controller 0, from the disk side...

(click image for full res)

and the M/B side. Much better IMHO.

(click image for full res)

The bottom controller had a bit more extra cable length to hide, so
that part is a bit messier.

(click image for full res)

And the middle one:
(click image for full res)

Tada! While it's not perfect (cleaner runs would need longer
cables, and I'm not buying more cables just for that on a
build with a closed side panel), I'm now rather happy with
this iteration of the cabling:

(click image for full res)

(click image for full res)

(click image for full res)

And the other side. Much better than before methinks.

(click image for full res)

(click image for full res)

The SATA cable for the system SSD:

(click image for full res)

And the controller LEDs when there's some activity:

(click image for full res)

Now if you'll excuse me, there's a dinner waiting to be
eaten.


490 Posts
Very nice build, I love the HDD rack you did, made me think about making something similar for my server. Also I'm pretty sure that I'm gonna steal your chipset cooler idea, it's ridiculous how hot that chip gets.

I'm curious though about your choice to go Arch instead of something more enterprise focused like CentOS or Debian.

557 Posts
Discussion Starter #20
Originally Posted by Aximous View Post

Very nice build, I love the HDD rack you did, made me think about making something similar for my server.
Thanks, I appreciate the compliment!

Originally Posted by Aximous View Post

Also I'm pretty sure that I'm gonna steal your chipset cooler idea, it's ridiculous how hot that chip gets.
Yes, ridiculous indeed. I noticed that when I started playing with my SR-2, the fan on
that thing was very busy. Good thing I now have a watercooler on that chipset, but I
didn't really feel like going W/C for this build.

Originally Posted by Aximous View Post

I'm curious though about your choice to go Arch instead of something more enterprise focused like CentOS or Debian.
Two reasons primarily, the first one being my familiarity with it. I know there are people
who tend to go "Ah, Arch, bleeding edge, unstable!" and all that. But in all honesty, I've
been using it as my daily driver on several machines for three years now, and I've had
only one case of actual proper system breakage, and that was related to Gnome. And
even then, I just did a clean reinstall and was back up and running within about two
hours with all my settings and stuff from before.

I know my way around Arch well enough to feel comfortable with it and be efficient-ish
when needing to troubleshoot, which I can't say for Debian (or FreeBSD, which I actually
also considered at some point and did play around with on another machine for a while),
or other distros (I could learn, of course, but at the moment I'm a bit pressed for time with
college and all, I need this thing up and running sooner rather than later).

Secondly, ZFS support is very good on Arch, whereas I've read a few posts around some
forums which said that ZFS under Debian-based distros is... hinky. I haven't personally
tried it, so I can't speak from personal experience on that one though. I have been using
ZFS on Arch on another machine for about nine months now and it's been working very
well, so I thought I'd deploy it on this machine too.

So basically, "If it ain't broke, don't fix it."
1 - 20 of 30 Posts