
APOLLO (2CPU LGA1366 Server | InWin PP689 | 24 Disks Capacity) - by alpenwasser

aw--apollo--logo.png

Table of Contents

01. 2013-NOV-13: First Hardware Testing & The Noctua NH-U9DX 1366
02. 2013-NOV-16: Temporary Ghetto Setup, OS Installed
03. 2014-APR-01: PSU Mounting & LSI Controller Ghetto Test
04. 2014-APR-02: The Disk Racks
05. 2014-APR-08: Chipset Cooling & Adventures in Instability
06. 2014-APR-09: Disk Ventilation
07. 2014-APR-11: Fan Unit for Main Compartment Ventilation
08. 2014-APR-12: Storage Topology & Cabling
09. 2014-APR-26: Storage and Networking Performance
10. 2014-MAY-10: Sound Dampening & Final Pics

Wait, What, and Why?

So, yeah, another build. Another server, to be precise. Why? Well, as nice a
system as ZEUS is, it does have two major shortcomings for its use as a server.

When I originally conceived ZEUS, I did not plan on using ZFS (since it was not
yet production-ready on Linux at that point). The plan was to use ZEUS' HDDs as
single disks, backing up the important stuff. In case of a disk failure, the
loss of non-backed up data would have been acceptable, since it's mostly media
files. As long as there's an index of what was on the disk, that data could
easily be reacquired.

But right before ZEUS was done, I found out that ZFS was production-ready on
Linux, having kept a bit of an eye on it since fall 2012 when I dabbled in
FreeBSD and ZFS for the first time. Using FreeBSD on the server was not an
option though since I was nowhere near proficient enough with it to use it for
something that important, so it had to be Linux (that's why I didn't originally
plan on ZFS).

So, I deployed ZFS on ZEUS, and it's been working very nicely so far. However,
that brought with it two major drawbacks: Firstly, I was now missing 5 TB of
space, since ZFS had tempted me into using that capacity for redundancy, even
for our media files. Secondly, and more importantly, ZEUS is not an
ECC-memory-capable system. The reason this might be a problem is that when ZFS
verifies the data on the disks, a corrupted bit in RAM could cause a
discrepancy between the data in memory and the data on disk, in which case ZFS
would "correct" the data on disk, thereby corrupting it. This is not exactly
optimal IMO. How severe the consequences would be in practice is an ongoing
debate in the various ZFS threads I've read: optimists estimate that it would
merely corrupt the file(s) containing the corrupted bit(s); pessimists are
afraid it might corrupt your entire pool.
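
For anyone unfamiliar with it, the verification in question is a scrub: ZFS
walks every block in the pool, checks it against its checksum, and repairs it
from redundancy where it can. A minimal sketch of what that looks like from the
shell (the pool name is just an example):

    # Kick off a scrub: ZFS reads every block in the pool and verifies it
    # against its checksum, repairing from redundancy where possible.
    # "tank" is an example pool name.
    zpool scrub tank

    # Show scrub progress and any checksum errors found/repaired.
    zpool status -v tank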

The main focus of this machine will be:
  • room to install more disks over time
  • ECC-RAM capable
  • not ridiculously expensive
  • low-maintenance, high reliability and availability (within reason, it's still
    a home and small business server)

Hardware

The component choices as they stand now:
  • M/B: Supermicro X8DT3-LN4F
  • RAM: 12 GB ECC DDR3-1333 (Hynix)
  • CPUs: 2 x Intel L5630 Quad Cores, 40 W TDP each
  • Cooling: 2 x Noctua NH-U9DX 1366 (yes, air cooling!)
  • Cooling: A few nice server double ball bearing San Ace fans will also
    be making an appearance.
  • Case: InWin PP689 (will be modded to fit more HDDs than in stock config)
  • Other: TBD

Modding

Instead of some uber-expensive W/C setup, the main part of actually building
this rig will be modifying the PP689 to fit as many HDDs as is halfway
reasonable, as neatly as possible. I have not yet decided if there will be
painting and/or sleeving and/or a window. A window is unlikely; the rest depends
mostly on how much time I'll have in the next few weeks (this is not a long-term
project, the aim is to have it done way before HELIOS).

Also, since costs for this build should not spiral out of control, I will be
trying to reuse as many of the scrap and spare parts I have lying around as
possible.

Teaser

More pics will follow as parts arrive and the build progresses; for now, a shot
of the case:

(click image for full res)
aw--apollo--2013-11-07--01--pp689.jpeg

That's all for now, thanks for stopping by, and so long.
Very nice build, I love the HDD rack you did, made me think about making something similar for my server. Also I'm pretty sure that I'm gonna steal your chipset cooler idea, it's ridiculous how hot that chip gets.

I'm curious though about your choice to go Arch instead of something more enterprise focused like CentOS or Debian.
Quote:
Originally Posted by alpenwasser

Yes, ridiculous indeed. I noticed that when I started playing with my SR-2, the fan on
that thing was very busy. Good thing I now have a watercooler on that chipset, but I
didn't really feel like going W/C for this build.
Yea, I wouldn't put it under water either, but it would certainly make it an
interesting build.
Quote:
Originally Posted by alpenwasser

Two reasons primarily, the first one being my familiarity with it. I know there are people
who tend to go "Ah, Arch, bleeding edge, unstable!" and all that. But in all honesty, I've
been using it as my daily driver on several machines for three years now, and I've had
only one case of actual proper system breakage, and that was related to Gnome. And
even then, I just did a clean reinstall and was back up and running within about two
hours with all my settings and stuff from before.

I know my way around Arch well enough to feel comfortable with it and be efficient-ish
when needing to troubleshoot, which I can't say for Debian (or FreeBSD, which I actually
also considered at some point and did play around with on another machine for a while),
or other distros (I could learn, of course, but at the moment I'm a bit pressed for time with
college and all, I need this thing up and running sooner rather than later).

Secondly, ZFS support is very good on Arch, whereas I've read a few posts around some
forums which said that ZFS under Debian-based distros is... hinky. I haven't personally
tried it, so I can't speak from personal experience on that one though. I have been using
ZFS on Arch on another machine for about nine months now and it's been working very
well, so I thought I'd deploy it on this machine too.
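
For reference, getting ZFS onto Arch looks roughly like this; the exact package
names depend on whether you pull from the unofficial archzfs repository or
build from the AUR, so treat them as placeholders:

    # Assumes the archzfs repository has been added to /etc/pacman.conf
    # (or that you build the equivalent AUR packages instead).
    pacman -S zfs-linux            # some setups use zfs-dkms + zfs-utils

    # Load the module and make sure pools come back after a reboot
    # (unit names may vary a bit between ZFS-on-Linux versions).
    modprobe zfs
    systemctl enable zfs-import-cache.service zfs-mount.service zfs.target
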
Knowing your way around things certainly is very important, so I get it.


I never really delved into ZFS on Linux, so it's interesting to hear that it's reportedly hinky on Debian, but it's not surprising tbh.

When considering ZFS I always thought that I'd simply go FreeNAS in a VM and be done with it. It looks very appealing to have it in a complete package with all the management in the fancy web interface, though I'm not sure how much use it would really be, as I really don't mind doing things via the CLI; heck, most of the time I prefer it. Crap, writing this down made me want to reconsider this again, good thing I'm still quite a way from migrating to ZFS.
Quote:
Originally Posted by alpenwasser

So basically, "If it ain't broke, don't fix it."
Absolutely!
Quote:
Originally Posted by alpenwasser

Funny you should mention that, I did actually build a w/c server/multimedia rig/boinc machine
last spring/summer.


http://www.alpenwasser.net/images/w800/aw--zeus--2013-06-23--02--complete-open.png

A summary of the build log can be found in this post.
Very nice, love the mod for the radiator!
Quote:
I can't really say too much about Debian, good or bad. I have a buddy who's been using
it extensively and is very happy with it, but the ZFS thing seems to have been a bit neglected
from what I've read.
I'm running some Debian VMs, and they are stable and run fine, so I can't complain, but I really don't like their slow update cycle; some packages on the stable channel are just too old. I know there's always unstable and testing, but this whole concept is just inconvenient tbh. I'm used to apt, and as I said they work fine, so I don't really have a reason to switch to something else, and as you said, if it ain't broke, don't fix it.
Quote:
Yeah, I get that, sometimes having a comfy web interface is rather neat. But like you, I'm quite
fond of my CLI, and ZFS administration via the command line is actually pretty easy, the interface
isn't very complex, and what I've seen of it so far was pretty logical, although I'm definitely no
expert on ZFS (also, even if you don't use Arch, the Arch wiki article on ZFS is actually pretty good).
Yea, I heard the same. Also, I guess it doesn't take much day-to-day maintenance, so the web interface could really be unnecessary.

On the Arch wiki, I really like it too; even though I'm not using Arch (I'm planning to, I just don't have the time to mess around nowadays), I found solutions to quite a few problems there.
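
To give an idea of how simple that CLI actually is, here's a small,
hypothetical sample of day-to-day commands (pool and dataset names are made
up):

    zpool status                          # health of all pools
    zfs list                              # datasets and space usage
    zfs create tank/media                 # new dataset, mounted automatically
    zfs set compression=lz4 tank/media    # tune per-dataset properties
    zfs snapshot tank/media@2014-04-12    # cheap point-in-time snapshot
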
Quote:
What I'd still like to try out is forcefully remove a drive from a pool and do a proper test run for
replacing that disk and rebuilding the array (or, resilvering the pool, as ZFS calls it), but I don't
have that possibility at the moment because I need all my pools online and can't risk any issues
right now.
I'd just throw in some old HDDs if you have some lying around, create a pool with them and mess around with that, or maybe some virtual hard drives if it works with those.
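
For what it's worth, ZFS does work with plain file-backed vdevs, so you can
rehearse the whole fail/replace/resilver cycle without any spare disks at all.
A rough sketch, with made-up paths, sizes and pool name:

    # Build a throwaway pool out of sparse image files.
    for i in 1 2 3 4; do truncate -s 2G /tmp/vdev$i.img; done
    zpool create testpool raidz1 /tmp/vdev1.img /tmp/vdev2.img /tmp/vdev3.img

    # Simulate a failed disk, replace it with the spare image,
    # then watch the pool resilver.
    zpool offline testpool /tmp/vdev2.img
    zpool replace testpool /tmp/vdev2.img /tmp/vdev4.img
    zpool status testpool    # shows resilver progress, then back to ONLINE

    # Clean up when done.
    zpool destroy testpool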