21 - 30 of 30 Posts

· Registered
Joined
·
490 Posts
Quote:
Originally Posted by alpenwasser View Post

Yes, ridiculous indeed. I noticed that when I started playing with my SR-2, the fan on
that thing was very busy. Good thing I now have a watercooler on that chipset, but I
didn't really feel like going W/C for this build.
wink.gif
Yea, I wouldn't put it under water either
smile.gif
But it would certainly make it an interesting build
rolleyes.gif
Quote:
Originally Posted by alpenwasser View Post

Two reasons primarily, the first one being my familiarity with it. I know there are people
who tend to go "Ah, Arch, bleeding edge, unstable!" and all that. But in all honesty, I've
been using it as my daily driver on several machines for three years now, and I've had
only one case of actual proper system breakage, and that was related to Gnome. And
even then, I just did a clean reinstall and was back up and running within about two
hours with all my settings and stuff from before.

I know my way around Arch well enough to feel comfortable with it and be efficient-ish
when needing to troubleshoot, which I can't say for Debian (or FreeBSD, which I actually
also considered at some point and did play around with on another machine for a while),
or other distros (I could learn, of course, but at the moment I'm a bit pressed for time with
college and all, I need this thing up and running sooner rather than later).

Secondly, ZFS support is very good on Arch, whereas I've read a few posts around some
forums which said that ZFS under Debian-based distros is... hinky. I haven't personally
tried it, so I can't speak from personal experience on that one though. I have been using
ZFS on Arch on another machine for about nine months now and it's been working very
well, so I thought I'd deploy it on this machine too.
Knowing your way around things certainly is very important, so I get it
smile.gif


I never really delved into ZFS on Linux, so it's interesting to hear that it's hinky on Debian, but it's not surprising TBH.

When considering ZFS I always thought that I'd simply go FreeNAS in a VM and be done with it. It looks very appealing to have it as a complete package with all the management in the fancy web interface, though I'm not sure how much use it would really be, as I don't mind doing things via the CLI; heck, most of the time I prefer it. Crap, writing this down made me want to reconsider it again; good thing I'm still quite a way from migrating to ZFS
smile.gif
Quote:
Originally Posted by alpenwasser View Post

So basically, "If it ain't broke, don't fix it."
biggrin.gif
Absolutely!
 

· Registered
Joined
·
556 Posts
Discussion Starter · #22 ·
Quote:
Originally Posted by Aximous View Post

Yea, I wouldn't put it under water either
smile.gif
But it would certainly make it an interesting build
rolleyes.gif
Funny you should mention that; I did actually build a W/C server/multimedia rig/BOINC machine
last spring/summer.
biggrin.gif


aw--zeus--2013-06-23--02--complete-open.png


A summary of the build log can be found in this post.

Quote:
Originally Posted by Aximous View Post

Knowing your way around things certainly is very important, so I get it
smile.gif
Yes indeed.
smile.gif

Quote:
Originally Posted by Aximous View Post

I never really delved into ZFS on Linux, so it's interesting to hear that it's hinky on Debian, but it's not surprising TBH.
I can't really say too much about Debian, good or bad. I have a buddy who's been using
it extensively and is very happy with it, but the ZFS thing seems to have been a bit neglected
from what I've read.
Quote:
Originally Posted by Aximous View Post

When considering ZFS I always thought that I'd simply go FreeNAS in a VM and be done with it. It looks very appealing to have it as a complete package with all the management in the fancy web interface, though I'm not sure how much use it would really be, as I don't mind doing things via the CLI; heck, most of the time I prefer it. Crap, writing this down made me want to reconsider it again; good thing I'm still quite a way from migrating to ZFS
smile.gif
Yeah, I get that, sometimes having a comfy web interface is rather neat. But like you, I'm quite
fond of my CLI, and ZFS administration via the command line is actually pretty easy: the interface
isn't very complex, and what I've seen of it so far was pretty logical, although I'm definitely no
expert on ZFS (also, even if you don't use Arch, the Arch wiki article on ZFS is actually pretty good).
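To give an idea of what I mean by "not very complex": the handful of commands I actually touch in
day-to-day use boils down to roughly this (pool/dataset names here are just placeholders):

Code:
zpool status tank              # pool health and state of the individual disks
zpool list                     # capacity overview of all pools
zfs list                       # datasets and how much space they use
zpool scrub tank               # verify all data against its checksums
zfs get compressratio tank     # how well compression is doing, if enabled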

What I'd still like to try out is forcefully remove a drive from a pool and do a proper test run for
replacing that disk and rebuilding the array (or, resilvering the pool, as ZFS calls it), but I don't
have that possibility at the moment because I need all my pools online and can't risk any issues
right now.
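For reference, the drill itself would be pretty short; it should go roughly like this (pool and
device names are made up, this is not something I've actually run on my live pools):

Code:
zpool offline tank sdc      # simulate the failure by taking the disk out of service
zpool replace tank sdc sdd  # resilver onto the replacement disk
zpool status tank           # shows resilver progress and an ETA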
 

· Registered
Joined
·
490 Posts
Quote:
Originally Posted by alpenwasser View Post

Funny you should mention that; I did actually build a W/C server/multimedia rig/BOINC machine
last spring/summer.
biggrin.gif


http://www.alpenwasser.net/images/w800/aw--zeus--2013-06-23--02--complete-open.png

A summary of the build log can be found in this post.
Very nice, love the mod for the radiator
thumb.gif
Quote:
I can't really say too much about Debian, good or bad. I have a buddy who's been using
it extensively and is very happy with it, but the ZFS thing seems to have been a bit neglected
from what I've read.
I'm running some Debian VMs, and they are stable and run fine, so I can't complain, but I really don't like the slow update cycle; some packages on the stable channel are just too old. I know there's always unstable and testing, but that whole concept is just inconvenient TBH. I'm used to apt, and as I said they work fine, so I don't really have a reason to switch to something else, and as you said, if it ain't broke, don't fix it
smile.gif
Quote:
Yeah, I get that, sometimes having a comfy web interface is rather neat. But like you, I'm quite
fond of my CLI, and ZFS administration via the command line is actually pretty easy: the interface
isn't very complex, and what I've seen of it so far was pretty logical, although I'm definitely no
expert on ZFS (also, even if you don't use Arch, the Arch wiki article on ZFS is actually pretty good).
Yeah, I've heard the same; also, I guess it doesn't take much day-to-day maintenance, so the web interface could really be unnecessary.

As for the Arch wiki, I really like it too; even though I'm not using Arch (I'm planning to, I just don't have the time to mess around nowadays), I've found solutions to quite a few problems there.
Quote:
What I'd still like to try out is forcefully remove a drive from a pool and do a proper test run for
replacing that disk and rebuilding the array (or, resilvering the pool, as ZFS calls it), but I don't
have that possibility at the moment because I need all my pools online and can't risk any issues
right now.
I'd just throw in some old HDDs if you have some lying around, create a pool with them and mess around with that, or maybe some virtual hard drives if it works with those.
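For what it's worth, ZFS does accept plain files as vdevs, so a throwaway test pool doesn't even
need real disks; something along these lines should do (paths are just examples):

Code:
truncate -s 1G /tmp/vdev1 /tmp/vdev2 /tmp/vdev3   # sparse files standing in for disks
zpool create testpool raidz /tmp/vdev1 /tmp/vdev2 /tmp/vdev3
zpool status testpool                             # behaves like any other pool
zpool destroy testpool                            # clean up when done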
 

· Registered
Joined
·
556 Posts
Discussion Starter · #24 ·
Quote:
Originally Posted by Aximous View Post

Very nice, love the mod for the radiator
thumb.gif
Thanks, I'm rather fond of the machine too.
smile.gif

Quote:
Originally Posted by Aximous View Post

I'm running some Debian VMs, and they are stable and run fine, so I can't complain, but I really don't like the slow update cycle; some packages on the stable channel are just too old. I know there's always unstable and testing, but that whole concept is just inconvenient TBH. I'm used to apt, and as I said they work fine, so I don't really have a reason to switch to something else, and as you said, if it ain't broke, don't fix it
smile.gif
I must say that the rolling release thing is one of the aspects I really like about
Arch. Not so much because of up-to-date packages (although that's nice too),
but because I just never had to bother with major release updates and the hoopla
that can sometimes go with those.

There are updates which delve a bit deeper into the system, but not very often
(for example, when they introduced signed packages, or when they switched to
systemd), and in those cases, they had always prepared a very smooth update
path with clear and helpful instructions, so for me it was pretty much smooth
sailing even in those cases.
smile.gif

Quote:
Originally Posted by Aximous View Post

Yeah, I've heard the same; also, I guess it doesn't take much day-to-day maintenance, so the web interface could really be unnecessary.
I suppose a proper web interface would have its upsides, but I'd say it would need
to cover more than just ZFS admin to make sense. For ZFS administration alone it's
a bit of overkill, since once you've created your storage pools you rarely touch the
ZFS tools anymore, except to get some stats or kick off a scrub.
Quote:
Originally Posted by Aximous View Post

As for the Arch wiki, I really like it too; even though I'm not using Arch (I'm planning to, I just don't have the time to mess around nowadays), I've found solutions to quite a few problems there.
I'd just throw in some old HDDs if you have some lying around, create a pool with them and mess around with that, or maybe some virtual hard drives if it works with those.
Yeah, I know that problem, there just isn't enough time to do everything I'd
like to do. As said above, I tinkered around with FreeBSD for a while, but in
the end I just didn't have the time to really get to know the system well enough
to feel comfortable to actually use it in production. Besides, FreeBSD is not
rolling release (although maybe I could use ArchBSD, but that's still a very
small and young project).

Originally I started out with Gentoo in 2004, then I took a break from Linux for
a few years when I was in the army and got back into it around 2007 with Ubuntu,
which I used until 2011 when I switched to Arch. I've been wanting to try Gentoo
again for a while now, but just haven't had the time.

Ah well, such is life.
wink.gif
 

· Registered
Joined
·
556 Posts
Discussion Starter · #25 ·
Storage and Networking Performance

Beware: This post will be of little interest to those
who are primarily in it for the physical side of
building. Instead, this update will be about the performance
and software side of things. So, lots of text, lots of
numbers.
biggrin.gif


These results are still somewhat preliminary since I'm not
yet 100% sure if the hardware config will remain like this
for an extended period of time (I really want to put another
12 GB of RAM in there, for example, and am considering
adding some SSD goodness to my ZFS pools), nor am I
necessarily done with tuning software parameters, but it
should give some idea of what performance I'm currently
getting.

As you may recall from my previous update, I'm running three
VMs on this machine, two of which are pretty much always on
(the media VM and my personal VM), and the third of which is
only active when I'm pulling a backup of my dad's work
machine (apollo-business).

NOTE: I know there's lots of text and stuff in my
screenshots and it may be a bit difficult to read. Click
on any image to get the full-res version for improved
legibility.
smile.gif


The storage setup has been revised somewhat since the last
update. I now have a mirrored ZFS pool in ZEUS for backing
up my dad's business data (so, in total his data is on six
HDDs, including the one in his work machine). His data is
pulled onto the apollo-business VM from his work machine,
and then pulled onto ZEUS. The fact that neither the
business VM nor ZEUS is online 24/7 (ZEUS is physically
turned off most of the time) should provide some decent
protection against most mishaps; the only thing I still
need to implement is a proper off-site backup plan (which
I will definitely do, to cover unforeseen disasters,
break-ins/theft and so on).
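
For illustration, a pull of this kind boils down to a single command if it's done with
something like rsync over SSH (hostnames and paths here are made up, not my exact setup):

Code:
# on ZEUS, pull the business dataset from the backup VM:
rsync -aH --delete apollo-business:/srv/business/ /tank/business-backup/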

(click image for full res)
aw--apollo--2014-04-26--01--apollo-zeus-storage.png

The Plan

For convenience's sake, I was planning on using NFS for
sharing data between the server and its various clients
on our network. Unfortunately, I was getting some rather
disappointing benchmarking results initially, with only ~60
MB/s to ~70 MB/s transfer speeds between machines.

Tools

I'm not really a storage benchmarking expert, and at the
moment I definitely don't have the time to become one, so
for benchmarking my storage I've used dd for the time
being. It's easy to use and is pretty much standard for
every Linux install. I thought about using other storage
benchmarks like Bonnie++ and FIO, and at some point I might
still do that, but for the time being dd will suffice for my
purposes.

For those not familiar with this: /dev/zero basically
serves as a data source for lots of zeroes, while /dev/null is a
sink into which you can write data without it being written
to disk. So, if you want to do writing benchmarks to your
storage, you can grab data from /dev/zero without needing to
worry about a bottleneck on your data source side, and
/dev/null is the equivalent when you wish to do reading
benchmarks. To demonstrate this, I did a quick test below
directly from /dev/zero into /dev/null.

Basically. It's a bit of a simplification, but I hope it's
somewhat understandable.
wink.gif
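
Concretely, the dd invocations are along these lines (paths and sizes are only examples):

Code:
# pure pipe, no disks involved at all:
dd if=/dev/zero of=/dev/null bs=1M count=10000
# write test: dump zeroes onto the storage under test, sync before dd exits:
dd if=/dev/zero of=/tank/test/ddfile bs=1M count=10000 conv=fdatasync
# read test: read the file back and throw it away:
dd if=/tank/test/ddfile of=/dev/null bs=1M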


Baseline

Before doing storage benchmarks across the network, we
should of course get a baseline for both the storage setup
itself and the network.

The base pipe from /dev/zero into /dev/null has a
transfer speed of ~9 GB/s. Nothing unexpected, but it's a
quick test to do and I was curious about this:

(click image for full res)
aw--apollo--2014-04-26--02--baseline--dev-zero-dev-null.png

For measuring network throughput I used iperf; here's a screencap from one
of my test runs. The machine it was running on was my personal
VM.

Top to bottom:
- my dad's Windows 7 machine
- APOLLO host (Arch Linux)
- HELIOS (also Windows 7 for the time being, sadly)
- ZEUS (Arch Linux)
- My Laptop via WiFi (Arch Linux)
- APOLLO business VM (Arch Linux)
- APOLLO media VM

The bottom two results aren't really representative of
typical performance; usually it's ~920 Mbit/s to ~940
Mbit/s, but as with any setup, outliers happen.

(click image for full res)
2014-04-21--17-33-30--iperf.png
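
For anyone who hasn't used iperf: it's simply a server on one end and a client on the
other, e.g. (hostname is just an example):

Code:
# on the machine being tested against:
iperf -s
# on the client, run a 10-second throughput test:
iperf -c apollo -t 10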

The networking performance is where I hit my first hiccup.
I failed to specify to the VM which networking driver it was
supposed to use, and the default one does not exactly have
stellar performance. It was an easy fix though, and with the
new settings I now get pretty much the same networking
performance across all my machines (except the Windows ones,
those are stuck at ~500 Mbit/s for some reason as you can
see above, but that's not hugely important to me at the
moment TBH).
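
In case anyone trips over the same thing: on a QEMU/KVM setup (to pick a concrete
example), the fix amounts to giving the guest a virtio NIC instead of the emulated
default. A minimal sketch, with a made-up bridge and disk image:

Code:
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/var/lib/vms/guest.img,format=raw \
    -netdev bridge,id=net0,br=br0 \
    -device virtio-net-pci,netdev=net0

With libvirt-managed guests, the same thing is just the NIC model set to virtio in
the domain definition.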

This is representative of what I can get most of the time:

(click image for full res)
aw--apollo--2014-04-26--03--baseline--network.png

I had a similar issue with the storage subsystem at first:
the default caching parameters were not very conducive
to high performance and resulted in some pretty bad results:

(click image for full res)
aw--apollo--2014-04-26--04--baseline--cache-writethrough.png

Once I fixed that, though, things looked much better, and were
sufficient to saturate a gigabit network connection.

(click image for full res)
aw--apollo--2014-04-26--05--baseline--cache-none.png
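
Again taking QEMU/KVM as the example, the knob in question is the per-drive cache
mode (that's where the names in the two screenshot file names above come from).
A minimal sketch with a made-up device path:

Code:
# writethrough was the slow setting; the second run above used cache=none:
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/dev/disk/by-id/ata-EXAMPLE,if=virtio,cache=none,format=raw \
    -netdev bridge,id=net0,br=br0 \
    -device virtio-net-pci,netdev=net0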

Networking Benchmark Results

Initially, I got only around 60 MB/s over NFS; after that, the
next plateau was somewhere between 75 MB/s and 80 MB/s; and the
graphic below shows the current situation. I must say I find the
results to be slightly... peculiar. Pretty much everything
I've ever read says that NFS should offer better performance
than CIFS, and yet, for some reason, in many cases that was
not the result I got.

I'm not yet sure if I'll be going with NFS or CIFS in the
end to be honest. On one hand, CIFS does give me better
performance for the most part, but I have found NFS more
convenient to configure and use, and NFS' performance at
this point is decent enough for most of my purposes.
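
For reference, the NFS side really is simple to set up; the whole thing is essentially
an export line on the server and a mount on the client, something like this (path,
subnet and options are only examples, not my exact setup):

Code:
# on the server, /etc/exports:
#   /srv/tank/media  192.168.1.0/24(rw,async,no_subtree_check)
exportfs -ra                      # reload the export table
# on the client:
mount -t nfs -o rsize=1048576,wsize=1048576 apollo:/srv/tank/media /mnt/media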

In general, I find the NFS results just rather weird
TBH. But they have been reproducible over different runs on
several days, so for the time being I'll accept them as what
I can get.

Anyway, behold the mother of all graphics!
biggrin.gif


(click image for full res)
aw--apollo--2014-04-26--06--network-benchmarks.png

FTP

As an alternative, I've also tried FTP, but the results were
not very satisfying. This is just a screenshot from
one test run, but it is representative of the various other
test runs I did:

(click image for full res)
2014-04-19--19-59-00--ftp.png

ZFS Compression

Also, for those curious about ZFS' compression (which was
usually disabled in the above tests because zeroes are very
compressible and would therefore skew the benchmarks), I did
a quick test to compare writing zeroes to a ZFS pool with
and without compression.

This is CPU utilization without compression (the grey bars
are CPU time spent waiting for I/O, not actual work the CPU
is doing):

(click image for full res)
2014-04-21--19-41-25--zfs-nocompression-zeroes.png

And this was the write speed for that specific test run:
(click image for full res)
2014-04-21--19-45-01--zfs-nocompression-zeros-transfer-speed.png

With lz4 compression enabled, the CPU does quite a bit more
work, as expected (though it still seems that you don't
really need a very powerful CPU to make use of this):

(click image for full res)
2014-04-21--19-39-59--zfs-lz4-zeroes.png

And the write speed goes up almost to a gigabyte per second,
pretty neat if you ask me.
biggrin.gif


(click image for full res)
2014-04-21--19-40-47--zfs-lz4-zeroes-transfer-speed.png

Side note: ZFS' lz4 compression is allegedly smart enough
not to try to compress incompressible data, such as media
files which are already compressed, which should prevent
such writes from being slowed down. Very nice IMHO.
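
For anyone who wants to play with this: enabling it and checking what it actually
gains you is one command each (dataset name is just an example):

Code:
zfs set compression=lz4 tank/media             # enable lz4 on a dataset
zfs get compression,compressratio tank/media   # current setting and achieved ratio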

That's it for today. What's still left to do at this point
is installing some sound-dampening materials (the rig is a
bit on the loud side, even though it's in its own room),
and possibly upgrading to more RAM, the rest will probably
stay like this for a while. If I really do upgrade to more
RAM, I'll adjust the VMs accordingly and run the tests
again, just to see if that really makes a difference. So far
I have been unable to get better performance from my ZFS
pools by allocating more RAM, or even running benches
directly on the host machine with the full 12 GB RAM and
eight cores/sixteen threads.

Cheers,
-aw
 

· Registered
Joined
·
159 Posts
I am very impressed with what you have got so far! I am actually going to be doing a server build here in the near future with an old Chieftec Dragon case. You may have mentioned this already and I just glanced over it, but what RAID/SAS card are you using in this build? I've been trying to find something that is semi-inexpensive to run for my build!

Again, looks great so far!
 

· Registered
Joined
·
556 Posts
Discussion Starter · #27 ·
Quote:
Originally Posted by waffles3680 View Post

I am very impressed with what you have got so far! I am actually going to be doing a server build here in the near future with an old Chieftec Dragon case. You may have mentioned this already and I just glanced over it, but what RAID/SAS card are you using in this build? I've been trying to find something that is semi-inexpensive to run for my build!

Again, looks great so far!
Thanks for the compliments, appreciate it!
smile.gif


The controller cards are LSI 9211-8i. You can get them on eBay new-in-box for ~100 USD
(retail price where I live is currently still ~350 USD, so I'd say that's a pretty good deal). If
you're doing a ZFS build, you'll probably want to flash them to IT mode (for which I have
a tutorial on another forum, might put it on this forum as well if needed). I've done that with
all three cards and they run flawlessly so far.

Alternatively, you could also look for IBM M1015 cards, which are actually LSI 9210-8i
(the 9210-8i is an OEM model that was only sold as such, not directly by LSI to end
consumers). Many people have had success crossflashing it to the 9211-8i firmware,
though I know of at least one person for whom that didn't work and who had to fall back
to an older 9210-8i firmware.
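
Very roughly, the IT-mode flash itself is done with LSI's sas2flash tool (sas2flsh under
DOS) and boils down to the steps below. The firmware/BIOS file names depend on the
package you download, so treat this strictly as an outline and follow a proper guide
before touching your card:

Code:
sas2flash -listall                         # make sure the controller is detected
sas2flash -o -e 6                          # erase the existing flash (do NOT reboot now)
sas2flash -o -f 2118it.bin -b mptsas2.rom  # write the IT firmware and boot ROM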

Let me know if you have any more questions, I'll be happy to answer any I can.
smile.gif
 

· Registered
Joined
·
556 Posts
Discussion Starter · #28 ·
Sound Dampening, Final Pics

As mentioned previously, the 92 mm fans are rather noisy,
but I didn't want to replace them. For one thing, I do
actually need some powerful fans to move air from the HDD
compartment into the M/B compartment; for another, I didn't
feel like spending more money on expensive fans.

For this purpose, I ordered some AcoustiPack foam in various
thicknesses (12 mm, 7 mm and 4 mm) and lined parts of
the case with them. I wasn't quite sure how well they
would work, as my past experiences with acoustic dampening
materials weren't all that impressive, but to my surprise,
they're actually pretty damn effective.

I have also put in another 12 GB of RAM. I was lucky enough
to get six 2 GB sticks of the exact same RAM I already had
for 70 USD (plus shipping and fees, but still a pretty good
price IMHO) from eBay. 24 GB should easily suffice for my
purposes.

Lastly, I've repurposed the 2.5" drive cage from my Caselabs
SMH10; cleaner than the rather improvised mount from before.

For the time being, the build is now pretty much complete.

Cost Analysis

One of the original goals was to not have this become
ridiculously expensive. Uhm, yeah, you know how these things
usually go.
rolleyes.gif


Total system cost: ~5,000 USD
of which were HDDs: ~2,500 USD

My share of the total cost is ~42%, the remainder was on my
dad, which is pretty fair I think. In the long run, my share
will probably rise as I'll most likely be the one paying for
most future storage expansions (at the moment I've paid for
~54% of the storage cost, and ~31% of the remaining
components).

One thing to keep in mind, though, is that some of these costs
go back a while, as not all HDDs were bought for this server
but have been migrated into it from other machines. So the
actual project cost was about 1,300 USD lower.

Overall I'm still pretty happy with the price/performance
ratio. There aren't really that many areas where I could
have saved a lot of money without also taking noticeable
hits in performance or features.

I could have gone with a single-socket motherboard, or a
dual socket one with fewer features (say, fewer onboard
SAS/SATA ports as I'm not using nearly all of the ones this
one has due to the 2 TB disk limit), but most of the
features this one has I wouldn't want to miss TBH (the four
LAN ports are very handy, and IPMI is just freaking
awesome). And let's be honest: A dual-socket board just
looks freaking awesome (OK, I'll concede that that's not the
best argument, but still, it does!).
biggrin.gif


Other than that, I could have gone with some cheaper CPU
coolers as the 40 W CPUs (btw., core voltage is ~0.9 V
biggrin.gif
)
don't really require much in that area, but the rest is
pretty much what I want and need for an acceptable price.

Anyway, enough blabbering:

Final Pics

So, some final pics (I finally managed to acquire our DSLR
for these):

(click image for full res)
aw--apollo--2014-05-10--01--acoustifoam-front.jpeg

(click image for full res)
aw--apollo--2014-05-10--02--acoustifoam-side-panel.jpeg

(click image for full res)
aw--apollo--2014-05-10--03--outside.jpeg

(click image for full res)
aw--apollo--2014-05-10--04--open.jpeg

(click image for full res)
aw--apollo--2014-05-10--05--open.jpeg

That Caselabs drive cage I mentioned. The top drive is the
WDC VelociRaptor.

(click image for full res)
aw--apollo--2014-05-10--06--2.5-inch-cage.jpeg

And some more cable shots, because why not.

(click image for full res)
aw--apollo--2014-05-10--07--cables.jpeg

(click image for full res)
aw--apollo--2014-05-10--08--cables.jpeg

(click image for full res)
aw--apollo--2014-05-10--09--cables.jpeg

Looks much better with all RAM slots filled IMHO.
biggrin.gif


(click image for full res)
aw--apollo--2014-05-10--10--cables-and-ram.jpeg

(click image for full res)
aw--apollo--2014-05-10--11--cables.jpeg

(click image for full res)
aw--apollo--2014-05-10--12--chipset-fan.jpeg

(click image for full res)
aw--apollo--2014-05-10--13--cpu-coolers.jpeg

(click image for full res)
aw--apollo--2014-05-10--14--ram.jpeg

(click image for full res)
aw--apollo--2014-05-10--15--back-side.jpeg

It's kinda funny: Considering how large the M/B compartment
actually is, it's pretty packed now with everything that's
in there. The impression is even stronger in person than on
the pics.

(click image for full res)
aw--apollo--2014-05-10--16--front-side.jpeg

Thanks for tagging along everyone, and until next time!
smile.gif
 

· Registered
Joined
·
476 Posts
Nice to see this all finished. For what you have in there it is actually quite tidy, and I like how there is still plenty of space for a few more drives. How quiet is this thing? Because with those fans and that insulation it will barely sound like it is even on.

Now to finish your other project lol.
 

· Registered
Joined
·
556 Posts
Discussion Starter · #30 ·
Quote:
Originally Posted by Jakewat View Post

Nice to see this all finished. For what you have in there it is actually quite tidy, and I like how there is still plenty of space for a few more drives. How quiet is this thing? Because with those fans and that insulation it will barely sound like it is even on.

Now to finish your other project lol.
Thanks!
smile.gif


Yes, having room for additional HDDs was part of the concept, not needing
to buy another machine when we need more storage. That's also one of the
reasons I've already bought all necessary controllers (aside from the storage
topology and reducing single points of failure etc.).

Quiet isn't really the word I'd use to describe it to be honest. It's not very loud
(anymore), but it's still too noisy to have under the table in an office (at least
for my tastes). But, when I close the door to the room in which it is placed it
is no longer audible with the foam lining, and I can work in the room (it's our
apartment workshop) without being bothered by its noise.

Also, maybe even more importantly, the foam lining radically changes the
characteristics of its sound. The whining of the fans themselves is much
less noticeable, and instead what you primarily hear is the air moving through
the inlets/outlets of the case, so the sound's quality is much less annoying.

I'll see if I can make a vid about it at some point to illustrate this a bit better.

And yeah, now on to HELIOS.
biggrin.gif
 