

Registered · 490 Posts
<div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_30#post_22104962" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>alpenwasser</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_30#post_22104962"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
Yes, ridiculous indeed. I noticed that when I started playing with my SR-2, the fan on<br>
that thing was very busy. Good thing I now have a watercooler on that chipset, but I<br>
didn't really feel like going W/C for this build. <img alt="wink.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/wink.gif"></div>
</div>
Yea, I wouldn't put it under water either <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"> But it would certainly make it an interesting build <img alt="rolleyes.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/rolleyes.gif"><div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_30#post_22104962" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>alpenwasser</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_30#post_22104962"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
Two reasons primarily, the first one being my familiarity with it. I know there are people<br>
who tend to go "Ah, Arch, bleeding edge, unstable!" and all that. But in all honesty, I've<br>
been using it as my daily driver on several machines for three years now, and I've had<br>
only one case of actual proper system breakage, and that was related to Gnome. And<br>
even then, I just did a clean reinstall and was back up and running within about two<br>
hours with all my settings and stuff from before.<br><br>
I know my way around Arch well enough to feel comfortable with it and be efficient-ish<br>
when needing to troubleshoot, which I can't say for Debian (or FreeBSD, which I actually<br>
also considered at some point and did play around with on another machine for a while),<br>
or other distros (I could learn, of course, but at the moment I'm a bit pressed for time with<br>
college and all, I need this thing up and running sooner rather than later).<br><br>
Secondly, ZFS support is very good on Arch, whereas I've read a few posts around some<br>
forums which said that ZFS under Debian-based distros is... hinky. I haven't personally<br>
tried it, so I can't speak from personal experience on that one though. I have been using<br>
ZFS on Arch on another machine for about nine months now and it's been working very<br>
well, so I thought I'd deploy it on this machine too.</div>
</div>
Knowing the way around things certainly is very important, so I get it <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"><br><br>
I never really delved into ZFS on Linux, so it's interesting to hear that it's buggy on Debian, but it's not surprising tbh.<br><br>
When considering ZFS I always thought that I'd simply go FreeNAS in a VM and be done with it. It looks very appealing to have it as a complete package with all the management in the fancy web interface, though I'm not sure how much use it would really be, as I don't mind doing things via the CLI; heck, most of the time I prefer it. Crap, writing this down made me want to reconsider this again; good thing I'm still quite a way away from migrating to ZFS <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"><div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_30#post_22104962" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>alpenwasser</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_30#post_22104962"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
So basically, "If it ain't broke, don't fix it." <img alt="biggrin.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/biggrin.gif"></div>
</div>
Absolutely!
 

Registered · 557 Posts
Discussion Starter #22
<div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22105059" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>Aximous</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22105059"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
Yea, I wouldn't put it under water either <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"> But it would certainly make it an interesting build <img alt="rolleyes.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/rolleyes.gif"></div>
</div>
<br>
Funny you should mention that, I did actually build a w/c server/multimedia rig/boinc machine<br>
last spring/summer. <img alt="biggrin.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/biggrin.gif"><br><br><img alt="aw--zeus--2013-06-23--02--complete-open.png" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--zeus--2013-06-23--02--complete-open.png"><br><br>
A summary of the build log can be found in <a class="bbcode_url" href="http://www.overclock.net/t/1405988/build-log-helios-caselabs-smh10-black-copper-evga-sr-2-geforce-titan-copper-tubes/0_100#post_20501187">this post</a>.<br><br><div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22105059" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>Aximous</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22105059"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
Knowing the way around things certainly is very important, so I get it <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"></div>
</div>
<br>
Yes indeed. <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"><br><div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22105059" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>Aximous</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22105059"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
I never really delved into ZFS on linux, so it's interesting to hear that it's buggy on Debian, but it's not surprising tbh.</div>
</div>
<br>
I can't really say too much about Debian, good or bad. I have a buddy who's been using<br>
it extensively and is very happy with it, but the ZFS thing seems to have been a bit neglected<br>
from what I've read.<br><div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22105059" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>Aximous</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22105059"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
When considering ZFS I always thought that I'd simply go FreeNAS in a VM and be done with it. It looks very appealing to have it as a complete package with all the management in the fancy web interface, though I'm not sure how much use it would really be, as I don't mind doing things via the CLI; heck, most of the time I prefer it. Crap, writing this down made me want to reconsider this again; good thing I'm still quite a way away from migrating to ZFS <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"></div>
</div>
<br>
Yeah, I get that, sometimes having a comfy web interface is rather neat. But like you, I'm quite<br>
fond of my CLI, and ZFS administration via the command line is actually pretty easy; the interface<br>
isn't very complex, and what I've seen of it so far was pretty logical, although I'm definitely no<br>
expert on ZFS (also, even if you don't use Arch, the Arch wiki article on ZFS is actually pretty good).<br><br>
What I'd still like to try out is forcefully removing a drive from a pool and doing a proper test run of<br>
replacing that disk and rebuilding the array (or resilvering the pool, as ZFS calls it), but I don't<br>
have that possibility at the moment because I need all my pools online and can't risk any issues<br>
right now.
 

Registered · 490 Posts
<div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_30#post_22105855" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>alpenwasser</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_30#post_22105855"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
Funny you should mention that, I did actually build a w/c server/multimedia rig/boinc machine<br>
last spring/summer. <img alt="biggrin.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/biggrin.gif"><br><br>
<a href="http://www.alpenwasser.net/images/w800/aw--zeus--2013-06-23--02--complete-open.png" target="_blank">http://www.alpenwasser.net/images/w800/aw--zeus--2013-06-23--02--complete-open.png</a><br><br>
A summary of the build log can be found in <a class="bbcode_url" href="http://www.overclock.net/t/1405988/build-log-helios-caselabs-smh10-black-copper-evga-sr-2-geforce-titan-copper-tubes/0_100#post_20501187">this post</a>.</div>
</div>
Very nice, love the mod for the radiator <img alt="thumb.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/thumb.gif"><div class="quote-container"><span>Quote:</span>
<div class="quote-block">I can't really say too much about Debian, good or bad. I have a buddy who's been using<br>
it extensively and is very happy with it, but the ZFS thing seems to have been a bit neglected<br>
from what I've read.</div>
</div>
I'm running some Debian VMs, and they are stable and run fine, so I can't complain, but I really don't like their slow update cycle; some packages on the stable channel are just too old. I know there's always unstable and testing, but that whole concept is just inconvenient tbh. I'm used to apt, and as I said they work fine, so I don't really have a reason to switch to something else, and as you said, if it ain't broke, don't fix it <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"><div class="quote-container"><span>Quote:</span>
<div class="quote-block">Yeah, I get that, sometimes having a comfy web interface is rather neat. But like you, I'm quite<br>
fond of my CLI, and ZFS administration via the command line is actually pretty easy, the interface<br>
isn't very complex, and what I've seen of it so far was pretty logical, although I'm definitely no<br>
expert on ZFS (also, even if you don't use Arch, the Arch wiki article on ZFS is actually pretty good).</div>
</div>
Yeah, I've heard the same; also, I guess it doesn't take much day-to-day maintenance, so the web interface could really be unnecessary.<br><br>
As for the Arch wiki, I really like it too; even though I'm not using Arch (I'm planning to, I just don't have the time to mess around nowadays), I've found solutions to quite a few problems there.<br><div class="quote-container"><span>Quote:</span>
<div class="quote-block">What I'd still like to try out is forcefully remove a drive from a pool and do a proper test run for<br>
replacing that disk and rebuilding the array (or, resilvering the pool, as ZFS calls it), but I don't<br>
have that possibility at the moment because I need all my pools online and can't risk any issues<br>
right now.</div>
</div>
<br>
I'd just throw in some old HDDs if you have some lying around, create a pool with them and mess around with that, or maybe use some virtual hard drives if it works with those.
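(ZFS does accept plain files as vdevs, by the way, so a practice run of exactly that kind of disk replacement can be done on a throwaway pool built from sparse files. A rough sketch; paths and sizes are just placeholders:)<br>
Code:
<pre>
# throwaway pool from sparse files, purely for practicing a disk swap
truncate -s 2G /tmp/vdev1.img /tmp/vdev2.img /tmp/vdev3.img
zpool create testpool raidz1 /tmp/vdev1.img /tmp/vdev2.img /tmp/vdev3.img

# simulate a dead disk and replace it
zpool offline testpool /tmp/vdev2.img
truncate -s 2G /tmp/vdev-new.img
zpool replace testpool /tmp/vdev2.img /tmp/vdev-new.img
zpool status testpool          # watch the resilver

# clean up afterwards
zpool destroy testpool
rm /tmp/vdev*.img
</pre>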
 

Registered · 557 Posts
Discussion Starter #24
<div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22106441" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>Aximous</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22106441"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
Very nice, love the mod for the radiator <img alt="thumb.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/thumb.gif"></div>
</div>
<br>
Thanks, I'm rather fond of the machine too. <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"><br><div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22106441" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>Aximous</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22106441"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
I'm running some Debian VMs, and they are stable and run fine, so I can't complain, but I really don't like their slow update cycle, some packages on the stable channel are just too old. I know there's always unstable and testing, but this whole concept is just inconvenient tbh. I'm used to apt and as I said they work fine so I don't really have a reason to switch to something else, and as you said, if it ain't broke, don't fix it <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"></div>
</div>
<br>
I must say that the rolling release thing is one of the aspects I really like about<br>
Arch. Not so much because of up-to-date packages (although that's nice too),<br>
but because I just never had to bother with major release updates and the hoopla<br>
that can sometimes go with those.<br><br>
There are updates which delve a bit deeper into the system, but not very often<br>
(for example, when they introduced signed packages, or when they switched to<br>
systemd), and in those cases, they had always prepared a very smooth update<br>
path with clear and helpful instructions, so for me it was pretty much smooth<br>
sailing even in those cases. <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"><br><div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22106441" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>Aximous</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22106441"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
Yea I heard the same, also I guess it doesn't take much day to day maintenance so the web interface could really be unnecessary.</div>
</div>
<br>
I suppose a proper web interface would have its upsides, but I'd say it should<br>
have more in it than just ZFS admin for it to make sense. Just for ZFS<br>
administration it's a bit overkill, since once you've created your storage pools<br>
you rarely touch the ZFS tools anymore, except to get some stats or kick off a<br>
scrub.<br><div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22106441" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>Aximous</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22106441"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
On Arch wiki, I really like it too, even though I'm not using Arch (I'm planning to, I just don't have the time to mess around nowadays), I found solutions to quite a few problems there.<br>
I'd just throw in some old HDDs if you have some lying around, create a pool with them and mess around with that, or maybe use some virtual hard drives if it works with those.</div>
</div>
<br>
Yeah, I know that problem; there just isn't enough time to do everything I'd<br>
like to do. As said above, I tinkered around with FreeBSD for a while, but in<br>
the end I just didn't have the time to really get to know the system well enough<br>
to feel comfortable actually using it in production. Besides, FreeBSD is not<br>
rolling release (although maybe I could use ArchBSD, but that's still a very<br>
small and young project).<br><br>
Originally I started out with Gentoo in 2004, then I took a break from Linux for<br>
a few years when I was in the army and got back into it around 2007 with Ubuntu,<br>
which I used until 2011 when I switched to Arch. I've been wanting to try Gentoo<br>
again for a while now, but just haven't had the time.<br><br>
Ah well, such is life. <img alt="wink.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/wink.gif">
 

Registered · 557 Posts
Discussion Starter #25
<b><span style="font-size:24px;">Storage and Networking Performance</span></b><br><br><br><br>
Beware: This post will be of little interest to those<br>
who are primarily in it for the physical side of<br>
building. Instead, this update will be about the performance<br>
and software side of things. So, lots of text, lots of<br>
numbers. <img alt="biggrin.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/biggrin.gif"><br><br>
These results are still somewhat preliminary since I'm not<br>
yet 100% sure if the hardware config will remain like this<br>
for an extended period of time (I really want to put another<br>
12 GB of RAM in there, for example, and am considering<br>
adding some SSD goodness to my ZFS pools), nor am I<br>
necessarily done with tuning software parameters, but it<br>
should give some idea of what performance I'm currently<br>
getting.<br><br>
As you may recall from my previous update, I'm running three<br>
VMs on this machine, two of which are pretty much always on<br>
(the media VM and my personal VM), and the third of which is<br>
only active when I'm pulling a backup of my dad's work<br>
machine (apollo-business).<br><br><br><br><b>NOTE</b>: I know there's lots of text and stuff in my<br>
screenshots and it may be a bit difficult to read. Click<br>
on any image to get the full-res version for improved<br>
legibility. <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"><br><br><br>
The storage setup has been revised somewhat since the last<br>
update. I now have a mirrored ZFS pool in ZEUS for backing<br>
up my dad's business data (so, in total his data is on six<br>
HDDs, including the one in his work machine). His data is<br>
pulled onto the apollo-business VM from his work machine,<br>
and then pulled onto ZEUS. The fact that neither the<br>
business VM nor ZEUS are online 24/7 (ZEUS is turned off<br>
physically most of the time) should provide some decent<br>
protection against most mishaps; the only thing I still<br>
need to implement is a proper off-site backup plan (which<br>
I will definitely do, in case of unforeseen disasters,<br>
break-ins/theft and so on).<br><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-04-26--01--apollo-zeus-storage.png" target="_blank"><img alt="aw--apollo--2014-04-26--01--apollo-zeus-storage.png" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-04-26--01--apollo-zeus-storage.png"></a><br><br><br><b><span style="font-size:16px;">The Plan</span></b><br><br>
For convenience's sake, I was planning on using NFS for<br>
sharing data between the server and its various clients<br>
on our network. Unfortunately, I was getting some rather<br>
disappointing benchmarking results initially, with only ~60<br>
MB/s to ~70 MB/s transfer speeds between machines.<br><br><br><b><span style="font-size:16px;">Tools</span></b><br><br>
I'm not really a storage benchmarking expert, and at the<br>
moment I definitely don't have the time to become one, so<br>
for benchmarking my storage I've used dd for the time<br>
being. It's easy to use and is pretty much standard for<br>
every Linux install. I thought about using other storage<br>
benchmarks like Bonnie++ and FIO, and at some point I might<br>
still do that, but for the time being dd will suffice for my<br>
purposes.<br><br>
For those not familiar with this: /dev/zero basically<br>
serves as a data source for lots of zeroes, while /dev/null is a<br>
sink into which you can write data without it being written<br>
to disk. So, if you want to do writing benchmarks to your<br>
storage, you can grab data from /dev/zero without needing to<br>
worry about a bottleneck on your data source side, and<br>
/dev/null is the equivalent when you wish to do reading<br>
benchmarks. To demonstrate this, I did a quick test below<br>
directly from /dev/zero into /dev/null.<br><br>
Basically. It's a bit of a simplification, but I hope it's<br>
somewhat understandable. <img alt="wink.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/wink.gif"><br><br><br><b><span style="font-size:16px;">Baseline</span></b><br><br><br>
Before doing storage benchmarks across the network, we<br>
should of course get a baseline for both the storage setup<br>
itself as well as the network.<br><br>
The base pipe from /dev/zero straight into /dev/null has a<br>
transfer speed of ~9 GB/s. Nothing unexpected, but it's a<br>
quick test to do and I was curious about this:<br><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-04-26--02--baseline--dev-zero-dev-null.png" target="_blank"><img alt="aw--apollo--2014-04-26--02--baseline--dev-zero-dev-null.png" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-04-26--02--baseline--dev-zero-dev-null.png"></a><br><br><br>
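For reference, the dd invocations behind these numbers are nothing fancy; something along these lines (block sizes, counts and the pool mountpoint are placeholders, not necessarily the exact values used here):<br>
Code:
<pre>
# raw pipe test (the ~9 GB/s baseline above)
dd if=/dev/zero of=/dev/null bs=1M count=10000

# write test onto a pool (hypothetical mountpoint), then read it back into the sink
dd if=/dev/zero of=/tank/benchtest/zeroes.bin bs=1M count=20000
dd if=/tank/benchtest/zeroes.bin of=/dev/null bs=1M
</pre>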
For measuring the network baseline I used iperf; here's a screencap from one<br>
of my test runs. The machine it's running on was my personal<br>
VM.<br><br>
Top to bottom:<br>
- my dad's Windows 7 machine<br>
- APOLLO host (Arch Linux)<br>
- HELIOS (also Windows 7 for the time being, sadly)<br>
- ZEUS (Arch Linux)<br>
- My Laptop via WiFi (Arch Linux)<br>
- APOLLO business VM (Arch Linux)<br>
- APOLLO media VM<br><br>
The bottom two results aren't really representative of<br>
typical performance; usually it's ~920 Mbit/s to ~940<br>
Mbit/s. But as with any setup, outliers happen.<br><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/2014-04-21--17-33-30--iperf.png" target="_blank"><img alt="2014-04-21--17-33-30--iperf.png" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/2014-04-21--17-33-30--iperf.png"></a><br><br><br>
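For anyone wanting to reproduce these numbers, a basic iperf run looks like this (hostname is just an example):<br>
Code:
<pre>
# on the machine acting as server
iperf -s

# on each client, e.g. a 30-second run against the server
iperf -c apollo -t 30
</pre>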
The networking performance is where I hit my first hiccup.<br>
I failed to specify to the VM which networking driver it was<br>
supposed to use, and the default one does not exactly have<br>
stellar performance. It was an easy fix though, and with the<br>
new settings I now get pretty much the same networking<br>
performance across all my machines (except the Windows ones,<br>
those are stuck at ~500 Mbit/s for some reason as you can<br>
see above, but that's not hugely important to me at the<br>
moment TBH).<br><br>
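In case anyone wonders what such a fix looks like in practice: on KVM/QEMU, for example (a sketch, not necessarily the exact setup here), it comes down to which NIC model the guest is given; as a fragment of a qemu command line (the netdev backend definition is omitted):<br>
Code:
<pre>
# emulated NIC models such as rtl8139/e1000 (typical defaults) are much slower...
-device e1000,netdev=net0
# ...than the paravirtualized virtio NIC, which has no trouble saturating gigabit
-device virtio-net-pci,netdev=net0
</pre>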
This is representative of what I can get most of the time:<br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-04-26--03--baseline--network.png" target="_blank"><img alt="aw--apollo--2014-04-26--03--baseline--network.png" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-04-26--03--baseline--network.png"></a><br><br><br>
I had a similar issue with the storage subsystem at first,<br>
the default parameters for caching were not very conducive<br>
to high performance and resulted in some pretty bad results:<br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-04-26--04--baseline--cache-writethrough.png" target="_blank"><img alt="aw--apollo--2014-04-26--04--baseline--cache-writethrough.png" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-04-26--04--baseline--cache-writethrough.png"></a><br><br><br>
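To give an idea of the parameter in question (the screenshots are labelled writethrough and none): on KVM/QEMU, again as a sketch rather than an exact config, it's the drive's cache mode; the disk path here is made up:<br>
Code:
<pre>
# host-side writethrough caching (the slower result above)
-drive file=/dev/zvol/tank/vm-disk,if=virtio,format=raw,cache=writethrough
# bypass the host page cache entirely (the result shown next)
-drive file=/dev/zvol/tank/vm-disk,if=virtio,format=raw,cache=none
</pre>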
Once I fixed that though, much better, and sufficient to<br>
saturate a gigabit networking connection.<br><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-04-26--05--baseline--cache-none.png" target="_blank"><img alt="aw--apollo--2014-04-26--05--baseline--cache-none.png" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-04-26--05--baseline--cache-none.png"></a><br><br><br><br><b><span style="font-size:16px;">Networking Benchmark Results</span></b><br><br>
Initially, I got only around 60 MB/s for NFS; after that the<br>
next plateau was somewhere between 75 MB/s and 80 MB/s, and<br>
lastly, this is the current situation. I must say I find the<br>
results to be slightly... peculiar. Pretty much everything<br>
I've ever read says that NFS should offer better performance<br>
than CIFS, and yet, for some reason, in many cases that was<br>
not the result I got.<br><br>
I'm not yet sure if I'll be going with NFS or CIFS in the<br>
end to be honest. On one hand, CIFS does give me better<br>
performance for the most part, but I have found NFS more<br>
convenient to configure and use, and NFS' performance at<br>
this point is decent enough for most of my purposes.<br><br>
In general, I find the NFS results just rather weird<br>
TBH. But they have been reproducible over different runs on<br>
several days, so for the time being I'll accept them as what<br>
I can get.<br><br><br>
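For context, the client-side mounts being compared are nothing exotic; roughly (server name, export and share paths are placeholders):<br>
Code:
<pre>
# NFS mount
mount -t nfs apollo:/srv/media /mnt/media

# CIFS/Samba mount of an equivalent share
mount -t cifs //apollo/media /mnt/media -o username=user
</pre>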
Anyway, behold the mother of all graphics! <img alt="biggrin.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/biggrin.gif"><br><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-04-26--06--network-benchmarks.png" target="_blank"><img alt="aw--apollo--2014-04-26--06--network-benchmarks.png" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-04-26--06--network-benchmarks.png"></a><br><br><br><b><span style="font-size:16px;">FTP</span></b><br><br>
As an alternative, I've also tried FTP, but the results were<br>
not really very satisfying. This is just a screenshot from<br>
one test run, but it is representative of the various other<br>
test runs I did:<br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/2014-04-19--19-59-00--ftp.png" target="_blank"><img alt="2014-04-19--19-59-00--ftp.png" class="bbcode_img" src="http://www.alpenwasser.net/images/2014-04-19--19-59-00--ftp.png"></a><br><br><br><b><span style="font-size:16px;">ZFS Compression</span></b><br>
Also, for those curious about ZFS' compression (which was<br>
usually disabled in the above tests because zeroes are very<br>
compressible and would therefore skew the benchmarks), I did<br>
a quick test to compare writing zeroes to a ZFS pool with<br>
and without compression.<br><br>
This is CPU utilization without compression (the grey bars<br>
are CPU time spent waiting for I/O, not actual work the CPU<br>
is doing):<br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/2014-04-21--19-41-25--zfs-nocompression-zeroes.png" target="_blank"><img alt="2014-04-21--19-41-25--zfs-nocompression-zeroes.png" class="bbcode_img" src="http://www.alpenwasser.net/images/2014-04-21--19-41-25--zfs-nocompression-zeroes.png"></a><br><br>
And this was the write speed for that specific test run:<br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/2014-04-21--19-45-01--zfs-nocompression-zeros-transfer-speed.png" target="_blank"><img alt="2014-04-21--19-45-01--zfs-nocompression-zeros-transfer-speed.png" class="bbcode_img" src="http://www.alpenwasser.net/images/2014-04-21--19-45-01--zfs-nocompression-zeros-transfer-speed.png"></a><br><br><br>
With lz4 compression enabled, the CPU does quite a bit more<br>
work, as expected (though it still seems that you don't<br>
really need a very powerful CPU to make use of this):<br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/2014-04-21--19-39-59--zfs-lz4-zeroes.png" target="_blank"><img alt="2014-04-21--19-39-59--zfs-lz4-zeroes.png" class="bbcode_img" src="http://www.alpenwasser.net/images/2014-04-21--19-39-59--zfs-lz4-zeroes.png"></a><br><br><br>
And the write speed goes up almost to a gigabyte per second,<br>
pretty neat if you ask me. <img alt="biggrin.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/biggrin.gif"><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/2014-04-21--19-40-47--zfs-lz4-zeroes-transfer-speed.png" target="_blank"><img alt="2014-04-21--19-40-47--zfs-lz4-zeroes-transfer-speed.png" class="bbcode_img" src="http://www.alpenwasser.net/images/2014-04-21--19-40-47--zfs-lz4-zeroes-transfer-speed.png"></a><br><br><br>
Side note: ZFS' lz4 compression is allegedly smart enough<br>
not to try to compress incompressible data, such as media<br>
files which are already compressed, which should prevent<br>
such writes from being slowed down. Very nice IMHO.<br><br><br><br>
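In case anyone wants to repeat that comparison: compression is a per-dataset property and only affects newly written data (pool/dataset names are placeholders):<br>
Code:
<pre>
# enable lz4 on a dataset
zfs set compression=lz4 tank/media
# turn it off again for the uncompressed runs
zfs set compression=off tank/media
# check what it actually achieves on real data
zfs get compression,compressratio tank/media
</pre>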
That's it for today. What's still left to do at this point<br>
is installing some sound-dampening materials (the rig is a<br>
bit on the loud side, despite being in its own room),<br>
and possibly upgrading to more RAM, the rest will probably<br>
stay like this for a while. If I really do upgrade to more<br>
RAM, I'll adjust the VMs accordingly and run the tests<br>
again, just to see if that really makes a difference. So far<br>
I have been unable to get better performance from my ZFS<br>
pools by allocating more RAM, or even running benches<br>
directly on the host machine with the full 12 GB RAM and<br>
eight cores/sixteen threads.<br><br><br>
Cheers,<br>
-aw
 

Registered · 150 Posts
I am very impressed with what you have got so far! I am actually going to be doing a server build here in the near future with an old Chieftec Dragon case. You may have mentioned this already and I just glanced over it, but what RAID/SAS card are you using in this build? I've been trying to find something that is semi-inexpensive to run for my build!<br><br>
Again, looks great so far!
 

Registered · 557 Posts
Discussion Starter #27
<div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22184869" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>waffles3680</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22184869"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
I am very impressed with what you have got so far! I am actually going to be doing a server build here in the near future with an old Chieftec Dragon case. You may have mentioned this already and I just glanced over it, but what RAID/SAS card are you using in this build? I've been trying to find something that is semi-inexpensive to run for my build!<br><br>
Again, looks great so far!</div>
</div>
<br><br>
Thanks for the compliments, appreciate it! <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"><br><br><br>
The controller cards are LSI 9211-8i. You can get them on eBay new-in-box for ~100 USD<br>
(retail price where I live is currently still ~350 USD, so I'd say that's a pretty good deal). If<br>
you're doing a ZFS build, you'll probably want to flash them to IT mode (for which I have<br>
a tutorial on another forum, might put it on this forum as well if needed). I've done that with<br>
all three cards and they run flawlessly so far.<br><br>
Alternatively, you could also look for IBM M1015 cards, which are actually LSI 9210-8i<br>
(the 9210-8i is an OEM model that was only sold as such and not directly by LSI to end<br>
consumers). Many people have had success crossflashing it to the 9211-8i firmware,<br>
though I know of at least one person for whom that didn't work and who needed to go for<br>
an older 9210-8i firmware.<br><br><br>
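For a very rough idea of what the IT-mode flash involves, the core steps on LSI's sas2flsh utility (DOS/EFI) look something like the following; the firmware/BIOS file names and SAS address below are placeholders, and the real procedure has a few more safety steps (most importantly noting down the card's original SAS address first), so follow a complete guide before touching anything:<br>
Code:
<pre>
sas2flsh -listall                          # confirm the card is detected
sas2flsh -o -e 6                           # erase the existing flash
sas2flsh -o -f 2118it.bin -b mptsas2.rom   # write the IT firmware (boot ROM optional)
sas2flsh -o -sasadd 500605bxxxxxxxxx       # restore the card's original SAS address
</pre>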
Let me know if you have any more questions, I'll be happy to answer any I can. <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif">
 

Registered · 557 Posts
Discussion Starter #28
<b><span style="font-size:24px;">Sound Dampening, Final Pics</span></b><br><br><br>
As mentioned previously, the 92 mm fans are rather noisy,<br>
but I didn't want to replace them. For one thing, I do<br>
actually need some powerful fans to move air from the HDD<br>
compartment into the M/B compartment; for another, I<br>
didn't feel like spending more money on expensive fans.<br><br>
For this purpose, I ordered some AcoustiPack foam in various<br>
thicknesses (12 mm, 7 mm and 4 mm) and lined parts of<br>
the case with them. I wasn't quite sure how well they<br>
would work, as my past experiences with acoustic dampening<br>
materials weren't all that impressive, but to my surprise,<br>
they're actually pretty damn effective.<br><br>
I have also put in another 12 GB of RAM. I was lucky enough<br>
to get six 2 GB sticks of the exact same RAM I already had<br>
for 70 USD (plus shipping and fees, but still a pretty good<br>
price IMHO) from eBay. 24 GB should easily suffice for my<br>
purposes.<br><br><br>
Lastly, I've repurposed the 2.5" drive cage from my Caselabs<br>
SMH10; cleaner than the rather improvised mount from before.<br><br><br><br>
For the time being, the build is now pretty much complete.<br><br><br><b><span style="font-size:16px;">Cost Analysis</span></b><br><br>
One of the original goals was to not have this become<br>
ridiculously expensive. Uhm, yeah, you know how these things<br>
usually go. <img alt="rolleyes.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/rolleyes.gif"><br><br>
Total system cost: ~5,000 USD<br>
of which were HDDs: ~2,500 USD<br><br>
My share of the total cost is ~42%, the remainder was on my<br>
dad, which is pretty fair I think. In the long run, my share<br>
will probably rise as I'll most likely be the one paying for<br>
most future storage expansions (at the moment I've paid for<br>
~54% of the storage cost, and ~31% of the remaining<br>
components).<br><br>
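For those checking the math: 0.54 × ~2,500 USD (storage) + 0.31 × ~2,500 USD (the rest) ≈ 1,350 + 775 ≈ 2,125 USD, which is indeed ~42% of the ~5,000 USD total, so the shares add up.<br><br>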
One thing to keep in mind though is that some of these costs<br>
go back a while as not all HDDs were bought for this server<br>
but have been migrated into it from other machines. So the<br>
actual project costs were about 1,300 USD lower.<br><br>
Overall I'm still pretty happy with the price/performance<br>
ratio. There aren't really that many areas where I could<br>
have saved a lot of money without also taking noticeable<br>
hits in performance or features.<br><br>
I could have gone with a single-socket motherboard, or a<br>
dual socket one with fewer features (say, fewer onboard<br>
SAS/SATA ports as I'm not using nearly all of the ones this<br>
one has due to the 2 TB disk limit), but most of the<br>
features this one has I wouldn't want to miss TBH (the four<br>
LAN ports are very handy, and IPMI is just freaking<br>
awesome). And let's be honest: A dual-socket board just<br>
looks freaking awesome (OK, I'll concede that that's not the<br>
best argument, but still, it does!). <img alt="biggrin.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/biggrin.gif"><br><br>
Other than that, I could have gone with some cheaper CPU<br>
coolers as the 40 W CPUs (btw., <b>core voltage is ~0.9 V</b> <img alt="biggrin.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/biggrin.gif">)<br>
don't really require much in that area, but the rest is<br>
pretty much what I <span style="text-decoration:line-through;">want</span> need for an acceptable price.<br><br><br>
Anyway, enough blabbering:<br><br><br><b><span style="font-size:16px;">Final Pics</span></b><br><br>
So, some final pics (I finally managed to acquire our DSLR<br>
for these):<br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--01--acoustifoam-front.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--01--acoustifoam-front.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--01--acoustifoam-front.jpeg"></a><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--02--acoustifoam-side-panel.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--02--acoustifoam-side-panel.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--02--acoustifoam-side-panel.jpeg"></a><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--03--outside.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--03--outside.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--03--outside.jpeg"></a><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--04--open.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--04--open.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--04--open.jpeg"></a><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--05--open.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--05--open.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--05--open.jpeg"></a><br><br><br>
That Caselabs drive cage I mentioned. The top drive is the<br>
WDC VelociRaptor.<br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--06--2.5-inch-cage.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--06--2.5-inch-cage.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--06--2.5-inch-cage.jpeg"></a><br><br><br>
And some more cable shots, because why not.<br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--07--cables.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--07--cables.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--07--cables.jpeg"></a><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--08--cables.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--08--cables.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--08--cables.jpeg"></a><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--09--cables.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--09--cables.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--09--cables.jpeg"></a><br><br><br>
Looks much better with all RAM slots filled IMHO. <img alt="biggrin.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/biggrin.gif"><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--10--cables-and-ram.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--10--cables-and-ram.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--10--cables-and-ram.jpeg"></a><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--11--cables.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--11--cables.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--11--cables.jpeg"></a><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--12--chipset-fan.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--12--chipset-fan.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--12--chipset-fan.jpeg"></a><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--13--cpu-coolers.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--13--cpu-coolers.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--13--cpu-coolers.jpeg"></a><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--14--ram.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--14--ram.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--14--ram.jpeg"></a><br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--15--back-side.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--15--back-side.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--15--back-side.jpeg"></a><br><br><br>
It's kinda funny: Considering how large the M/B compartment<br>
actually is, it's pretty packed now with everything that's<br>
in there. The impression is even stronger in person than on<br>
the pics.<br><br><span style="font-size:10px;"><span style="color:#915645;">(click image for full res)</span></span><br><a class="bbcode_url" href="http://www.alpenwasser.net/images/aw--apollo--2014-05-10--16--front-side.jpeg" target="_blank"><img alt="aw--apollo--2014-05-10--16--front-side.jpeg" class="bbcode_img" src="http://www.alpenwasser.net/images/w800/aw--apollo--2014-05-10--16--front-side.jpeg"></a><br><br><br><br>
Thanks for tagging along everyone, and until next time! <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif">
 

Registered · 450 Posts
Nice to see this all finished. For what you have in there it is actually quite tidy, and I like how there is still plenty of space for a few more drives. How quiet is this thing? Because with those fans and that insulation it will barely sound like it is even on.<br><br>
Now to finish your other project lol.
 

Registered · 557 Posts
Discussion Starter #30
<div class="quote-container" data-huddler-embed="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22246365" data-huddler-embed-placeholder="false"><span>Quote:</span>
<div class="quote-block">Originally Posted by <strong>Jakewat</strong> <a href="/t/1442386/apollo-2cpu-lga1366-server-inwin-pp689-24-disks-capacity-by-alpenwasser/0_50#post_22246365"><img alt="View Post" class="inlineimg" src="/img/forum/go_quote.gif"></a><br><br>
Nice to see this all finished. For what you have in there it is actually quite tidy, and I like how there is still plenty of space for a few more drives. How quiet is this thing? Because with those fans and that insulation it will barely sound like it is even on.<br><br>
Now to finish your other project lol.</div>
</div>
<br>
Thanks! <img alt="smile.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/smile.gif"><br><br>
Yes, having room for additional HDDs was part of the concept, not needing<br>
to buy another machine when we need more storage. That's also one of the<br>
reasons I've already bought all necessary controllers (aside from the storage<br>
topology and reducing single points of failure etc.).<br><br>
Quiet isn't really the word I'd use to describe it to be honest. It's not very loud<br>
(anymore), but it's still too noisy to have under the table in an office (at least<br>
for my tastes). But, when I close the door to the room in which it is placed it<br>
is no longer audible with the foam lining, and I can work in the room (it's our<br>
apartment workshop) without being bothered by its noise.<br><br>
Also, maybe even more importantly, the foam lining radically changes the<br>
characteristics of its sound. The whining of the fans themselves is much<br>
less noticeable, and instead what you primarily hear is the air moving through<br>
the inlets/outlets of the case, so the sound's quality is much less annoying.<br><br>
I'll see if I can make a vid about it at some point to illustrate this a bit better.<br><br><br>
And yeah, now on to HELIOS. <img alt="biggrin.gif" class="bbcode_smiley" src="http://files.overclock.net/images/smilies/biggrin.gif">
 