[Project Log] Server Cluster | SAN and Virtualization Hosts | HA and Redundancy - Overclock.net

post #1 of 15 - 09-05-2013, 09:06 AM - tomaskir (Thread Starter)
Hey guys!

Since I need to build a new test server cluster for our company's virtualization farm, I decided to share some details and make a project/build log. Some of you might find this interesting. :)

Objective:
Build an HA virtualization cluster and a SAN on the cheap to add to our existing infrastructure. If it works and passes testing and performance requirements, I will migrate our existing servers to this cluster and add more power and more hosts; but that comes later.

So when I say on the cheap, I mean that the whole cluster with all the servers will cost less than some of the machines here. I will not use any brand-name servers or equipment; the whole cluster will be assembled from parts by hand.

What the cluster consists of:
  1. Actual server cluster:
    • 2x Virtualization host servers
  2. SAN (Storage Area Network):
    • 1x storage server
    • 1x storage backup


For actual posts, follow these links:
1. Part-List
2. Boxes and Parts
3. To 10 GbE or not to 10 GbE.
4. PSUs and rack rails.
5. Delays
6. Building vHost 1.
7. Networking and The Rack.
8. Building the rest - coming soon

post #2 of 15 - 09-05-2013, 09:11 AM - tomaskir (Thread Starter)
Part-List:

Virtualization hosts:
MB: Supermicro X9DRL-3F (link)
CPU: Intel Xeon E5-2620 (link)
RAM: 32 GB DDR3-1600 ECC Reg CL11
Additional Intel Quad-Port Gigabit NIC

Chassis:
Supermicro 813MTQ-600CB 1U (link)
600 W non-redundant PSU


Main Storage:
MB: Supermicro X9SCA-F (link)
CPU: Intel Xeon E3-1225v2 (link)
RAM: 16 GB DDR3-1600 ECC CL11
Additional Intel Quad-Port Gigabit NIC

Chassis:
Supermicro 826E16-R1200LPB 2U (link)
12-port SAS expander included
Fully redundant 2x 1200 W PSUs

RAID:
Adaptec 6405 - SAS II - 512 MB cache
Flash backup module for cache
Connected to the 12-port expander

HDD:
2x 250 GB WD RE4 - RAID 1
2x 500 GB WD RE4 - RAID 1
5x 1 TB WD RE4 - RAID 5
1x Old Intel X25-M SSD - for testing

Total useful capacity: 4.75 TB


Backup Storage:
Older Synology RS411
Budget increases:
Synology RS812+

2x 2 TB WD RE4 - RAID 1
2x 2 TB WD RE4 - RAID 1

Total backup capacity: 4 TB
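
For anyone double-checking my capacity math, here is a quick sketch of the RAID arithmetic (a RAID 1 mirror gives the size of one disk, RAID 5 gives n-1 disks):

Code:
# RAID 1 mirror: usable space = one disk; RAID 5: (n - 1) disks.
def raid1(disk_gb):
    return disk_gb

def raid5(n_disks, disk_gb):
    return (n_disks - 1) * disk_gb

main = raid1(250) + raid1(500) + raid5(5, 1000)  # 250 + 500 + 4000 GB
backup = raid1(2000) + raid1(2000)               # 2000 + 2000 GB

print(main / 1000.0, "TB")    # 4.75 TB on the main storage
print(backup / 1000.0, "TB")  # 4.0 TB on the backup NAS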


Notes:
There will be 2 virtualization hosts with no local storage; they will boot and operate off the storage server. The onboard NICs will serve as iSCSI boot devices, while the additional 4-port Intel NICs will be used for VM traffic and for iSCSI to the storage server for the VM datastores.
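
To make that concrete, here is a rough sketch of the per-host port layout I have in mind; the port names and the exact split across the quad NIC are just my working assumptions at this point, not final:

Code:
from collections import Counter

# Planned port roles per vHost. The port names and the 2/2 split on
# the quad NIC are illustrative assumptions, not actual device IDs.
ports = {
    "onboard0": "iSCSI boot",
    "onboard1": "iSCSI boot",
    "quad0":    "VM traffic",
    "quad1":    "VM traffic",
    "quad2":    "iSCSI datastores",
    "quad3":    "iSCSI datastores",
}

# Each port is 1 Gbit/s, so the port count per role is also the
# bandwidth available to that role.
for role, gbit in Counter(ports.values()).items():
    print("%s: %d Gbit/s" % (role, gbit))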

For the Main Storage server, the Supermicro chassis provides a nice 12-port SAS expander, so all we need is a 4-port SAS controller connecting to the expander with a single SAS cable. 12 HDD bays in 2U, redundant power, good performance, and all much cheaper than buying a "label" SAN appliance.

For backup storage I went with an older cheap Synology NAS that we already had in inventory, since it will just sit idle 98% of the time.
See post 5 (Delays) for the backup storage upgrades.

post #3 of 15 - 09-06-2013, 03:15 AM - tomaskir (Thread Starter)
Boxes and Parts

So a few boxes arrived today; I still don't have all the parts, though.

2x 1U chassis for the vHosts, in boxes:


And unpacked and stacked, to wait for their parts:


Most of the parts for the Main Storage server, chassis already unpacked:


The redundant PSUs and fans for the 2U storage chassis:


And what makes a storage server a storage server: the RAID controller with its flash cache backup module:

post #4 of 15 - 09-06-2013, 03:54 AM - tomaskir (Thread Starter)
To 10 GbE or not to 10 GbE.

So while waiting for parts, I did some research today. Basically, I really need more network throughput.

All of the MBs used have 2x 1 GbE ports, but that is not enough. Each vHost will hold 10-15 virtual machines, and each will use quite a bit of network bandwidth. You can bond the two ports into an aggregation group (getting a 2 Gbit/s pipe), but that still will not be enough. It would suffice for the virtual machines alone, but since all the storage is attached over the network, all the disk traffic needs to pass over the network as well. I'm hoping the SAN will be able to put out around 300 MByte/s, and that needs to reach the vHosts without being slowed down by the traffic from the virtual machines.

So I looked at the cheapest way to get the individual hosts onto 10 GbE networking. And man, it's not cheap. Here is the cheapest solution I found:
1x 8-port 10 GbE switch - link - $932
3x 1-port 10 GbE NICs - even going off eBay - 3x $150

So for that price, I could have a whole additional 1U vHost. Another option would be to buy InfiniBand cards/switches off eBay; these are not really 10 GbE, but closer to 7.5 Gbit/s of usable throughput. Used InfiniBand equipment is really cheap these days. The problem is that not all InfiniBand cards are supported by VMware ESXi (the hypervisor we will be running), and even then, iSCSI needs to run over an IP network, which InfiniBand natively is not. There is of course IPoIB (IP over InfiniBand), but that just adds complexity and potential issues.

Not to mention, I would have to buy used equipment off eBay, without proper long-term warranties. So in the end, quad-port GbE NICs are what I will be going with.

Each server will have 2 onboard GbE ports and 4 additional GbE ports on the quad NIC. This gives me a max throughput of 6 Gbit/s per server, which should be enough, keeps cost down, and hopefully works as required.
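
A quick back-of-the-envelope check on those numbers (plain unit conversion, ignoring protocol overhead):

Code:
# 300 MByte/s of storage traffic expressed in Gbit/s, against the
# 6 Gbit/s total the six 1 Gbit/s ports provide.
storage_gbit = 300 * 8 / 1000.0   # = 2.4 Gbit/s for iSCSI traffic
total_gbit = 6.0                  # 2 onboard + 4 quad-port ports

print("Storage needs %.1f Gbit/s" % storage_gbit)                  # 2.4
print("Leaves %.1f Gbit/s for VMs" % (total_gbit - storage_gbit))  # 3.6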

post #5 of 15 - 09-06-2013, 07:25 AM - stryfetew
I can dig it!! Subbed!!

post #6 of 15 - 09-07-2013, 10:03 AM - tomaskir (Thread Starter)
Had some time today, and while still waiting for parts, I started on some case and cabling work.

Supermicro packs quite a few accessories with the cases, pretty much everything you could possibly need.


One of the things they give you is an optional USB/serial panel, which would expose a serial console over an RS232 COM port on the front of the chassis. Since I don't need/want USB or serial exposed, I won't mount it.


Here is how the chassis looks when open:


The first thing I noticed is that the cabling on the PSU is quite bad. Tangled wires everywhere...


Three screws later, the PSU is out.


I considered sleeving the cables, but since this is a server, that would be quite a waste of time, and a possible issue if I needed to RMA something. So I settled for re-zip-tying the cables. Here is one PSU re-zip-tied, and another in "factory default". It might not look like much of a change in the pictures, but it makes quite a difference when actually working with the cables. Since this is a 1U chassis, there will be no room to spare later on.


Another thing I managed to do today was add the rack-mount rails. These go on the sides of the chassis, and the whole chassis then slides into rails in the rack. This makes it really easy to pull a server out of the rack if you ever need to. All accessories and screws were included, so a few screws later and everything was taken care of.



So that's where I'm finishing today: PSU cables re-done and rails mounted.

post #7 of 15 - 09-13-2013, 10:34 AM - tomaskir (Thread Starter)
Not much of an update today, but a few more parts arrived:

Got all the parts needed for one of the vHosts, and rack rails for both the vHost servers.
Also PCI-E risers, which are needed to mount the NICs into the 1U cases.

Still missing parts for the other vHost, the 4-port NICs, and a few parts for the main storage server.
I am off at a conference next week, so the next update will come in a week. Hopefully all the parts will be here by then.

I also got a slight budget increase for the project, so I will be upgrading the backup NAS to a new Synology RS812+, which will have about triple the performance of the older RS411 I was going to use.

This means it would actually be doable to cluster the NASes for some of the volumes, for full fault tolerance, without losing much performance. The RS812+ peaks at 200 MByte/s, so by giving up about 30-40% of storage performance, I can have a fully fault-tolerant datastore for the volumes I really need to keep highly available.
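
Rough numbers on what that trade-off looks like (the 30-40% hit is my own estimate of the synchronous replication overhead, not a measured figure):

Code:
# RS812+ peak throughput with my estimated replication penalty applied.
peak = 200  # MByte/s
for penalty in (0.30, 0.40):
    print("%d%% hit: ~%d MByte/s" % (penalty * 100, peak * (1 - penalty)))
# Roughly 120-140 MByte/s left for the fully fault-tolerant volumes.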

post #10 of 15 - 09-27-2013, 03:13 PM - tomaskir (Thread Starter)
Got the rest of the parts I needed, so let's build some servers.

Let's start with vHost 1: a six-core Xeon CPU (+ HT) with 32 GB of registered ECC RAM:


The Intel 1U heatsink I ordered comes with paste pre-applied (not the crappy paste you get on desktop coolers). Since this is a server, I will just rely on Intel here; I can always re-paste later if needed.


Here's the MB with the CPU, heatsink and RAM. The heatsink is held on by just 4 Phillips screws.


Normal I/O shields don't work in these 1U chassis, and the one pre-bundled with the chassis didn't fit my MB. An optional $4 shield for my MB series was available, and it works great.


You will notice that the mounting for the motherboard is not pre-installed, since server MBs come in lots of different form factors. Mine is just E-ATX, and the mounts are included with the chassis. They go in from the bottom and are then secured from the top.


A nice tool was included with the chassis to get the work done.


I had to pull out the fans to be able to put the MB in. There really is not much room in this chassis...


The cabling is just a matter of connecting the 3 cables powering the MB and getting them out of the way of the airflow.


Connect the front panel cable and the chassis intrusion sensor, and that is it for today; one of the vHosts is finished.


One more pic just because...
