
[Build Log] FINAL Bachelor Splurge - New Server Rack Build - Goodbye $$$

post #1 of 91
Thread Starter 
I'm getting married next year, so this is my last opportunity to spend a good chunk of money on my "toys" before I need wifey approval for everything (cue the tears and boos). I've been saving up for a network overhaul for some time now (since before I proposed thumb.gif).


My plan is to build a 3-node VMware vSAN cluster for the purpose of having full redundancy for all my VMs and Docker appdata. I want to be able to take a single server offline while keeping the Linux VM running my media dockers (mainly Plex) online, as well as the rest of my VMs (Windows AD, DNS, etc.). I also just want to generally play around with VMware clustering, HA, and vMotion for my own personal knowledge, as I'd like to implement some of this at work next year.
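
Once the cluster is up I'll want to verify that HA, DRS, and vSAN actually got enabled. For anyone curious, something like this pyVmomi sketch would do it (just a sketch; the vCenter hostname, credentials, and cert handling are lab placeholders, not my actual environment):

```python
# Hedged sketch: confirm HA/DRS/vSAN flags on each cluster via pyVmomi.
# Hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip cert validation
si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    cfg = cluster.configurationEx
    print(cluster.name,
          "HA:", cfg.dasConfig.enabled,
          "DRS:", cfg.drsConfig.enabled,
          "vSAN:", cfg.vsanConfigInfo.enabled)
view.Destroy()
Disconnect(si)
```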

So without further ado, this is the hardware that will be going into a new server rack that will sit in my home office (I will build out a full network diagram once I have everything in place and will update you all with that in a subsequent post).

** Check the second post of this thread, where I've posted what my current network looks like as it stands, just so you can see where I'm coming from. **

Rack: 22U Linier Server Rack



I will be drilling custom holes in the back door of this rack to install 3 x 120mm exhaust fans. I also plan to (at least attempt to) install some sound dampening materials on the back, side, and top panels of the rack.


Equipment Going in the Rack (from bottom to top)

Cooling: AC Infinity CLOUDPLATE T7 - [2U]

UPS: CyberPower 2U 900W rack-mountable UPS - [2U]

ESXi Node #1 & #2 (Identical Nodes): Hyperconverged Computing/Storage Nodes - [2U each]


ESXi Node #3: vSAN Storage Only Node + UnRAID VM - [4U]


ESXi Node #4: vSAN Storage Only Node + UnRAID VM - [3U]


1GbE Switch: Dell X1052 Smart Managed Switch (48 x 1GbE, 4 x 10GbE SFP+ ports) - [1U]

10GbE Switch: Dell X4012 Smart Managed Switch (12 x SFP+ ports) - [1U]

pfSense Box: Router and Firewall - [1U]


24-Port Patch Panel: Cable Matters Rackmount or Wallmount 24-Port Cat6 RJ45 Patch Panel - [1U]
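
Quick back-of-the-napkin math to confirm all of the above actually fits in the 22U rack (U heights taken straight from the list):

```python
# Tally rack units against the 22U Linier rack.
units = {
    "CLOUDPLATE T7":  2,
    "CyberPower UPS": 2,
    "ESXi Node #1":   2,
    "ESXi Node #2":   2,
    "ESXi Node #3":   4,
    "ESXi Node #4":   3,
    "Dell X1052":     1,
    "Dell X4012":     1,
    "pfSense box":    1,
    "Patch panel":    1,
}
used = sum(units.values())
print(f"used: {used}U of 22U, free: {22 - used}U")  # used: 19U, free: 3U
```

So 19U used, with 3U to spare for future toys.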


I will of course post pics once equipment starts arriving. Fun times ahead smile.gif.
Edited by PuffinMyLye - 7/22/16 at 11:09am
post #2 of 91
Thread Starter 
As promised, this is what my network looks like at this moment before this overhaul.


Site #1 Network (My Condo)

Network Devices:
  • pfSense Firewall
  • Netgear ProSafe Plus switches (VLAN tag support)
  • UniFi AC Access Point
  • Various Media Streaming Devices (Chromecasts, Tivo BOLT, Samsung SmartTV)

Server:

56TB UnRAID Server running an assortment of dockers (Apache, Plex, PlexPy, PlexRequests, Sonarr, CP, NZBGet, Hydra, Madsonic, Muximux, UniFi, etc.). I also run a bunch of test VMs inside the built-in KVM hypervisor.
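
With that many containers running, a quick status sweep is handy. A minimal sketch using the Docker SDK for Python (run on the UnRAID box itself; nothing here is specific to my setup):

```python
# Minimal sketch: list all containers and flag anything not running,
# using the Docker SDK for Python (docker-py).
import docker

client = docker.from_env()
for c in client.containers.list(all=True):
    health = c.attrs["State"].get("Health", {}).get("Status", "n/a")
    print(f"{c.name:15s} status={c.status:10s} health={health}")
    if c.status != "running":
        print(f"  -> {c.name} is down; fix it before the Plex users notice")
```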

PC:

Main everyday workstation. It was built for gaming but I never game on it. Mainly I just use it for web browsing and as a terminal for configuring/testing things on my UnRAID server.


Site #2 Network (Parents House)

Network Devices:
  • pfSense Firewall
  • Netgear ProSafe Plus switch
  • Linksys E1000 AP (DD-WRT)

Server:

56TB UnRAID Server (my Site #1 server backs up to this over site-to-site VPN nightly)
Edited by PuffinMyLye - 3/24/16 at 11:21am
post #3 of 91
Thread Starter 
post #4 of 91
Thread Starter 
Completed build here.


Edited by PuffinMyLye - 5/24/16 at 11:29am
post #5 of 91
Subbed! wheee.gif
     
post #6 of 91
Congrats on the marriage. I got married this past fall, but wifey approval hasn't been enabled yet. We are expecting our first child come July, so I imagine things will change then. Likewise, I am in the process of overhauling my setup (a dual Xeon E5-2670 + 192GB RAM + 12 x 1TB SSD build for my hypervisor, with my current box with dual L5640s and 96GB RAM doing storage only, running CentOS 7 + ZFSonLinux).

Looks like an exciting build, and something I considered doing (though with KVM+Ceph instead of ESXi and vSAN). In the end, I decided I didn't want to have to maintain something that complicated (I mean, it's not that complicated, but still) and settled on a simple dedicated hypervisor node.

Couple questions...why both the Dell X1052 and X4012? I love the X4012 and all, but at $1,400, why the need? You will have 3 nodes, and the X1052 has 4 SFP+ ports. I could see a dedicated connection for the vSAN and a separate one for VM traffic, or even LACP. Have you thought about doing 2 x Dell X1052, which would still give you switch redundancy and 2 SFP+ ports per node, plus SFP+ to stack the two switches?
I'm probably going to buy an X1052 myself, since I plan to do 10Gbit with both of my boxes. I will also do 4 x GbE NICs in LACP for VM traffic, since I see no need for dedicated 10Gbit for my ~25 VMs. My storage/NFS shares, on the other hand: 10Gbit, please!

Also, if you don't mind my asking, how much are those HGST SLC drives costing you? From what I can tell, they are over $1k each. Great performance and endurance, no doubt, but realistically, wouldn't something like an Intel DC S3710 do you just fine over (a minimum of) 5 years? With my new setup, I'm going to be adding a SLOG and L2ARC to my storage box (18 x 5TB drives, 3 x 6-drive raidz2 vdevs for 60TB usable, with enough space to add a 4th 6-drive vdev), and I'm planning on a 200GB S3710 or a 400GB S3610 for that SLOG drive, and probably a 400GB S3610 for my L2ARC drive. On my hypervisor box, I'll probably run hardware RAID with up to 12 x 1TB SSDs (starting with 6) in a RAID 10 configuration. I will have 12 2.5" bays left in that box, so I'm considering 12 x 3TB 5400RPM spinners as secondary storage for my VMs (i.e., torrent/seedbox location, SABnzbd download location, Splunk indexes, etc.). Twelve 5400RPM drives should yield respectable sequential r/w, and I figure at least 800 IOPS with 18TB of space. I just need to quit spending my hobby money on firearms and finish my new home build (though I'm in love with my new Vz.58 wink.gif).
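
For anyone following the capacity math on that pool, it works out like this (raw TB, before ZFS metadata and slop space, so real-world usable will be a bit less):

```python
# Rough capacity math for the ZFS layout above (raw TB, pre-overhead).
drive_tb, per_vdev, parity, vdevs = 5, 6, 2, 3

usable = vdevs * (per_vdev - parity) * drive_tb
print(usable)                                        # 60 TB, as quoted
print((vdevs + 1) * (per_vdev - parity) * drive_tb)  # 80 TB with a 4th vdev

# And the secondary VM storage: 12 x 3TB spinners in RAID 10.
print(12 * 3 // 2)                                   # 18 TB usable
```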

Anyway...sorry to talk so much about my in-progress setup. What you have planned will be really nice!
Edited by tycoonbob - 3/24/16 at 7:56pm
post #7 of 91
Also curious about the Dell Ethernet switch selections. Personally, I'd look at a used Cisco Nexus 5000 - they can be had for a pittance on eBay and offer a ton more 10Gbit ports, plus they can do FCoE if you ever wanted to go down that path. That, and they expose you to the Nexus operating environment if you ever wanted to get experience there.
post #8 of 91
Can't wait to see how this build log turns out..
I am new to servers so all this is Chinese to me, but I have to ask: with all this storage, RAM, etc., what exactly are you guys using it all for??

I mean, there's thousands of dollars in parts here.
post #9 of 91
Thread Starter 
Quote:
Originally Posted by tycoonbob View Post

Congrats on the marriage. I got married this past fall, but wifey approval hasn't been enabled yet. We are expecting our first child come July, so I imagine things will change then. Likewise, I am in the process of overhauling my setup (a dual Xeon E5-2670 + 192GB RAM + 12 x 1TB SSD build for my hypervisor, with my current box with dual L5640s and 96GB RAM doing storage only, running CentOS 7 + ZFSonLinux).

Looks like an exciting build, and something I considered doing (though with KVM+Ceph instead of ESXi and vSAN). In the end, I decided I didn't want to have to maintain something that complicated (I mean, it's not that complicated, but still) and settled on a simple dedicated hypervisor node.

Couple questions...why both the Dell X1052 and X4012? I love the X4012 and all, but at $1,400, why the need? You will have 3 nodes, and the X1052 has 4 SFP+ ports. I could see a dedicated connection for the vSAN and a separate one for VM traffic, or even LACP. Have you thought about doing 2 x Dell X1052, which would still give you switch redundancy and 2 SFP+ ports per node, plus SFP+ to stack the two switches?
I'm probably going to buy an X1052 myself, since I plan to do 10Gbit with both of my boxes. I will also do 4 x GbE NICs in LACP for VM traffic, since I see no need for dedicated 10Gbit for my ~25 VMs. My storage/NFS shares, on the other hand: 10Gbit, please!

Also, if you don't mind my asking, how much are those HGST SLC drives costing you? From what I can tell, they are over $1k each. Great performance and endurance, no doubt, but realistically, wouldn't something like an Intel DC S3710 do you just fine over (a minimum of) 5 years? With my new setup, I'm going to be adding a SLOG and L2ARC to my storage box (18 x 5TB drives, 3 x 6-drive raidz2 vdevs for 60TB usable, with enough space to add a 4th 6-drive vdev), and I'm planning on a 200GB S3710 or a 400GB S3610 for that SLOG drive, and probably a 400GB S3610 for my L2ARC drive. On my hypervisor box, I'll probably run hardware RAID with up to 12 x 1TB SSDs (starting with 6) in a RAID 10 configuration. I will have 12 2.5" bays left in that box, so I'm considering 12 x 3TB 5400RPM spinners as secondary storage for my VMs (i.e., torrent/seedbox location, SABnzbd download location, Splunk indexes, etc.). Twelve 5400RPM drives should yield respectable sequential r/w, and I figure at least 800 IOPS with 18TB of space. I just need to quit spending my hobby money on firearms and finish my new home build (though I'm in love with my new Vz.58 wink.gif).

Anyway...sorry to talk so much about my in-progress setup. What you have planned will be really nice!

What's up Bob! Long time no see. Always happy to shoot the breeze with you about technology, so don't hold back. I've watched your tech shift over the years: you used to be an all-Windows guy and then moved a lot towards Linux. I use a pretty even mix of Windows and Linux servers, so I'm always interested in your thoughts.

As for the X4012, I'll admit that's the one item I have YET to buy. I was originally thinking I'd get it to have dual 10Gb links to each node (so that's 6 right there) along with dual links to my PC (so that makes 8). Then I'd have 4 left over for future expansion. However, I'm having second thoughts, since the cost is very high and I really can get away with single 10Gb links to each node along with dual 1Gb links for backup redundancy. I never really thought about getting a second X1052 because 48 gigabit ports is already well more than I need (I'm not even using 24 at this time), but it probably does make more sense given the redundancy it would add. If these switches were stackable I'd probably jump on that idea right now, but I'm going to think more about it this weekend before I make that decision.
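
For anyone keeping score, the SFP+ port math on the original dual-link plan (my own figuring, nothing official):

```python
# Port budget for the X4012 (12 SFP+ ports) under the dual-link plan.
sfp_ports = 12
node_links = 3 * 2   # dual 10Gb links to each of the 3 nodes
pc_links = 1 * 2     # dual 10Gb links to the workstation
used = node_links + pc_links
print(f"used={used}, spare={sfp_ports - used}")  # used=8, spare=4
```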

With regard to the HGST SLC drives, you won't believe the deal I got on them. I bought them used off eBay last week for $99 EACH!! Just an absolutely insane deal, and I've confirmed they don't have much data written to them (all under 100TB, which is NOTHING considering what they're rated for).
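
To put that sub-100TB figure in perspective, here's some illustrative endurance math. The capacity and DWPD numbers below are assumptions for the example, not pulled from HGST's actual spec sheet:

```python
# Illustrative SLC endurance math; 200GB capacity and 25 DWPD are
# assumed example values, not the real HGST rating.
capacity_tb, dwpd, years = 0.2, 25, 5

rated_tb = capacity_tb * dwpd * 365 * years
print(rated_tb)              # 9125 TB (~9 PB) of rated writes
print(100 / rated_tb * 100)  # 100 TB written is only ~1% of that
```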

I considered going with a dedicated shared-storage appliance (using ZFS with a SLOG) instead of vSAN, but the more I thought about it, the more I realized the whole reason I started down this new build path was to add redundancy to my VM/Docker storage so that I could take a node offline and nothing would go down. The shared storage route would yield more space and probably slightly better performance, but in the end redundancy won out for me.
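
The space trade-off is easy to see on paper: with the default FTT=1 mirroring, vSAN keeps two full replicas of every object, while something like raidz2 only pays two parity drives per vdev (illustrative numbers below, not my actual drive layout):

```python
# Why shared storage yields more usable space than vSAN mirroring.
raw_tb = 24                           # assumed raw capacity, example only

vsan_usable = raw_tb / 2              # FTT=1 mirroring -> ~50% usable
raidz2_usable = raw_tb * (6 - 2) / 6  # 6-drive raidz2 -> ~67% usable
print(vsan_usable, raidz2_usable)     # 12.0 vs 16.0 TB
```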


Quote:
Originally Posted by mbreitba View Post

Also curious on the Dell ethernet switch selections. Personally I'd look at a used Cisco nexus 5000 - they can be had for a pittance on Ebay, and offer a ton more 10Gbit ports, plus can do FCoE if you ever wanted to go down that path. That and they expose you to the Nexus operating environment if you ever wanted to get experience in that environment.

One of the main considerations in the switch choices is size and noise. All this gear is going to be sitting in a medium-depth rack (24" max mountable depth) just a few feet from my desk, in an open-door office just off my living room. The Nexus switches are huge and noisy. I also work with Cisco switches every day at work, so I wanted to play around with some different brands.


Quote:
Originally Posted by JMattes View Post

Can't wait to see how this build log turns out..
I am new to servers so all this is Chinese to me, but I have to ask: with all this storage, RAM, etc., what exactly are you guys using it all for??

I mean, there's thousands of dollars in parts here.

Well, first and foremost, I'm a media junkie. My Plex server is my pride and joy and is really a "production" server when you think about how much use it gets, both by my fiancée and me as well as our families and very close friends remotely. Let's just say that any downtime on this server is borderline unacceptable and could lead to fights on our wedding night rolleyes.gif.

So redundancy of both my storage and VMs is important to ensure that my Plex server stays up at all times, along with the other VMs I use for my home network such as AD, DNS, etc. I've come to rely on these services at home, and so has my fiancée (we use roaming profiles, H: drives, etc. on all our Windows clients at home). And make no mistake: I realize that for the most part what I'm buying is more than I need at this moment. But remember...I'm getting married next year. There won't be any more $$$ splashes on tech for a while, as we are going to have our hands full buying a house in the fall and then trying to start a family. So this is it for a while.
post #10 of 91
Subbed, love networking builds, we don't see enough of those here thumb.gif