Overclock.net › Forums › Specialty Builds › Servers › Home Infrastructure ReDesign

Home Infrastructure ReDesign - Page 2

post #11 of 54
Thread Starter 
Alright, I think I have figured out what I am going to do. I'm still looking for comments on whether it could be done better, though.

Primarily, I will be using the C6100, my existing C1100, and my existing storage server.

C6100
Microserver #1:
x2 Xeon L5520
32GB RAM
60GB SSD (XenServer install)
PCIe dual gigabit NIC (total of 4 NICs)
XenServer

Microserver #2:
x2 Xeon L5520
32GB RAM
60GB SSD (XenServer install)
PCIe dual gigabit NIC (total of 4 NICs)
XenServer

Microserver #3:
x2 Xeon L5520
32GB RAM
60GB SSD (Server 2012 install)
500GB Samsung 840 SSD (VMs)
Server 2012 with Hyper-V
total of 2 NICs

Microserver #4:
x2 Xeon L5520
32GB RAM
60GB SSD (Server 2012 install)
500GB Samsung 840 SSD (VMs)
Server 2012 with Hyper-V
total of 2 NICs

C1100
x2 Xeon L5520
36GB RAM
160GB 7200RPM Drive (Server 2012 install)
x2 3TB 7200RPM Drives (Storage Spaces Mirror) (VMs)
Server 2012 with Hyper-V
total of 2 NICs

Storage Server
Build log in sig...
RAID 60 with 3TB drives (media storage and backups)
RAID 10 x4 2TB Hitachi 7K3000 (SAN Storage -- LUNs presented to XenServer Nodes)
Add in PCIe dual port gigabit card (total of 6 NICs)

I would build out a storage network with 2 NICs from each XenServer node and 4 NICs from the Storage Server, and utilize MPIO (with the iSCSI LUNs). All VMs on my XenServer cluster would run from the RAID 10. The other 2 NICs from the XenServer boxes would be in LACP on my LAN, and the other two NICs from my Storage Server would also be in LACP on my LAN. My C1100 would have both NICs in LACP on my LAN, to be used as a Hyper-V Replica server.
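As a sanity check on the MPIO side (my own numbers, not from the post): iSCSI MPIO is commonly set up full-mesh, with one session per (initiator NIC, target portal) pair, so the path count per LUN is just a product. A quick sketch, assuming one target portal per storage-server NIC:

```python
# Hypothetical sketch: count MPIO paths per LUN for the planned storage network.
# Assumption: full-mesh MPIO, i.e. one iSCSI session per
# (initiator NIC, target portal) pair.

def mpio_paths(initiator_nics: int, target_portals: int) -> int:
    """Paths per LUN in a full-mesh initiator/target session layout."""
    return initiator_nics * target_portals

# Each XenServer node dedicates 2 NICs to storage; the Storage Server exposes 4.
paths_per_xen_node = mpio_paths(2, 4)
print(paths_per_xen_node)  # 8 paths per LUN, per node
```

Eight paths per LUN per node is plenty for round-robin load balancing, and any single NIC or cable failure still leaves multiple live paths.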

With this setup I get to play with XenServer clustering, SAN storage, iSCSI, MPIO, and LACP. I get HA for XenServer and replication for Hyper-V VMs. I will also continue using Microsoft DPM to back up VMs to my RAID 60 array. My Hyper-V VMs (a.k.a. my production stuff) would run locally from a single 500GB SSD. No RAID, but nightly backups. All Windows VMs would be kept to a minimal size, and will use iSCSI targets if I need additional storage on those VMs (i.e., my Torrent and Usenet VMs, which each have a 200GB iSCSI LUN currently). I may even throw in dual NICs on both Microserver #3 and #4, put them on my storage network as well, and carve some new LUNs. 4TB of RAID 10 storage is a lot for VMs, and should have fairly good performance (compared to how most of my VMs currently run on a single 7200RPM drive in their respective hosts).

Questions, comments, concerns that I am missing?

Thanks!

Oh...the two white-box Hyper-V hosts would be scrapped and used to build a new firewall server in a half-rack 1U chassis.

EDIT:
After some more thought...I'm looking at a lot of network ports being required: 14 NICs for my storage network, 18 for servers and appliances, 8 for management NICs (IPMI), and 6 or so more for PCs/HTPCs/etc. Not to mention another 3-5 for my DMZ network (which will be on a 16-port unmanaged gigabit switch).
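For what it's worth, the port math tallies up like this (counts taken straight from the post; the DMZ stays on its own unmanaged switch):

```python
# Tally the managed-switch ports needed, using the counts from the post.
# The DMZ network lives on a separate unmanaged switch, so it is excluded.
ports = {
    "storage network": 14,
    "servers and appliances": 18,
    "IPMI management": 8,
    "PCs/HTPCs/etc.": 6,
}
total = sum(ports.values())
print(total)  # 46 ports needed, before any inter-switch LAG ports
```

Three 24-port switches give 72 ports, so 46 ports used leaves 26 before the inter-switch LACP links are subtracted, which lines up with the "about 20 free" estimate.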

Thinking about buying three 24-port managed gigabit switches (Dell PowerConnect 5424s, maybe?). All switches would connect to each other with 2 cables in LACP, for redundancy and performance. That should leave me about 20 ports free or so.

Sounds like a lot, and way more than I need, but I think I want to do it. :)

EDIT2:
Scratch that. A single 48-port would do. I will put the 8 IPMI NICs on my 16-port unmanaged switch, and use my 5-port unmanaged switch for my DMZ. That should leave me with at least 16 free ports on a 48-port.

Anyone have a better recommendation than a PowerConnect 5448? Can't really beat it for the price.
Edited by tycoonbob - 4/4/13 at 4:33pm
post #12 of 54
@OP

Your cunning plan sounds positively cunning. :D


Microserver #3
Microserver #4

Why not go with 2x 256GB SSDs in RAID 1, as opposed to a single 500GB? As much as you'll have nightly backups, I have never heard of SSDs dying gracefully. "Instant", "unexpected" and "sudden" are the words that come to mind. Ironically, those words also seem to match the speed of the SSD... :P


Switches
Would it not be better to keep a separate SAN switch, as opposed to a converged solution? Even if it only makes management easier, it'd be worth it, at least to me. :)
Quote:
I may even throw in dual NICs on both Microserver #3 and #4, and put them on my storage network as well and carve some new LUNs.

You mean have them serve up iSCSI targets? If so, where would these targets be? On the C6100?
Ryzen (12 items)
CPU: Ryzen 7 1700 | Motherboard: Gigabyte GA-AB350M Gaming 3 | Graphics: Palit GT-430 | RAM: Corsair Vengeance LPX CMK16GX4M2B3000C15
Hard Drive: Samsung 850 EVO | Cooling: AMD Wraith Spire | OS: Linux Mint 18.x | Monitor: Dell UltraSharp U2414H
Keyboard: Apple Basic Keyboard | Power: Thermaltake ToughPower 850W | Case: Lian-Li PC-A04B | Mouse: Logitech Trackman Wheel
post #13 of 54
Thread Starter 
Quote:
Originally Posted by parityboy View Post

@OP

Your cunning plan sounds positively cunning. :D


Microserver #3
Microserver #4

Why not go with 2x 256GB SSDs in RAID 1, as opposed to a single 500GB? As much as you'll have nightly backups, I have never heard of SSDs dying gracefully. "Instant", "unexpected" and "sudden" are the words that come to mind. Ironically, those words also seem to match the speed of the SSD... :P


Switches
Would it not be better to keep a separate SAN switch, as opposed to a converged solution? Even if it only makes management easier, it'd be worth it, at least to me. :)
You mean have them serve up iSCSI targets? If so, where would these targets be? On the C6100?

Very cunning indeed. :D

x2 256GB SSDs would be half the space of the 500GB. Nightly backups are part of it, but the VMs would also be replicated to my C1100 (which would be running the Hyper-V Replica role). If the SSD failed, the replica VMs would fail over with only a few seconds of downtime, and I wouldn't even need the backups. Replace the dead SSD (hopefully still under warranty) and fail back the VMs. Done. The Hyper-V Replica server would be running on two 7200RPM drives in a Storage Spaces mirror.

A separate switch would be nice for the storage network, but I started thinking about the power consumption and the cost of a second switch. If Dell made a 16-port version of the 5400 series, I would probably opt for one of those (it would probably run around $100 used, too). The 5448 has plenty of power, and a separate VLAN should suffice for my storage network.
Anyone have a recommendation for a managed 16-port gigabit switch that supports LACP/MPIO, jumbo frames, and VLANs, for under $150 (new or used)?
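If it helps, isolating the storage traffic on one converged switch only takes a few lines of configuration. This is a rough sketch in the style of the PowerConnect 5400-series CLI; the VLAN ID and port range are made up for illustration, so verify the exact syntax against the switch's user guide before trusting it:

```
! Hypothetical: create a storage VLAN and assign the storage-facing ports to it.
! VLAN 100 and ports g1-g14 are arbitrary choices for illustration.
vlan database
vlan 100
exit
interface range ethernet g(1-14)
switchport mode access
switchport access vlan 100
exit
```

With the storage ports in their own access VLAN, iSCSI traffic stays isolated from the LAN without a second physical switch.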

The RAID 10 on the Storage Server will be served to the XenServers for sure. They would probably share a single LUN. If I go with 2 more NICs in MicroServer #3 and #4, they would each connect to their own LUN served up on that same RAID 10 on the Storage Server.

Confusing when it's all just words, I know. Once I get a plan in place I am going to draw it out in Visio and start a build thread.

I think a single PowerConnect 5448 would be perfect for everything except my DMZ network and my IPMI NICs, which would each have their own unmanaged switch (which I already own).
post #14 of 54
@tycoonbob


Dell PowerConnect 2716
Specs

Source

Source $89


3Com Baseline 3CBLSG16
Specs

Source $89

NetGear ProSafe GS716T
Specs

Source $155
Edited by parityboy - 4/5/13 at 6:59am
post #15 of 54
Thread Starter 
Quote:
Originally Posted by parityboy View Post

@tycoonbob

Dell PowerConnect 2716

Source

Source

Hmm...good-looking switch, and a great price. I really am liking the 5400 series though, due to them being "iSCSI Optimized". I'm trying to find more information on what this actually means.

Do you know the LACP specs for the 2716? How many groups, and how many connections per group? I wasn't able to find anything about that from a quick search, but I can search more later. I need at least 6 groups, with at least 4 ports per group.
post #16 of 54
@tycoonbob

Updated my post. I only know from the specs what these switches are capable of, but on the surface at least they seem to address some of your needs.
Edited by parityboy - 4/5/13 at 7:04am
post #17 of 54
Thread Starter 
Quote:
Originally Posted by parityboy View Post

@tycoonbob

Updated my post. I only know from the specs what these switches are capable of, but on the surface at least they seem to address some of your needs.

Yes, I agree, and thanks for the others you listed. I have had a bad experience with 3Com, so I won't be buying one of those. Honestly, I'm thinking a Dell or HP switch.

For $100-125, I can get a PowerConnect 5324, which I know supports at least 8 LACP groups with at least 6 ports in each (a coworker uses one at home). A PowerConnect 5424 is about $200, so I may end up going with one of those instead, since they have that iSCSI Optimization or whatever (which looks like the switch can detect iSCSI traffic and automatically assign it a high QoS priority). I figured a 16-port would save some money, but it doesn't really look like it.

A PowerConnect 2716 would probably work, but I would rather opt for something of a newer generation. Also, the 2716 is an L2 switch, whereas the 5300/5400 series are L3...so why not?

I need to do some research on some HP ProCurve switches and see what is comparable to the PowerConnect 5300/5400 series stuff.

Thanks again!
post #18 of 54
Quote:
A PowerConnect 2716 would probably work, but I would rather opt for something of a newer generation. Also, the 2716 is an L2 switch, whereas the 5300/5400 series are L3...so why not?

Which is precisely how they can detect iSCSI traffic in the first place, no doubt. I think those things are great for converged solutions, but since I like a rack full of hardware pr0n, I'd likely get a separate switch anyway. :P ProCurves also have a large fan base, and are well regarded. :)


HP ProCurve eBay selection

Edited by parityboy - 4/5/13 at 9:03am
post #19 of 54
@OP - Unrelated question: what made you choose IPFire?
    
CPU: Intel Core i7 860 | Motherboard: Asus P7P55D-E Pro | Graphics: MSI GTX560 Ti TwinFrozr II | Graphics: MSI GTX560 Ti TwinFrozr II
RAM: Corsair 8GB DDR3 | Hard Drive: OCZ Vertex 3 | Hard Drive: Western Digital Caviar Black | Hard Drive: Western Digital Caviar Green
Hard Drive: Samsung 840 Pro | Optical Drive: Lite-On 24x DVD-RW | Cooling: CoolerMaster V8 | OS: Windows 8.1 Professional
OS: Debian 7.1 | Monitor: Samsung S22B350H | Monitor: Samsung S22B350H | Monitor: Samsung S22B350H
Keyboard: Ducky Shine II | Power: Corsair HX850 | Case: CoolerMaster Storm Enforcer | Mouse: Logitech M500
Mouse Pad: Razer Goliathus | Audio: Microsoft LifeChat LX 3000
    
post #20 of 54
Thread Starter 
Quote:
Originally Posted by dushan24 View Post

@OP - Unrelated question: what made you choose IPFire?

Prior to IPFire, I used Untangle for a long time. I was tired of seeing all the paid features that I wanted to use, so I started trying other UTMs. While IPFire might not be the prettiest, it seems to be running great for me and uses fewer resources than Untangle did (even though it runs on a quad core with 5GB of RAM). Once I get around to building a new firewall (and retire this power-hungry HP WX4400 Workstation), I will be trying various other UTMs, like Sophos, to see what I like. I may even try other (non-UTM) firewall software like pfSense and m0n0wall, but will likely stick with some sort of UTM.