Home Infrastructure ReDesign

post #1 of 54
Thread Starter 
Hello everyone. Looking for some advice here.

I want to redo my network so I can play with other technology, specifically for studying, labbing, and demoing to clients. My current home network is already more than most small companies have: 3 Hyper-V hosts (two identical boxes with an AMD FX-8120 and 16GB of RAM; the third is a Dell C1100 with dual Xeon L5520s and 36GB of RAM), plus my storage server and my firewall (running IPFire).
On my internal LAN I have 3 desktops, along with 2-4 laptops, 2-4 mobile phones, and 2-4 tablets. My WLAN consists of 2 UniFi APs. I also have a DMZ network with my Vonage box, my DirecTV DECA adapter, and a NIC from one of my Hyper-V hosts (internet-facing web server). I currently have 27 VMs running, with 2 more spinning up right now.

As some/most of you may know, I work as a consultant primarily around Microsoft technology. I drive a lot of you nuts with my "worshiping of Microsoft", even though I don't mean to. We have recently partnered with both Citrix and VMware (once again), and I will be doing a lot of studying and labbing for Citrix to get 3 CCAs this year (XenServer, XenDesktop, and XenApp) while also playing with NetScaler and XenMobile. Months later I will be testing for my CCAA. Long story short, I need new equipment for Citrix labbing.

What I am thinking:
The two Hyper-V hosts with the AMD FX-8120s: scrap them. Keep the C1100. Buy a Dell C6100 (4 microservers in a 2U chassis) and load each microserver with dual Xeon L5520s and 36GB of RAM. Two of the microservers would run XenServer in a cluster, with XenDesktop on top of that; I would have 4 VMs on that cluster for XenApp. The other two microservers would run Server 2012 with Hyper-V, non-clustered, with two additional XenApp VMs there to give me a cross-virtualization-platform XenApp farm. The Dell C1100 would stay around running Server 2012 with Hyper-V as a Hyper-V Replica server, replicating most of the VMs from my other Hyper-V boxes (my DCs, System Center lab, media VM, and some other stuff).
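
The Replica piece itself should be simple; as far as I know it comes down to a couple of cmdlets (the host and VM names below are placeholders for mine):

Code:
# Rough sketch only. On the C1100 (the replica target):
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replica"
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"

# On each source Hyper-V host, per VM ("DC01" is a placeholder):
Enable-VMReplication -VMName "DC01" -ReplicaServerName "C1100" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "DC01"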

The problem/question I have is around how I should do my network and storage. Yes, I have a storage server, but I do not plan to run all my VMs over iSCSI. 90%+ of my VMs will run off local storage, but some will have iSCSI Targets for data storage (download dir for my Usenet VM, seeding dir for my torrent VM, backup drive for my DPM VM, etc.). So with storage on the C6100, I am trying to find the best solution. The C6100, if you don't know, has 12 3.5" drive bays on the front, with no backplane shared across the microservers and no internal switch (which is what separates this from a bladecenter). Typically, 3 drives are directly connected to each microserver. For the two Hyper-V boxes, I figured I would run a 60GB SSD as the boot drive and two 1TB (or 2TB) 7200RPM drives in a Storage Spaces Mirror.

***I currently run two 3TB Toshiba DT01ACA300s in a Storage Spaces Mirror on my C1100, along with a 160GB 7200RPM boot drive, and disk I/O is pretty good with 12 VMs running.
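
For reference, that mirror is only a few cmdlets to stand up on Server 2012 (a rough sketch; the pool and volume names are just what I would call mine, and it assumes the two data drives are blank and poolable):

Code:
# Pool the two data disks and carve a mirrored NTFS volume for VMs.
New-StoragePool -FriendlyName "VMPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMMirror" `
    -ResiliencySettingName Mirror -UseMaximumSize
Get-VirtualDisk -FriendlyName "VMMirror" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMs"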

However, I am unsure what to do with the XenServer boxes: 3 local SATA drives per microserver, with no RAID controller (and no onboard RAID), for XenServer. What can I do with that?

Each microserver has a PCIe x16 slot (with a riser), so I guess I could slap a PERC 6/i in each of those, but I didn't really want to.

The other question is around networking. Each microserver has 2 gigabit NICs and a 10/100 management NIC. I will get an IP KVM to connect all the management NICs to, but with only two NICs per server, I'm trying to decide if I should add more. I have a spare dual-port gigabit Realtek card lying around that I could use, but I figured that with local storage, 2 NICs in a team should do fine on these 4 microservers. Thoughts?
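
If I stay at 2 NICs, the team itself is trivial on Server 2012; something like this (the adapter names are placeholders, and LACP only if the switch supports it):

Code:
# Team the two onboard gigabit NICs, then hang the Hyper-V switch off the team.
New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet","Ethernet 2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
New-VMSwitch -Name "vSwitch" -NetAdapterName "Team1" -AllowManagementOS $true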

I think these are my only questions right now, and I'm just looking for some advice.

Thanks!
post #2 of 54
I don't think local storage would be a great idea with 3 disks per server.

I would suggest you pop some InfiniBand or 10GbE cards (InfiniBand would be cheaper) into the microservers and have them connect to your storage box over iSCSI. Probably create a 4-drive RAID 10 array per server on the storage box to avoid I/O issues. Not really sure how much I/O or space you would need.
post #3 of 54
Thread Starter 
Quote:
Originally Posted by jibesh View Post

I don't think local storage would be a great idea with 3 disks per server.

I would suggest you pop some InfiniBand or 10GbE cards (InfiniBand would be cheaper) into the microservers and have them connect to your storage box over iSCSI. Probably create a 4-drive RAID 10 array per server on the storage box to avoid I/O issues. Not really sure how much I/O or space you would need.

I'm not doing iSCSI for my VMs' boot drives. My storage server would then be a single point of failure, and with no redundant PSUs, I just don't want to risk it. Four 10GbE or InfiniBand cards will not be cheap, and I would also need a switch that supports them, which is out of my budget.

Yes, that is 3 disks per server, but I can say that this setup works fine on my C1100. It has two 3TB drives in a Storage Spaces Mirror and is currently running 13 VMs without any I/O issues, which has me very impressed. I wish I could do 4 drives per microserver and run a local RAID 10, but I would also have to buy controllers for each, which would rack up an additional $400 or so. For the Server 2012 microservers, I will likely do a Storage Spaces Mirror on each, since that has proven itself good enough already. I'm just not sure about the two that will be running XenServer.
post #4 of 54
@tycoonbob

Here's an idea. Once you get the C6100, have a look around inside it. I know of a couple other people who use a C6100 to run a website, and they basically rerouted some of the cabling so that one microserver had access to 6 SATA drives.

Before I go further, what's the reason for running two instances of Server 2012 on the C6100, considering that they are non-clustered?
post #5 of 54
Thread Starter 
Quote:
Originally Posted by parityboy View Post

@tycoonbob

Here's an idea. Once you get the C6100, have a look around inside it. I know of a couple other people who use a C6100 to run a website, and they basically rerouted some of the cabling so that one microserver had access to 6 SATA drives.

Before I go further, what's the reason for running two instances of Server 2012 on the C6100, considering that they are non-clustered?

Yes, that was another thought. Actually, it was to put a controller into one of the microservers and hang all 12 drives off that controller, basically making that microserver a SAN. The problem with that is it only has 2 NICs, and with the PCIe slot taken by the controller, there is no room left to add more.

The reason for two Server 2012 boxes is, of course, Hyper-V. Both of those Hyper-V instances would run non-clustered workloads for my "Home Production" systems, and my existing C1100 would become a Hyper-V Replica server for those two; hence no clustering. The two XenServer microservers would be clustered primarily for XenApp and XenDesktop workloads, and my "Home Production" VMs likely would NOT reside on that cluster. This way my Citrix lab is purely a lab, and I can destroy/rebuild it anytime I want.
post #6 of 54
Just curious, what do you need/use all that for?
post #7 of 54
@StayFrosty
Quote:
As some/most of you may know, I work as a consultant primarily around Microsoft technology. I drive a lot of you nuts with my "worshiping of Microsoft", even though I don't mean to. We have recently partnered with both Citrix and VMware (once again), and I will be doing a lot of studying and labbing for Citrix to get 3 CCAs this year (XenServer, XenDesktop, and XenApp) while also playing with NetScaler and XenMobile. Months later I will be testing for my CCAA. Long story short, I need new equipment for Citrix labbing.

@tycoonbob

Is 2Gbit bandwidth not enough then? Or do you simply need more interfaces?
post #8 of 54
Thread Starter 
Quote:
Originally Posted by parityboy View Post

@StayFrosty
@tycoonbob

Is 2Gbit bandwidth not enough then? Or do you simply need more interfaces?

That's what I'm unsure of right now. If I added a controller to one of the microservers and put 9 1TB drives on it (8 for RAID plus 1 global warm spare; the 3 leftover drives would be boot drives for the other three microservers), I would probably build out one or two RAID 10s purely for VM storage. On the XenServer cluster, I'm looking at at least 3 VMs for XenApp, plus however many I need for XenDesktop, so I'll assume at least 6-8 VMs just for my Citrix environment (I'm not after the bare minimum; I want to play with clustering, HA, etc.). That would leave me with 1 microserver for Hyper-V, which I'm sure I could make do with. Any VMs on that Hyper-V microserver would also live on the microserver SAN, and I can safely say there would be at least 10-12 VMs on that Hyper-V microserver. So in all, that's at least 15, but more likely 20, VMs running over iSCSI on 2 GigE links. At least 6 of those VMs will be running Microsoft SQL Server, which will hit the disks hard. With 7200RPM drives, I'm just not sure that would be good enough (which is why I was looking at local storage for each microserver instead).

I think running one of those microservers as a SAN would be pretty cool; I'm just not 100% sure it fits my need. I would have to make sure I got the right controller, and that the cables could even be routed the way I'm thinking.

Another reason for more NICs is that I would like at least 2 in LACP (or MPIO, depending on the purpose), and at least 1 NIC on my DMZ network (which is used for things such as web hosting, DirectAccess, internet-based management for my ConfigMgr environment, etc.).
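
For the MPIO case, the Windows side should just be a feature install plus claiming the iSCSI paths; a sketch from memory:

Code:
# Enable MPIO for iSCSI on Server 2012 (a reboot may be needed after the feature install).
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI            # claim iSCSI LUNs with the Microsoft DSM
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR   # round-robin across both links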

Again...my home environment is not like most. It's a compact version of an enterprise.
post #9 of 54
@tycoonbob

Yeah I see your conundrum. Hmmm...here's what I think. In light of
Quote:
The two XenServer microservers would be clustered primarily for XenApp and XenDesktop workloads, and my "Home Production" VMs likely would NOT reside on that cluster. This way my Citrix lab is purely a lab, and I can destroy/rebuild it anytime I want.

I would
Quote:
Buy a Dell C6100 (4 microservers in a 2U chassis) and load each microserver with dual Xeon L5520s and 36GB of RAM. Two of the microservers would run XenServer in a cluster, with XenDesktop on top of that; I would have 4 VMs on that cluster for XenApp. The other two microservers would run Server 2012 with Hyper-V, non-clustered, with two additional XenApp VMs there to give me a cross-virtualization-platform XenApp farm.

However, I will now disagree with
Quote:
The two Hyper-V hosts with the AMD FX-8120s: scrap them.

In light of your concerns, I would turn at least one of these into a NUS exclusive to the Citrix lab. You could add a quad-port NIC to each of the XenServer hosts and PXE boot from a volume on the NUS. Or you could boot XenServer from a USB stick (or the local drives) and then use the NUS for your VMs via iSCSI or NFS.

To my mind this would give you everything you need, apart from having to serve VMs over iSCSI or NFS.
post #10 of 54
Thread Starter 
Quote:
Originally Posted by parityboy View Post

@tycoonbob

Yeah I see your conundrum. Hmmm...here's what I think. In light of
I would
However, I will now disagree with
In light of your concerns, I would turn at least one of these into a NUS exclusive to the Citrix lab. You could add a quad-port NIC to each of the XenServer hosts and PXE boot from a volume on the NUS. Or you could boot XenServer from a USB stick (or the local drives) and then use the NUS for your VMs via iSCSI or NFS.

To my mind this would give you everything you need, apart from having to serve VMs over iSCSI or NFS.

Well, I am planning to run some VMs from my existing NUS over iSCSI for the Citrix stuff. I will have a ~1TB iSCSI Target just for Citrix, so there's no need to build a new storage box. I hadn't thought about running the OS off of a flash drive, so that's something I will consider. I have been thinking about adding at least 2 NICs to each Citrix server, for a total of 4, which should be plenty for my needs (though I will get a quad-port card if I can find one cheap enough).
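
Carving that Citrix Target out on the storage server should only take a couple of cmdlets on Server 2012 (the path, target name, and initiator IQN below are placeholders for mine):

Code:
# Publish a ~1TB iSCSI Target for the Citrix lab from the Server 2012 storage box.
Add-WindowsFeature -Name FS-iSCSITarget-Server
New-IscsiVirtualDisk -Path "D:\iSCSI\CitrixLab.vhd" -Size 1TB   # newer builds use -SizeBytes
New-IscsiServerTarget -TargetName "CitrixLab" `
    -InitiatorIds "IQN:iqn.2013-03.com.example:xen01"
Add-IscsiVirtualDiskTargetMapping -TargetName "CitrixLab" -Path "D:\iSCSI\CitrixLab.vhd"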

I know there are 20 different ways to work this lab out, but I just can't figure out which one serves my needs best, without spending a lot of money. I guess I need to just buy the C6100 and see what I can actually do with it first.

Thanks again!