Yeah, it's been on my mind a lot lately.
My current setup consists of a Dell R610 running Server 2012R2 with Hyper-V, plus my custom storage box. Of my ~16 VMs, I'd say 13 are Linux-based (CentOS), while 4 are Windows-based. The 4 Windows VMs consist of 2 Windows Server 2012R2 domain controllers, 1 Windows Server 2012R2 VM running a torrent client (which could easily be moved to Linux), and 1 Windows 8.1 VM I use as a backup if I can't get to my physical Windows 8.1 PC when away from home. Aside from those Windows VMs, I also have 2 PCs running Windows 8.1 (mine and the lady's), though hers isn't even on my domain, and my storage server running Windows Server 2012. So yeah, I find myself slowly moving to more and more Linux stuff, and I'm getting smarter and more efficient with my resources.
OpenStack is very appealing to me because it's more than just a hypervisor (which is all I have now). It's full management with a web-based front-end: it provides templates to easily spin up VMs, manages my virtual network, and even manages storage (object level with OpenStack Swift, block level with OpenStack Cinder). OpenStack can utilize just about any hypervisor, so that leaves me pretty open. While I love Hyper-V, the latest OpenStack release (Icehouse) doesn't support Hyper-V yet. Other options are Xen-based (XenServer, XCP, Xen), KVM/QEMU, and VMware ESXi. I hate Xen, so that's out of the question. I could use VMware, but I'm really leaning toward KVM/QEMU, since KVM can run Windows Server 2012R2/Win 8.1 VMs and is pretty lightweight. With OpenStack and KVM I'll have the features I need (templates, live migration/failover, iSCSI or NFS storage) plus other features I'd probably like. My problem with deciding whether to make this big change is hardware. If I go to OpenStack, I want to get more lower-powered compute nodes and scale out instead of up.
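To give an idea of what I mean by templates, here's roughly what spinning up a VM could look like with the nova Python client (python-novaclient). This is just a sketch: the credentials, endpoint, and image/flavor names are made-up placeholders, and it assumes an Icehouse-era cloud with the KVM/libvirt driver on the compute nodes.

```python
# Sketch only: credentials, endpoint, and image/flavor names are placeholders.
# Assumes python-novaclient against an Icehouse-era Nova API.
from novaclient.v1_1 import client

nova = client.Client(
    "admin",                        # username (placeholder)
    "secret",                       # password (placeholder)
    "homelab",                      # tenant/project (placeholder)
    "http://controller:5000/v2.0",  # Keystone auth URL (placeholder)
)

# A "template" is really a Glance image plus a flavor (CPU/RAM/disk sizing).
image = nova.images.find(name="centos-6-base")
flavor = nova.flavors.find(name="m1.small")

# Boot the instance; Nova schedules it onto one of the KVM compute nodes.
server = nova.servers.create(name="web01", image=image, flavor=flavor)
print(server.id, server.status)
```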
At one time I had 2 C1100s, each with dual L5520s and 48GB of RAM. It was wasteful, because I used maybe ~5% of the CPUs and 40% of the combined RAM. What I'd like to do is build up to 5 boxes (2 to start), each with a single Xeon L5639/L5640, or even a single Xeon L5520, and 16-32GB of RAM. However, I'd also like dual onboard NICs along with a PCIe slot to add more. Dual PSUs would be nice, but with multiple nodes and failover, I'm not too concerned about that. Also, I'd like to see a power draw of around 50W from each node, compared to the 140W of my R610.
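Just to sanity-check the power angle, here's the back-of-the-envelope math (the $0.12/kWh rate is an assumption, plug in your own): two 50W nodes still draw less than the one 140W R610, though a full five would draw more.

```python
# Back-of-the-envelope power cost; wattages are the figures above,
# the electricity rate is an assumption.
HOURS_PER_YEAR = 24 * 365
RATE = 0.12  # $/kWh (assumed)

def annual_cost(watts):
    """Annual electricity cost in dollars for a constant draw in watts."""
    return watts / 1000.0 * HOURS_PER_YEAR * RATE

print("R610 @ 140W:    $%.0f/yr" % annual_cost(140))   # ~$147
print("1 node @ 50W:   $%.0f/yr" % annual_cost(50))    # ~$53
print("5 nodes @ 250W: $%.0f/yr" % annual_cost(250))   # ~$263
```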
I would retain my current storage box, and likely find a way to share out storage through both Cinder and Swift, to see whether block level (iSCSI, via Cinder) or object level (via Swift, which is HTTP-based object storage rather than an NFS/CIFS file share) works better for my setup.
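For context on the difference between the two, here's a rough sketch of the access models using python-cinderclient and python-swiftclient (all names, credentials, and endpoints are placeholders): Cinder hands out raw volumes that an instance attaches and formats, while Swift moves whole objects over HTTP.

```python
# Sketch of the two access models; assumes python-cinderclient and
# python-swiftclient, with placeholder names/credentials/endpoints.
from cinderclient.v1 import client as cinder_client
import swiftclient

# Block (Cinder): create a raw 10GB volume; an instance attaches it
# and formats/mounts it like any local disk.
cinder = cinder_client.Client(
    "admin", "secret", "homelab", "http://controller:5000/v2.0")
vol = cinder.volumes.create(size=10, display_name="data01")

# Object (Swift): no filesystem at all; whole objects go in and out over
# HTTP, so it behaves nothing like an NFS/CIFS share.
swift = swiftclient.client.Connection(
    authurl="http://controller:5000/v2.0", user="admin", key="secret",
    tenant_name="homelab", auth_version="2")
swift.put_container("backups")
swift.put_object("backups", "vm-image.qcow2",
                 contents=open("vm-image.qcow2", "rb"))
```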
I'm looking for ideas for my compute nodes, though, and looking to spend somewhere around $300-400 per node. I can get used Xeon L5640s on eBay for about $80 each, but I can't seem to find much in the way of single-CPU LGA1366 motherboards/server barebones. I want small servers, such as short-depth 1U boxes. I've been looking at Dell R210s, and can get a barebones R210 with rails for $125-150. Since I want L-series Xeons (not E- or X-series), and the R210 is LGA1156, I'd want a Xeon L3426 to go in it, but those run about $275. That's ~$400 without RAM and drives (I expect a 60GB SSD, for about $50). RAM is basically $10 per GB, so 16GB would be about $160. That's ~$600 per server, which seems like a waste when I could get a C1100 with dual L5520s and 24GB of RAM for ~$400.
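Running the numbers per resource makes the comparison starker (prices are the rough eBay figures above; the L3426 is 4 cores, a dual-L5520 C1100 is 8):

```python
# $/core and $/GB of RAM for the two builds, using the rough prices above.
builds = {
    "R210 + L3426 + 16GB + SSD": (600, 4, 16),   # (total $, cores, GB RAM)
    "C1100, dual L5520, 24GB":   (400, 8, 24),
}
for name, (cost, cores, ram) in builds.items():
    print("%-27s $%3d/core, $%4.1f/GB RAM" % (name, cost / cores, cost / ram))
# R210 build: $150/core, $37.5/GB; C1100: $50/core, $16.7/GB
```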
Does anyone know of a used OEM server that fits what I'm looking for, or a way I could build one? Or am I just dreaming that this sort of setup exists? I've looked at some Dells and HPs and haven't really found what I'm looking for, but I haven't looked much at IBM, Supermicro, SGI Rackable, and whatever else might be out there.