
tycoonbob

Discussion starter · #1 ·
Yeah, it's been on my mind a lot lately.

My current setup consists of a Dell R610 running Server 2012 R2 with Hyper-V, plus my custom storage box. Out of my ~16 VMs, I'd say 13 of them are Linux-based (CentOS), while 4 are Windows-based. The 4 Windows VMs consist of 2 Windows Server 2012 R2 domain controllers and 1 Windows Server 2012 R2 VM running a torrent client (which could easily be moved to Linux). The last Windows VM is a Windows 8.1 VM I use as a backup if I can't get to my physical Windows 8.1 PC when away from home. Aside from those Windows VMs, I also have 2 PCs running Windows 8.1 (mine and the lady's, though hers isn't even on my domain) and my storage server running Windows Server 2012. So yeah, I find myself slowly moving to more and more Linux stuff, and I'm getting smarter and more efficient with my resources.

OpenStack is very appealing to me because it's more than just a hypervisor (which is all I have now). It's full management with a web-based front-end: it provides templates to easily spin up VMs, manages my virtual network, and even manages storage (object storage with OpenStack Swift, and block storage with OpenStack Cinder). OpenStack can utilize just about any hypervisor, so that leaves me pretty open. While I love Hyper-V, the latest OpenStack release (Icehouse) doesn't support Hyper-V yet. Other options are Xen-based (XenServer, XCP, Xen), KVM/QEMU, and ESX. I hate Xen, so that's out of the question. I could use VMware, but I'm really leaning toward KVM/QEMU, since KVM can run Windows Server 2012 R2/Win8.1 VMs and is pretty lightweight. With OpenStack and KVM I'll have the features I need (templates, live migration/failover, iSCSI or NFS storage) plus other features that I would probably like. My problem with figuring out if I want to make this big change is hardware. If I go to OpenStack, I want to get more lower-powered compute nodes and scale out instead of up.
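Just to illustrate what sits underneath if I go the KVM route, here's a minimal sketch (assuming the libvirt Python bindings are installed and libvirtd is running locally; nothing here is from my actual setup) of querying a KVM/QEMU host directly through libvirt, which is the same API Nova's KVM/QEMU driver uses under the hood:

```python
# Minimal sketch: poke a local KVM/QEMU host through the libvirt Python bindings.
# Assumes libvirt-python is installed and libvirtd is running on this machine.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local system hypervisor

# getInfo() returns [cpu model, RAM in MB, logical CPUs, MHz, NUMA nodes, sockets, cores, threads]
model, mem_mb, cpus, mhz, nodes, sockets, cores, threads = conn.getInfo()
print("Host %s: %d logical CPUs, %d MB RAM" % (conn.getHostname(), cpus, mem_mb))

# List every defined guest and whether it is currently running
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    print(dom.name(), "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped")

conn.close()
```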

At one time I had 2 C1100s, each with dual L5520s and 48GB of RAM. It was wasteful, because I didn't use but ~5% of the CPUs and maybe 40% of the combined RAM. What I'd like to do is build up to 5 boxes (2 to start), each with a single Xeon L5639/L5640, or even a single Xeon L5520, and 16-32GB of RAM. However, I'd also like dual onboard NICs along with a PCIe slot to add more NICs. Dual PSUs would be nice, but with multiple nodes and failover, I'm not too concerned about that. Also, I'd like to see a power draw of around 50W from each, compared to the 140W of my R610.

I would retain my current storage box, and likely find a way to share out storage through both Cinder and Swift, to see whether block-level storage (iSCSI, via Cinder) or object storage (via Swift) works better for my setup.

I'm looking for ideas for my compute nodes, though, and looking to spend somewhere around $300-400 per node. I can get used Xeon L5640s on eBay for about $80/ea, but I can't seem to find much in the way of single-CPU LGA1366 motherboards or server barebones. I want small servers, such as 1U half-rack. I've been looking at Dell R210s, and can get a barebones R210 with rails for $125-150 each. Since I want L-series Xeons (and not E-series or X-series), I would likely want a Xeon L3426 to go in these R210s, but they run about $275. That's $400 without RAM and drives (I expect a 60GB SSD, for about $50). RAM is basically $10 per GB, so 16GB of RAM would be about $160. That would be ~$600 per server, which seems like a waste when I could get a C1100 with dual L5520s and 24GB of RAM for ~$400.

Does anyone know of a used OEM server that fits what I'm looking for, or a way I could build one? Or am I just dreaming that I could find this sort of setup? I've looked at some Dells and HPs and haven't really found what I'm looking for, but I haven't looked much at IBM, SuperMicro, Rackables, and whatever else might be out there.
 
Discussion starter · #2 ·
Well, I've fallen in love with the new Intel Atom CPUs. Specifically, the 8 core/8 thread C2750.

This thing is great (albeit pricier than I want, but it may be worth it):
SuperMicro SYS-5018A-FTN4 - $530 (Newegg)
-1U half-rack chassis
-Intel Atom C2758 @ 2.4GHz (8 cores/8 threads -- supports VT-x and supposedly VT-d), 20W TDP!!!
-4 x 204-pin SO-DIMM slots, supporting up to 32GB of ECC RAM
-Quad gigabit NICs
-Dedicated IPMI NIC
-200W PSU

Basically, buy this and RAM. That would put me at about $650 per system if I added 16GB of RAM, but the power draw on these is somewhere around 30-40W. According to some benchmarks, the C2750 is supposed to be more powerful than a single Xeon L5520 (or at least on par), and should do just fine running 8-12 VMs.

SuperMicro also has another version of this box that uses the Atom C2550 (4 cores/4 threads) at 2.4GHz, with a TDP of only 14W. This model supports up to 64GB of ECC RAM and costs about $400.
http://www.newegg.com/Product/Product.aspx?Item=N82E16816101874

I can get the motherboard/CPU combo that's in the first server for about $338 from Newegg:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182855

And can pick up a SuperMicro chassis (with 200W PSU) for about $75-95 on NewEgg:
http://www.newegg.com/Product/Product.aspx?Item=9SIA24G1E38394
http://www.newegg.com/Product/Product.aspx?Item=N82E16811152131

That would be about $425, plus the cost of RAM and a 60GB SSD.

While it's higher than what I was looking for, it gives me pretty much everything I want and more: 4 onboard NICs, a PCIe slot, super low power (lower than anything else I will find), which could be very useful if I grew my OpenStack environment to 4-6 of these as compute nodes, a dedicated IPMI NIC...pretty cool.

To go even cheaper, the SuperMicro A1SRi-2558 (C2558, quad core Atom) motherboard can be had for about $235 on eBay right now:
http://www.ebay.com/itm/FREE-SHIP-Supermicro-A1SRI-2558F-B-Intel-Atom-C2558-DDR3-SATA3USB3-0-V-4GbE-/201129179564?pt=LH_DefaultDomain_0&hash=item2ed43bb9ac

I think something like this would be good with 16GB of RAM as a compute node. Obviously the 8-core (C27xx-series Atom) with 32GB of RAM would be more powerful, but there could be a cost savings in going with the lower-powered model and running more of them.
 
I'm very curious to see what you decide on. Sub. :)
 
If you really got 6 nodes that drew 50W each, wouldn't that be a way higher draw than your single R610? I don't see how that would help with power usage at all. The Atom is better from a power perspective. Don't have much to add, mostly wanted to sub. So OpenStack is essentially a web frontend for hypervisors? What's making you move away from just Windows now?
 
Discussion starter · #5 ·
Quote:
Originally Posted by cones View Post

If you really got 6 nodes that drew 50W each, wouldn't that be a way higher draw than your single R610? I don't see how that would help with power usage at all. The Atom is better from a power perspective. Don't have much to add, mostly wanted to sub. So OpenStack is essentially a web frontend for hypervisors? What's making you move away from just Windows now?
From Wikipedia:
[...]
OpenStack is a free and open-source software cloud computing platform. It is primarily deployed as an infrastructure as a service (IaaS) solution. The technology consists of a series of interrelated projects that control pools of processing, storage, and networking resources throughout a data center, able to be managed or provisioned through a web-based dashboard, command-line tools, or a RESTful API. It is released under the terms of the Apache License. [...]

While 6 nodes at 50W each is more than a single R610 at ~150W, my original plan was to grow to 3 R610s to allow for redundancy/clustering. That would obviously be 450W, whereas 6 Atom nodes would be 300W, but it would be a while before I got to 6 nodes (if ever). I plan to start with 2, and see myself growing to 3-4 total compute nodes.
 
Discussion starter · #7 ·
Quote:
Originally Posted by cones View Post

Wasn't aware you wanted to grow with the R610s. That Wikipedia quote helped; I just hadn't looked it up myself yet.
No worries. It's easier for me to copy and paste instead of trying to explain it myself, lol.

If my current R610 goes down, my VMs are offline until the R610 is fixed. I don't like that. I consider my home systems to be "production", as they run various things such as my media database (XBMC centralized database), PVR software, Wireless Access Point controller, DNS, soon to be home automation, etc. If my current server goes down, my network is essentially down.
 
Quote:
Originally Posted by tycoonbob View Post

No worries. It's easier for me to copy and paste instead of trying to explain it myself, lol.

If my current R610 goes down, my VMs are offline until the R610 is fixed. I don't like that. I consider my home systems to be "production", as they run various things such as my media database (XBMC centralized database), PVR software, Wireless Access Point controller, DNS, soon to be home automation, etc. If my current server goes down, my network is essentially down.
This explains a lot then. I can understand the need for redundancy now. I guess I'm just so used to consumer-grade "plug-n-play" equipment that I don't really think about the back end. The only thing I really needed to do when I switched over to pfSense was configure my ports for my Minecraft server...

I guess I will learn more about DNS and the other stuff as I continue to venture into networking. :)
 
I think most of us could learn more about DNS. I'm still trying to figure out how to have an address resolve to an internal IP. Like above, it makes sense why you want redundancy, since it sounds like you can't do anything when that server is down.

Would you be able to use NUCs to accomplish this? They may now have hardware passthrough.
 
Discussion starter · #10 ·
Quote:
Originally Posted by cones View Post

I think most of us could learn more about DNS. I'm still trying to figure out how to have an address resolve to an internal IP. Like above, it makes sense why you want redundancy, since it sounds like you can't do anything when that server is down.

Would you be able to use NUCs to accomplish this? They may now have hardware passthrough.
I looked at the NUCs, but there is a nice premium on them because they are tiny and mainstream. They aren't rack-mountable either, so that's a negative in my book. Oh, and only a single NIC.

The more I think about it, the more the Atom builds sound really appealing. With a name like Atom, part of me wants to think low performance, but that doesn't appear to be the case at all here. These specific Atom models I'm looking at are actually server-grade, and I bet we'll see more of these style builds in datacenters (especially for things like web hosting and public clouds).

To have an address resolve to an internal IP, you need your own internal DNS server(s) and an A record created for it. If you configure your computer to talk to your DNS server, it will look there first for name resolution. If it doesn't find a record, it will then use any configured forwarders (e.g., forwarding to Google's DNS -- 8.8.8.8 and 8.8.4.4) or fall back to the root hints. I don't use any forwarders personally, and my setup works great as long as my domain controllers stay up (I have 2 VMs, each running Active Directory Domain Services, Microsoft DNS, and Microsoft DHCP).
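If you want to see that lookup order in action, here's a toy sketch (assuming the dnspython package is installed; the server IPs and hostname are just placeholders, not my real setup) that asks an internal DNS server first and only falls back to a public resolver if the name isn't known internally -- roughly what a forwarder does for you on the server side:

```python
# Toy sketch of "internal DNS first, public resolver second" using dnspython.
# 192.168.1.10 stands in for an internal DNS server (e.g., a domain controller);
# host.home.lan is a made-up internal A record.
import dns.exception
import dns.resolver

def resolve_with_fallback(name, internal_dns="192.168.1.10", public_dns="8.8.8.8"):
    for server in (internal_dns, public_dns):
        resolver = dns.resolver.Resolver(configure=False)  # ignore the OS resolver config
        resolver.nameservers = [server]
        try:
            answer = resolver.resolve(name, "A")  # use .query() on dnspython < 2.0
            return [record.address for record in answer]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
                dns.resolver.NoNameservers, dns.exception.Timeout):
            continue  # this server doesn't know the name; try the next one
    return []

print(resolve_with_fallback("host.home.lan"))
```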
 
Discussion starter · #13 ·
Quote:
Originally Posted by TheBloodEagle View Post

I'm really quite new to all this, and a lot of it is going over my head, but would this be a great way to create a render farm?
Honestly, I have no idea. Depending on how you set up a render farm, I guess you could do it in OpenStack, assuming you have multiple virtual nodes that share the same job. I would think it would be better to use the physical hardware as render nodes instead of virtual nodes on top of a physical machine, though. I could be completely wrong.
 
Discussion starter · #14 ·
Well, just an update.

I've installed CentOS 6.5 in a VM I've named "CONTROLLER01" and now have Keystone (Identity Service), Glance (Image Service), and Horizon (web frontend) installed for OpenStack. I've also got CentOS 6.5 installed on that DE5100, named "COMPUTE01", with Nova (Compute -- the hypervisor side) configured. It took me maybe 30 minutes to get this far using the OpenStack documentation, which wasn't bad.

Oddly, the web frontend sees COMPUTE01 as having 3GB of RAM instead of the 4GB the computer shows. Not a huge deal at this time, but I'm liking this web frontend so far. Maybe compute nodes reserve 1GB for themselves or something, which is no big deal when I'm considering 32GB compute nodes.
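For anyone curious, something like this rough sketch (the controller URL, tenant, and credentials are placeholders, not my actual setup) should show what Nova reports for each compute node, using the Icehouse-era Keystone v2.0 and Nova v2 HTTP APIs with Python's requests library:

```python
# Rough sketch: list the hypervisors Nova knows about and the RAM it sees on each.
# controller01, the tenant name, and the credentials are placeholders.
import requests

KEYSTONE = "http://controller01:5000/v2.0"
NOVA = "http://controller01:8774/v2"

# 1. Get a token from Keystone (v2.0 password auth)
auth = {"auth": {"tenantName": "admin",
                 "passwordCredentials": {"username": "admin", "password": "secret"}}}
access = requests.post(KEYSTONE + "/tokens", json=auth).json()["access"]
token = access["token"]["id"]
tenant_id = access["token"]["tenant"]["id"]

# 2. Ask Nova for hypervisor details (requires an admin-role user)
headers = {"X-Auth-Token": token}
data = requests.get("%s/%s/os-hypervisors/detail" % (NOVA, tenant_id),
                    headers=headers).json()
for hv in data["hypervisors"]:
    print(hv["hypervisor_hostname"], hv["memory_mb"], "MB RAM seen by Nova")
```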

I'll get nova-network configured sometime this weekend, but likely won't play with any of the other modules just yet. There is so much going on that I'd rather focus on the primary modules first, lol. If all goes well, I definitely see myself selling my R610 and buying/building 2-3 of those 8-core Atom C2758 servers. I really think 3 would be the most I'd need (8 cores, 24 or 32GB of RAM each), and they would consume less power than my current R610 while providing more power. (The Atom C2758 is on par with the Xeon L5520, better in some benches and not as good in others -- I figure 3 C2758s should provide more CPU power than 2 L5520s, and consume much less power.)

I've got one test image running CirrOS, just to see what it's like.



 
Discussion starter · #16 ·
Yeah, it's been awhile.

I've decided to move forward with the Avoton Atom-based servers, but money is the limiting factor right now.

I've got one VM built as a controller, and am using that AOpen PC as a KVM node. I've probably reimaged everything 3 times now, just learning the different pieces and the best way to install everything cleanly before I get new hardware.

It'll happen, but will probably be a bit longer...unfortunately.
 
I'm keen to know more about this 8-core Atom you speak of! I need to replace the pfSense box we currently have in the office, and something as low-power as possible would be perfect.

I have thought about using OpenStack in a production environment I look after as a replacement for VMware. However, it sort of looks like OpenStack is best used with a distributed storage system like Ceph, as opposed to a traditional iSCSI SAN like we currently use. Also, the virtualised networking is an awesome feature, but I need to get used to that, since the environment in question would need to be modified to have our public address range presented to the OpenStack servers directly rather than NAT'ed behind a pfSense cluster.

With regards to choosing a hypervisor, I would suggest KVM. I have been trying it out and it seems just as efficient as ESXi or Hyper-V, for that matter.

In fact, you might want to go take a look at Proxmox!
 
Discussion starter · #18 ·
Quote:
Originally Posted by The_Rocker View Post

I'm keen to know more about this 8-core Atom you speak of! I need to replace the pfSense box we currently have in the office, and something as low-power as possible would be perfect.

I have thought about using OpenStack in a production environment I look after as a replacement for VMware. However, it sort of looks like OpenStack is best used with a distributed storage system like Ceph, as opposed to a traditional iSCSI SAN like we currently use. Also, the virtualised networking is an awesome feature, but I need to get used to that, since the environment in question would need to be modified to have our public address range presented to the OpenStack servers directly rather than NAT'ed behind a pfSense cluster.

With regards to choosing a hypervisor, I would suggest KVM. I have been trying it out and it seems just as efficient as ESXi or Hyper-V, for that matter.

In fact, you might want to go take a look at Proxmox!
Thanks for the feedback. If I do decide to move my home network to OpenStack (which I really want to, if I can allocate the funds to do it), I would be using either Hyper-V or KVM for my nodes, as they are both supported by OpenStack. I know Hyper-V inside and out, and I have licenses for it. I also know ESXi, but I don't want to use the free version and I have no way to get personal licenses. KVM is basically the open source leader (when ignoring Xen...which I hate with a passion).

The Atoms look pretty interesting, and the benchmarks show them being better than the Xeon L5520 in some benches and as good in others (the L5520 is what my current Hyper-V box is based on). From my research, I can have comparable resources while consuming less power...which is great if I want to get 3 or so virtualization nodes (which was always my original plan). 3 of these Avoton Atoms just sound so much better than 3 R610s, at least in a home environment.

Also, you can use a traditional SAN (iSCSI or FC) with OpenStack. OpenStack has 2 storage modules, Cinder and Swift. Cinder is a block-level provider (iSCSI/FC), while Swift is an object storage provider (whole objects over HTTP, more like S3 than NFS). From my understanding, you can use whatever backend storage you want and serve it out as block devices (with Cinder) or as objects (with Swift). I plan on using Cinder if I go with Hyper-V, but I may use Swift (or Cinder and Swift) if I choose KVM.
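Here's a rough sketch of the difference, hitting the default-port endpoints with Python's requests library (the token, tenant ID, and controller name are placeholders -- this isn't a drop-in script):

```python
# Rough sketch: Cinder hands out block devices, Swift stores whole objects over HTTP.
# TOKEN/TENANT/controller01 are placeholders; get a real token from Keystone first.
import requests

TOKEN = "replace-with-keystone-token"
TENANT = "replace-with-tenant-id"
HEADERS = {"X-Auth-Token": TOKEN}

# Cinder (block): request a raw 10GB volume; once attached to an instance it shows
# up inside the guest as a block device (e.g., /dev/vdb) that you format yourself.
requests.post("http://controller01:8776/v2/%s/volumes" % TENANT,
              json={"volume": {"size": 10, "name": "vm-data"}}, headers=HEADERS)

# Swift (object): no LUN or filesystem at all -- you PUT and GET whole objects
# into containers over HTTP.
requests.put("http://controller01:8080/v1/AUTH_%s/backups" % TENANT, headers=HEADERS)
requests.put("http://controller01:8080/v1/AUTH_%s/backups/notes.txt" % TENANT,
             headers=HEADERS, data=b"hello from swift")
```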

I still have plenty of research to do, but need to get my hands on one of those Avoton Atom servers first (~$700, or so). :(
 
Discussion starter · #19 ·
Thought I'd share this:



So far it seems like SuperMicro is the primary market player in these things, at least in combo motherboards and barebones. I'm trying to decide the best path for me, and it seems that building one from a combo motherboard and SM chassis will save 10-12%, or so. Only problem is that I'm not sure if the SM 502L-200B chassis (~$75) will work, as it seems the I/O ports are off (based on images from NewEgg). I've read some things elsewhere that this chassis should work, so I've sent an email off to SM.

I sold a few tech toys locally today, so I have some money freed up to purchase an Avoton build to play with without decommissioning my current R610 and its workloads. I have a good lead on the SYS-5018A-MHN4 barebones server (C2758 CPU, 4 x 240-pin DIMM slots, 4 NICs), unless the A1SRM-2758F-O motherboard fits in the 502L-200B chassis, in which case I'll be going with that. I'll be spending ~$400-450 for motherboard/CPU/chassis/PSU, and will likely pick up 2 16GB ECC DIMMs from eBay (they can be had for ~$125-140/ea). That should do me great, and I can't wait to hear back from SM so I can make a purchase.

I plan to start out with Server 2012 R2 and Hyper-V, just to get a feel for the hardware and see what performance is like. From there I will probably move to CentOS 6.5 or 7 and build an OpenStack node with local storage (the 512GB SSD that's in my R610), using that AOpen mini PC as the controller.

I should be able to sell my R610 locally for $750-800, which will recoup my money from this purchase, plus $50-100. :)


Can't wait!
 
Discussion starter · #20 ·
And just when I think I am on to something, I'm having a hard time justifying the additional cost of the Avoton based builds.

A1SRM-2758F-O motherboard/CPU combo --- $337.79
Chassis --- $100.00
32GB RAM --- $250.00
Total --- $687.79

Dell R610 (dual L5520, 24GB RAM) --- $330.00

Cost difference --- $357.79

I could buy 2 more R610s for less than the cost of 1 of these Avoton builds!

The whole point of looking at these Avoton builds is electricity savings, so I decided to actually calculate that out. Where I live, I pay 7.952 cents per kWh (kilowatt-hour).
My current R610 pulls about 110W (according to the front LCD panel of the server).
110 X 24 / 1000 = 2.64kWh = 20.99328 cents per day

Electricity cost to run 1 Dell R610 in my current configuration:
$0.2099328 / day
$6.297984 / month

I don't have any definitive facts on the Avoton C2758 power draw, but seeing how it's a 20W TDP CPU with a 200W PSU, I will guess 40W power draw.
40 x 24 / 1000 = 0.96kWh = 7.63392 cents per day

Electricity cost to run 1 Avoton C2758 build:
$0.0763392 / day
$2.290176 / month

So in one month, I would save ~$4.008 by using the Avoton build versus my R610. At a cost difference of $357.79 (Avoton build vs R610), it would take about 89 months (roughly 7.4 years) before I could say that I've saved money.

(Aiming extremely low) If the Avoton C2758 build only consumed 20W:
20 x 24 / 1000 = 0.48kWh = 3.81696 cents per day

Electricity cost to run 1 Avoton C2758 build at 20W:
$0.0381696 / day
$1.145088 / month

In one month I'd save $5.152896 versus my R610. That would take about 69 months (or 5.75 years) before I could say that I've saved money by going with the Avoton build.

Now these numbers only apply if I were to run 1 of the debated servers. Since the per-node costs scale the same way, the monthly savings go up with more nodes, but so does the initial investment I have to pay off. For example...

Let's say I run 3 of each (either 3 R610s, or 3 Avoton builds):

R610
110 X 24 / 1000 = 2.64kWh = 20.99328 cents per day per server, x 3 = 62.97984 cents per day for 3 servers
$0.6297984 / day
$18.893952 / month

C2758 @ 40W
40 x 24 / 1000 = 0.96kWh = 7.63392 cents per day per server, x3 = 22.90176 cents per day for 3 servers
$0.2290176 / day
$6.870528 / month

In one month I'd save $12.023424 using 3 Avoton C2758 builds versus 3 R610s. $357.79 is the cost difference per Avoton server versus the R610, so that's $1,073.37 more invested for all 3. At a savings of $12.023424/month, that would still take about 89 months (roughly 7.4 years) to break even.
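For anyone who wants to play with the numbers, here's the same break-even math in a few lines of Python (same rate, wattage, and price assumptions as above):

```python
# Break-even calculator for the Avoton-vs-R610 comparison above.
# Assumptions match the post: $0.07952/kWh, 110W per R610, 24/7 runtime,
# 30-day months, and a $357.79 price premium per Avoton node.
RATE_PER_KWH = 0.07952
PREMIUM_PER_NODE = 357.79

def monthly_cost(watts, days=30):
    """Electricity cost in dollars for one box running 24/7."""
    return watts * 24 / 1000.0 * RATE_PER_KWH * days

def months_to_break_even(avoton_watts, r610_watts=110, nodes=1):
    monthly_saving = (monthly_cost(r610_watts) - monthly_cost(avoton_watts)) * nodes
    return PREMIUM_PER_NODE * nodes / monthly_saving

print(monthly_cost(110))                  # ~$6.30/month per R610
print(monthly_cost(40))                   # ~$2.29/month per Avoton at 40W
print(months_to_break_even(40))           # ~89 months at 40W
print(months_to_break_even(20))           # ~69 months at 20W
print(months_to_break_even(40, nodes=3))  # still ~89 months with 3 of each
```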

I suspect I will replace this hardware before 7 years, so the cost savings just aren't there for me anymore with the Avoton builds. If I could get a C2758 with 24GB of RAM for $300, I'd definitely go for it. But at the current costs, no thanks.

On another note, I'm currently in negotiations to purchase 2 Dell R610s (dual Xeon L5520s, 24GB RAM, SAS 6ir, etc) just like the one I already have. I'm hoping I can get them both shipped for ~$650. If I can, I will be selling one to a friend locally, but the other will likely be used as an OpenStack Compute node so I can really start some testing! I'm excited that things are possibly finally moving on this project.
 