
Home Super Server Project and work log

post #1 of 22
Thread Starter 
I have been reading this forum for many years now. I have learned much of what I know about overclocking and cooling at this very site, so I thought it was about time I contributed something myself.

I will be building a home server to facilitate home schooling (for four kids) and to automatically manage a solar array and battery backup system. In addition, this machine will be expected to monitor energy usage and power on generators when needed. Furthermore, I will be asking it to serve 5 simultaneous users so we can field a full party in WoW. Lastly, I need simple, raw POWER: I would like every user to manage at least the high graphics preset even with all 5 on at once!

If this sounds like a challenge feel free to step in with any suggestions.

I have researched and it looks like a Linux OS with some bells and whistles will do the trick, although it will likely require some modification to the kernel.

I have attempted to do this before, but the money wasn't there and the project died. This time I am working with a budget of about 15k USD. I have all my ducks in a row financially for this project, but I am in NO hurry. I want to do this right.



So this project will have two parts.

1. Hardware selection and build.

2. OS selection and config.

I have done a good bit of reading lately and it's looking like a dual or possibly even quad socket board will be necessary.
I am looking at a quad socket configuration running AMD G34 chips. This would be much cheaper than any dual socket Xeon board, but if I need to pay more I will; if this machine can't perform, then the money I save is still just wasted. That being said, I am looking for more information on the performance of quad AMDs vs. dual Xeons.

Long story short, I need a monster and money, while certainly an issue, is second to performance here.

That's all for now. BTW be gentle with the comments lol, I haven't done anything near this scale, but I feel up to the job and I am very excited to learn!
Edited by NonOtherThenI - 3/31/14 at 9:15am
post #2 of 22
Subbed :D
Alienware 15
(11 items)
CPU: i7-4710HQ | Graphics: GTX 970M 3GB | RAM: 16GB | Hard Drive: 256GB M.2 SSD
Hard Drive: 250GB M.2 SSD | Hard Drive: 1TB HDD | OS: Fedora 23 | OS: Windows 10
Monitor: 4K IPS | Keyboard: Corsair K95 | Mouse: Corsair K65
post #3 of 22
Interesting project. I would think it'd be best to use separate computers for WoW, but using one sounds more fun to do :D Does it run natively on Linux?
My System
(21 items)
CPU: AMD 8320 | Motherboard: Asus M5A99FX Pro | Graphics: EVGA 660 Ti | RAM: G.Skill 8GB F3-1600C9-8GXM x2
RAM: 4GB x2 | Hard Drive: OCZ Agility 3 | Hard Drive: Samsung 840 EVO | Hard Drive: Western Digital Caviar Blue
Hard Drive: Seagate 500GB | Optical Drive: Asus DRW-24B1ST | Optical Drive: Asus BC-12B1ST | Cooling: Cooler Master Hyper 212 EVO
OS: Windows 10 Pro x64 | OS: Arch | Monitor: Asus 23" VH238 | Monitor: Asus 23" VH238H
Power: Corsair CX600M | Case: Fractal Design Define R5

Server/HTPC
(11 items)
CPU: i3 6100 | Motherboard: Asus Z170M-Plus | RAM: something 16GB DDR4 | Hard Drive: Western Digital 500GB
Hard Drive: Samsung 2TB | Hard Drive: Western Digital Red 3TB | Hard Drive: HGST Deskstar 4TB | OS: Unraid 6.x
OS: Ubuntu Server - VM | Power: Corsair CX430 | Case: Cooler Master HAF 912
post #4 of 22
Cisco UCS with 1TB of RAM should do it.
    
CPU: Intel Overdrive 486DX4 100MHz | Motherboard: Digital Venturis 466 | Graphics: S3 Trio 32 1MB | RAM: 68MB 72-pin SIMMs
Hard Drive: 1.2GB & 270MB | Optical Drive: 32X CD-ROM | OS: Windows 98 SE | Monitor: LG 23" Flatron
Keyboard: Microsoft PS/2 | Power: Lite-On | Case: Digital Venturis 466 | Mouse: Logitech PS/2
    
post #5 of 22
Sounds like a fun project.

This article might help you out with deciding between Intel and AMD.

http://www.cio.com/article/728094/How_to_Pick_a_CPU_When_Buying_Servers
post #6 of 22
For the WoW stuff, I'm assuming you are referring to hosting the server and having 5 separate PCs connect to your private server. If so, a WoW server runs fine on Linux; in fact, it's probably better than Windows.

Now if you want the server to do your GPU work as well for your WoW clients, you will need something like RemoteFX or the comparable offerings from Citrix or VMware. Using Hyper-V on a physical server with a pair of serious GPUs would allow you to play WoW via RDP on some thin clients, allowing you to put more money into your server instead of your clients. What you will find, though, is that the licensing costs of doing this are going to blow that 15K budget out of the water.

Without knowing what software you plan to use to manage your solar array and energy monitoring, I can't specifically speak to that. However, I seriously doubt much power is needed for that. Heck, having a really nice meteorology station at home, logging to a MySQL database, and running a custom PHP-based analytics website using rrdgraph would only take 1GB of RAM or less, and minimal CPU cycles. Point is, what you're looking to do won't actually require THAT much raw CPU/RAM.
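Just to illustrate how lightweight that kind of logging is, here's a rough sketch of the sort of logger I mean. I'm using Python with the mysql-connector-python package purely for illustration; the table, credentials, and read_watts() function are made-up placeholders, so swap in whatever your meter or inverter actually exposes:

```python
#!/usr/bin/env python3
# Rough sketch: poll a power meter once a minute and stash the reading in MySQL.
# Assumes the mysql-connector-python package and a table you create yourself, e.g.:
#   CREATE TABLE energy_log (ts DATETIME, watts FLOAT);
import time
import random
import mysql.connector

def read_watts():
    # Placeholder -- replace with your real meter/inverter query (serial, Modbus, HTTP, etc.)
    return 450.0 + random.uniform(-50, 50)

conn = mysql.connector.connect(host="localhost", user="energy",
                               password="changeme", database="home")
cur = conn.cursor()

while True:
    cur.execute("INSERT INTO energy_log (ts, watts) VALUES (NOW(), %s)",
                (read_watts(),))
    conn.commit()
    time.sleep(60)  # one sample per minute is plenty for graphing
```

Something like that, plus rrdtool/PHP graphs on top, is basically the entire "energy management" workload as far as CPU and RAM go.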

I also have no idea what you mean by:
Quote:
Originally Posted by NonOtherThenI View Post

Lastly I need simple-raw-POWER, as I would like all users to manage at least a high preset even with all 5 users on at once!

I definitely think you should use virtualization instead of slapping this all on one physical server. Separation of roles will allow you to perform maintenance without interrupting other services. If you plan to use this server for your kids' education, I would consider it quasi-mission-critical. With that in mind, I think you will need more than one server, and you should also focus on other considerations: redundant power, such as redundant PSUs and UPSs for your hardware (I see you mention generators, but I'd still HIGHLY recommend a UPS on your servers to condition the power, prevent surges, and ride out brown-outs while the gennies are starting); redundant network (two switches will likely suffice); and redundant storage.

Hardware-wise, definitely doable with $15k. Software costs are what will get you, unless you stick with open source.

Regardless of the workload I was planning to do, I always virtualize what I can. There are instances where virtualization isn't a good idea, but I don't see why you couldn't for your needs. Grab 3 used OEM servers from eBay (Dell C1100, HP DL160 G6, Dell R610, etc) -- something with dual Xeon L5500+ CPU (such as L5520, L5639, etc) and 48-72GB of RAM. I know it's easy to load up on hardware, but realistically you won't need 3 servers with 72GB of RAM for your workload. You will definitely hit storage I/O issues before using 200GB+ of RAM. Expect to spend about $500 per server, so that's $1,500 for your main hardware.

Get some used switches, like the Dell PowerConnect 5448, which are about $150-200 each. Get two. Add a quad gigabit PCIe NIC to each server, giving you a total of 6-8 gigabit NICs per server. Assuming you went with the Dell R610 (my personal choice now, instead of the C1100), you would have 8 NICs. Use 2 NICs in LACP and 6 NICs in MPIO on a storage network. Two ports on each switch should LACP to each other (linking the switches together), while splitting the server NICs between the switches: of the 2 LACP NICs on each server, one should go to each switch on your LAN VLAN, and the remaining 6 NICs should land 3 on each switch, on your Storage VLAN.

For storage, I say get something like a Dell C2100: 12 3.5" drive bays, and they typically come with 2x Xeon L5630, 24GB of RAM, and a Dell PERC H700 for around $700. Grab 4 512GB SSDs (~$350 each) and 8 3TB drives (Toshiba DT01ACA300 would be my choice -- $100 each) and build two separate RAID10 arrays. You would have ~1TB of super-fast, redundant SSD storage for your VMs (more than enough for VMs) and ~12TB of pretty fast storage on spindle (~650 IOPS). Use the spindle array for file storage, and even for VMs that don't require high I/O, even though you will have plenty of SSD storage. Add two quad gigabit NIC cards to the C2100, set up 4 NICs in LACP for your LAN VLAN (used for management and file traffic), and use the remaining 6 for your storage VLAN. Again, split these NICs across cards and switches for redundancy. This is a full, complete storage system for around $3000. That may seem like a lot, but it includes 1TB of redundant SSD storage, which is more than you should need.
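If you're wondering where those capacity and IOPS figures come from, here's the back-of-the-envelope math. The per-drive IOPS values below are the usual rules of thumb, not benchmarks I ran:

```python
# Rough RAID10 sizing math for the C2100 layout above.
# Per-drive IOPS values are rule-of-thumb estimates, not measured numbers.

def raid10(drives, size_tb, iops_per_drive):
    usable_tb = drives * size_tb / 2          # RAID10 mirrors everything, so half the raw space
    read_iops = drives * iops_per_drive       # reads can be serviced by every spindle
    write_iops = drives * iops_per_drive / 2  # each write hits both halves of a mirror
    return usable_tb, read_iops, write_iops

ssd = raid10(drives=4, size_tb=0.512, iops_per_drive=40000)
hdd = raid10(drives=8, size_tb=3.0, iops_per_drive=160)

print("SSD array: ~%.1f TB usable" % ssd[0])
print("HDD array: ~%.0f TB usable, ~%.0f read / ~%.0f write IOPS" % hdd)
```

The write-side number on the spindle array is where the roughly 650 IOPS figure comes from; reads will be noticeably higher.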

For your gateway, I'd recommend a Ubiquiti EdgeRouter PoE, which will give you 5 ports: 1 for your WAN, 2 ports in LACP for your LAN (1 to each switch), optionally leaving 2 ports which could be used for wireless LAN and/or DMZ networks (think guests/visitors who you don't want on your LAN but do want to share your internet with). These are about $200. Maybe even consider buying a second as a backup?

For a UPS, take your pick of APC, Dell, or any other out there. I'd recommend getting a pair of 1500-2000VA units, which should be 2U rack mount and around $300 each. Since your servers have 2 PSUs, plug one PSU into each UPS. Redundancy is key.

So yeah, hardware wise:
$1500 -- Workload servers
$3000 -- Storage server with drives
$400 ---- Switches
$200 ---- Gateway
$600 ---- UPS
$5700 -- TOTAL

That should give you some seriously redundant hardware to allow you to do about anything, and plenty of money to play with if you want to expand. Need more file storage? Get another C2100 with 1 quad gigabit NIC card with 12 3TB drives in RAID10 for ~18TB ADDITIONAL storage, for around $2000. Sure, you could build something cheaper but this comes with dual PSUs, hardware that is known to be compatible, and a great hardware RAID controller.

Want to redo your wireless setup? Get 2-3 Ubiquiti UniFi APs (stick with the base model, or get the newer square one if you want 802.11AC). Base model runs around $70 for each radio, while the AC models run for around $300, I think.

(Your profile says you're in GA.) Check Craigslist near Atlanta, find a server rack/cabinet for $300 or less, and rackmount all this gear in your basement/garage. For shiggles, get a second ISP connection (say cable broadband as your primary connection and cheap, slow ADSL as a backup) and configure both on that EdgeRouter. You would have a redundant ISP connection, power, network, workload servers, and storage.

This is basically what I would do if I had $15k to spend, or even a third of that.

I just now saw the bottom part of your first post. I would highly recommend dual socket boards instead of quad socket boards. The only reason one should ever go for quad socket boards is if space is a serious issue (rack space typically) or you are wanting a single box to maximize folding on. I wouldn't recommend building a home server for all this either, since you will spend way more than buying Dell R610s. If you're concerned with buying used OEM servers, just grab some spare parts (spare Xeon L5520 is about $50, R610 PSU is about $75, about $25 for a single 4GB DIMM ECC Registered and $50 for a single 8GB DIMM).

Oh, just realized I forgot OS drives for your servers. I'd recommend either Server 2012 R2 with Hyper-V, or ESX. With either hypervisor, I'd recommend a pair of 73GB 10K SAS drives in RAID1. These drives are usually about $30 each, so about $200 for 8 of them? You may even find R610s with 73GB drives already in them.

OS wise for your workloads, Linux when you can and Windows when you can't. I seriously doubt you will need to do any custom kernel work, especially if your energy management & solar array software is pre-built and provided by a vendor instead of custom built. Even if it was custom built, I don't know why custom kernels would be needed.

So what to do with the remaining $9k in your budget? Build 5 new computers for you and your users/kids, if needed. i5-4670k, 16GB of RAM, 500W PSU, and GTX 770 or Radeon R9 280X should run about $900 each. Add in another $250 each if you need to buy new monitors, keyboards, and mice. So let's see...you have 5 new awesome gaming rigs, a super awesome home infrastructure that will rival many medium sized businesses with 24 cores/48 threads and ~200GB of RAM for virtualized servers (CPUs go much further than you probably realize, and you will likely only average about 5% usage at the most, if that), ~13TB of great performing and highly redundant storage, and still have around $4,000 left over.

Ship the kids off to their grandparents and take the wife out for a weekend get-away?

:)
Edited by tycoonbob - 4/1/14 at 5:57am
post #7 of 22
Thread Starter 
Quote:
Originally Posted by LordOfTots View Post

Subbed :D
Thank you for your interest. I'm sure this will be an educational experience for us all :)
Quote:
Originally Posted by cones View Post

Interesting project. I would think it'd be best to use separate computers for WoW, but using one sounds more fun to do :D Does it run natively on Linux?
It does not run natively as far as I know, but VMs can do anything with enough power. I'm planning on using emulation to fill all the compatibility gaps as well. Like I said, it will take a LOT of power, but the end result will be worth it in terms of user experience and even efficiency if I can get it scaling properly while maintaining stability.


Quote:
Originally Posted by Otitis View Post

Sounds like a fun project.

This article might help you out with deciding between Intel and AMD.

http://www.cio.com/article/728094/How_to_Pick_a_CPU_When_Buying_Servers

Thanks for that, I read it last night. I would have responded then, but I use a gaming keyboard for typing and my wife was asleep lol. She gets pretty pissed if I do any typing at night; says it sounds like a machine gun firing away :) The article made some interesting points. I am curious to see some raw data regarding the internal and external I/O capabilities of both parties in a multitude of real-world tasks. One concern I have right off the bat is I/O agility. Throughput is important too, but I am more concerned with the talk time between QPI links and the ACTUAL amount of bandwidth available to the PCIe bus. I see different data from different people. It looks for now, however, like a dual socket board equipped with some high core count processors is going to allow more real-world bandwidth for cloud-type gaming situations. I say this because gaming will depend heavily on my GPU being able to utilize the bus in concert with the right CPU. This becomes a problem during high-density user sessions, where I want to have as many as 5 (at least) people running high presets at the same time. What happens when both CPUs are saturated and a job gets sent to the PCIe bus but is not yet present in the processor's cache?

Which brings me around to this guy:)
Quote:
Originally Posted by tycoonbob View Post

*snip*

Shoo! Where to begin?

First of all, thank you so much for your input; it was well constructed and very helpful.

Okay, as you can see I too was concerned about quad socket boards and their inherent limitations for my purpose. As to the idea of building all the kids a rig of their own, I am trying to eliminate that lol. With four kids, all homeschooling and all showing an (avid) interest in what daddy does, I am already up to my ears in machines. There are issues with the ages right now, so all of these rigs, monitors, and the like are piled up in my room. I thought it was neat living in a giant computer lab when I was younger, but now it's just getting silly :) Plus the house is approaching 200 years of age and is about to be remodeled extensively. I plan to rebuild the bathroom and the future server room first. Once the machine, or possibly, thanks to your input, machines, are in place, I will begin getting Nexus 10s tied to consoles for everyone. This will allow me to get rid of about 4 rigs, and for a time their respective monitors. That frees up a lot of space and gets breakables out of the equation. When the remodel is complete, or close, I will bring the 65" display as well as old ***** puddin here back into the house. Then I will generate and tie a console to my current rig, the one in my sig. She has been redubbed "b i tch puddin" due to her finicky nature and apparent refusal to operate properly for anyone but me :)

Deep breath

As to the idea of specializing with multiple servers, I am open to it. The concern leading me to attempt a one-machine solution was power consumption and scalability. I already have a couple of UPS units, so I have two covered. I plan to get two more and use a series of redundant server power supplies requiring a max of 8 receptacles when fully populated. This should allow me to run two supplies on the battery side of each UPS. I am just guessing, but I believe that would be enough, even during peak usage, to switch over to battery or generator backup; in fact it should allow for at least 4 minutes of hold-up time. I'll be going off the grid entirely at some point, which is why one machine able to efficiently scale with user demand is ideal for me. I'll be generating my power via a combination of solar arrays and a specialty wood-powered engine of my own design. It burns mostly bricks made of leaves and wax and is super cool, but more on that another time :)
To make a long story short, we want to keep power consumption as low as possible, only using what we need when we need it. Ahem, and tablets can be discreetly charged at an in-law's house ;) If ya know what I mean?
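For anyone curious, here is the rough math behind that hold-up guess. Every number below is an assumption I plugged in for illustration, not a measurement from my actual gear:

```python
# Very rough UPS hold-up estimate -- every number here is an assumption, not a measurement.
battery_wh = 12 * 9 * 2        # e.g. two 12V 9Ah batteries in one UPS -> 216 Wh stored
inverter_efficiency = 0.85     # typical efficiency while running on battery
usable_depth = 0.5             # don't plan on draining lead-acid much past ~50%
load_watts = 800               # assumed peak draw hanging off that one UPS

runtime_min = battery_wh * inverter_efficiency * usable_depth / load_watts * 60
print("Estimated hold-up: %.1f minutes" % runtime_min)  # ~6.9 minutes with these numbers
```

So even with fairly pessimistic assumptions there should be more than the 4 minutes I need to get the generator online.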

Lastly, I have not listed all the applications of this monster. Let's just say I am a free-thinking American who does NOT appreciate being spied on!

Because of this, I need full global encryption and a few other neat features that I won't get into.

Side note: We recently went completely off-grid for 5 weeks. You would not believe the harassment. We got our own helicopter escort from dusk till dawn every night for three weeks. When we did try to reconnect to the grid, we were met with resistance. In fact we only meant to disconnect for 2 weeks; the rest of that time was spent fighting the power company over any reason they could think of. We were talked to like dirt, literally told we were crazy, and counseled on why no one with a family could ever live without the system.

.....

The result is that I am now more certain than ever of breaking away completely.

I doubt I addressed everything you mentioned as I am short on time at the moment. I need to get back to teaching my daughter, sooooooo.

To sum up, I need maximum scalability, stability, and longevity, all wrapped up in one unified solution. As for the OS, I am going Linux, possibly Ubuntu (since they have a lot of documentation on tablets/phones), so I am trying to keep it open source. I am not a super big fan of Microsoft, and my kids seem keen to learn, so why not raise them in a Linux environment? Especially since they have desktops and everything on Linux now anyway. I should probably have mentioned this, but I am a programmer (although not a very good one) and I am comfortable with learning whatever scripting is required, which I'm guessing will be a lot.

I am liking the idea of splitting the workloads with virtualization as you mentioned. Talk to me about how well this sort of setup scales down when idle, if you can. The price point and resulting power you mentioned sound much better than anything I had hoped for.

Crap, I am out of time. I'm sure I missed something, but I gotta go; kids gotta learn. Again, thank you so much for your input, there is a reason I've been lurking these forums for over 8 years :)

I will post a sample build later. It's just something I threw together on the Egg to give us something to kick around. This will be a very dynamic project and is likely to evolve a lot as it progresses. I am open to any suggestions, and nothing is set in stone till I put a tool on it!

God bless!
post #8 of 22
Thread Starter 
Oh, and my family and I currently play on Molten's Frostwolf server. Dude, I forgot about server emulation, which does run excellently on Linux :) I'll probably stick with Molten though, as I like to PvP, and of course multibox (PvE only).

PS: Five Frost Mages using Mirror Image at once, with elementals out and Icy Veins up, will end you. It will end you, and all of your friends, and there is nothing you can do to stop it. So just lemme FARM, DUDE! lol

Now I seriously gotta go.
post #9 of 22
While I'm not going to respond to each part of your reply, I will say a few things.

If you want scalability and reliability, a single machine is about 20 steps backwards from that. The setup I gave you is not only a minimum, but an optimal setup for maximum uptime.

The great thing about those Xeon L5520s is that they are low-powered, with a low TDP. My R610 with dual L5520s, 48GB of RAM, and 4 HDDs is only pulling about 130W. You read that right. Three servers like that, with a separate storage box, 2 switches, and the appropriate network equipment, will be under 1000W. Running all of that off solar and gennies will be tough, no doubt, but the key here is conditioning your power. You will probably want to make sure your UPSs can produce sine-wave output, to ensure you don't surge anything and destroy it.

Playing WoW, or any computer game, from an Android tablet is not ideal. To my knowledge, there are no open source solutions to offload GPU workload for applications (games, in your case). Microsoft has RemoteFX, Citrix has HDX 3D, and VMware has...I honestly don't know what it's called. These are all remote protocols created for offloading graphics-intensive applications, but not specifically designed for games. On the other hand, "multiseat" type software is probably what you would want. It would allow multiple displays, keyboards, and mice to work from one server, locally. SoftXpand is the software that comes to mind, but I've no experience with it. This kind of technology is great, but I just don't think it's in the open source world right now, or at least nothing that's reliable. The best performance for the money for 5 users is going to be mITX gaming PCs; it just is. Now if you had something like 20 users, then something like RemoteFX is probably what you'd want to aim for, but splitting 20 users with 512MB of GPU RAM each means 10GB worth of GPUs. A typical GPU has what, 3GB of RAM? You would need at least 4 GPUs, which are probably $300-400 each for consumer-level gear.

3x Dell R610s, each with a FirePro or Quadro card, using Microsoft RemoteFX or Citrix HDX 3D, would probably be enough to get you playing WoW, but you will still need thin clients or zero clients, which still require a keyboard/mouse/monitor. Why not just build mITX gaming rigs?

Just my two cents. I've had this conversation several times with many different people, and they all turn out the same way.
post #10 of 22
subbed
Media Black Box
(14 items)
CPU: Intel 4570 | Motherboard: Asus H87 | Graphics: MSI GTX 970 Gaming | RAM: G.Skill
Hard Drive: Crucial | Hard Drive: WD Red | Cooling: CM Hyper 212 | OS: Windows 8.1
Monitor: 46 inch Vizio | Keyboard: Logitech K400 | Power: Rosewill Capstone 550 | Case: Node 304 Black
Audio: 5.1 surround receiver | Other: Xbox 360 controller