
OCN servers. - Page 4

post #31 of 35
Quote:
Originally Posted by parityboy View Post

@plan9
This is true, but with a multi-cored system, you'll get better utilisation of the cores by using virtualisation rather than bare metal. If you simply have two instances of httpd running (one on each physical server) and one crashes, you've only got one left.
If, through virtualisation, you have six running on each node and one or two crash, you still have 10 instances running that the load balancer can send requests to. Yes, if a box dies that's six instances gone, but you still have six running.
If you go the bare-metal route, you'll have to buy 12 boxes to achieve the same level of redundancy. That's 12 times the CapEx and 12 times the OpEx.
EDIT:
One thing I forgot to add is the cloud. Virtualisation enables the environment to be cloud-hosted, so that more virtual instances on more physical servers can be spun up automatically to cope with load spikes, such as when nVidia or AMD lift the NDA on a new graphics card...

I agree with the increased cost requirements, but it is silly to run multiple VMs hosting *the same content* when the VMs are on the same physical machine. It is more efficient to let the VMs use more resources and spread them across more physical machines. Why? Because no one uses 2-3 physical machines with 1 VM each in high-availability scenarios (web hosting). You should always operate so that your normal load stays under 60% of capacity. Why? So that when those failures happen, they do not impact anything.
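To put rough numbers on that 60% rule, here's a quick back-of-the-envelope check (Python; all figures are made up for illustration):
Code:
# Can the surviving nodes absorb the load of a failed one?
# Node count, capacity and the 60% target are illustrative assumptions.
nodes = 3
per_node_capacity = 100.0      # arbitrary units, e.g. requests/sec
target_utilisation = 0.60      # keep normal load under ~60%

normal_load = nodes * per_node_capacity * target_utilisation
surviving_capacity = (nodes - 1) * per_node_capacity
print(f"load {normal_load:.0f} vs post-failure capacity {surviving_capacity:.0f}")
# 180 vs 200 -> one node can die and the remaining two still cope.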
post #32 of 35
Quote:
Originally Posted by parityboy View Post

@plan9
This is true, but with a multi-cored system, you'll get better utilisation of the cores by using virtualisation rather than bare metal.
Why would you?
Not disputing what you're saying per se, but I'd love to see an explanation of why, because if it's true it would be very relevant to some of the work I'm currently managing.
Quote:
Originally Posted by parityboy View Post

If you simply have two instances of httpd running (one on each physical server) and one crashes, you've only got one left.
That doesn't happen though. If an httpd process dies it automatically gets re-spawned, regardless of whether that's part of a managed life cycle or a segfault.
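For anyone who hasn't seen it: the parent httpd keeps its pool of workers topped up, re-forking any child that exits. A toy supervisor loop shows the pattern (a Python sketch of the general idea, not Apache's actual code):
Code:
import subprocess, time

POOL_SIZE = 4
CMD = ["sleep", "60"]   # stand-in for a worker process

# Keep a fixed pool alive; any worker that exits (cleanly or from a
# crash) is immediately replaced.
workers = [subprocess.Popen(CMD) for _ in range(POOL_SIZE)]
while True:
    for i, w in enumerate(workers):
        if w.poll() is not None:                # worker has exited
            workers[i] = subprocess.Popen(CMD)  # re-spawn replacement
    time.sleep(1)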
Quote:
Originally Posted by parityboy View Post

If, through virtualisation, you have six running on each node and one or two crash, you still have 10 instances running that the load balancer can send requests to. Yes, if a box dies that's six instances gone, but you still have six running.
But six instances each running on less than 1/6th of the resources (after the OS overhead of each VM is taken into account) add up to less than the 100% you get when running on bare metal.
In this instance, less is more.
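Rough arithmetic to illustrate (the 5% per-VM overhead figure is a made-up assumption):
Code:
# Six VMs each paying its own OS/hypervisor tax leaves the web servers
# with only a fraction of the hardware.
vms = 6
overhead_per_vm = 0.05                # assumed guest OS overhead
usable = 1.0 - vms * overhead_per_vm
print(f"usable fraction: {usable:.0%}")   # 70%, vs ~100% on bare metal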
Quote:
Originally Posted by parityboy View Post

If you go the bare-metal route, you'll have to buy 12 boxes to achieve the same level of redundancy. That's 12 times the CapEx and 12 times the OpEx.
But you don't have more redundancy. httpd crashes don't happen and hardware faults would affect your resources exactly the same regardless of virtualisation.
Quote:
Originally Posted by parityboy View Post

One thing I forgot to add is the cloud. Virtualisation enables the environment to be cloud-hosted, so that more virtual instances on more physical servers can be spun up automatically to cope with load spikes, such as when nVidia or AMD lift the NDA on a new graphics card...
Indeed, that's one of the great benefits of virtualisation, but that would be running in a whole other type of data centre than the one we're discussing here.
post #33 of 35
Quote:
Originally Posted by Plan9 View Post

Why would you?
Not disputing what you're saying per se, but I'd love to see an explanation of why, because if it's true it would be very relevant to some of the work I'm currently managing.

I don't have all of the details, but I think it's related (in this specific case) to the requests-per-second-per-core metric, which in turn is related to CPU context switching and network interrupts. I think you get better utilisation by running multiple contained instances of Apache, each with a lower number of spawned processes per VM, as opposed to running one bare-metal instance which spawns a huge number of child processes across all of the cores.

However, this might not be true for lighttpd, which uses a different request-handling model, or for something like MySQL or PostgreSQL. Databases tend to run better on bare metal for write-intensive workloads, although they do OK on SANs, and things like memcached are a great help.
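As a concrete (hypothetical) example of "a lower number of spawned processes per VM": with a prefork-style Apache the usual heuristic is to cap workers by the memory available to that VM, roughly:
Code:
# Sizing heuristic for a prefork-style Apache inside one VM.
# All figures are assumptions for illustration.
vm_ram_mb = 2048        # RAM allocated to the VM
reserved_mb = 512       # guest OS + everything else
per_worker_mb = 30      # typical resident size of one httpd child
max_workers = (vm_ram_mb - reserved_mb) // per_worker_mb
print(f"cap workers at ~{max_workers} per VM")   # ~51 in this sketch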
Quote:
Originally Posted by Plan9 View Post

That doesn't happen though. If an httpd process dies it automatically gets re-spawned, regardless of whether that's part of a managed life cycle or a segfault.

Ahhh, didn't know that. Does that apply to other server processes such as mysqld?

Quote:
Originally Posted by Plan9 View Post

But you don't have more redundancy. httpd crashes don't happen and hardware faults would affect your resources exactly the same regardless of virtualisation.

You're absolutely bang-on about hardware faults taking down all of the VMs running on a node, which is why even with (or especially with) servers, you have redundant everything. As to httpd processes not crashing, see your quote above...

I suppose ultimately, virtualisation makes managing resources easier and more efficient. It's easier to determine how much power you can squeeze out of a single CPU core using a single VM instance, and then multiply that out by the number of CPU cores/threads.

Obviously there are caveats, such as making sure you have enough network capacity attached to the VMs, and not depending on virtual disk performance (which is usually crap; hence dedicated SANs are much better for VM environments).
post #34 of 35
Quote:
Originally Posted by RussianGrimmReaper View Post

I agree with the increased cost requirements, but it is silly to run multiple VMs hosting *the same content* when the VMs are on the same physical machine. It is more efficient to let the VMs use more resources and spread them across more physical machines. Why? Because no one uses 2-3 physical machines with 1 VM each in high-availability scenarios (web hosting). You should always operate so that your normal load stays under 60% of capacity. Why? So that when those failures happen, they do not impact anything.

The thing is that on a "sharded" site, each "shard" will not be hosting *the same content*, but a portion of the content. A "shard" might be a cluster of VMs, with each VM doing a specific job (httpd, mysqld slave, memcached client, forum software).

That cluster of VMs might be replicated across multiple hardware nodes. Other shards might be set up similarly, with a replicated master database at the back, and a load balancer at the front.
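For illustration, the routing to a shard is often just a stable hash of some key; a minimal sketch (shard names and the key format are made up):
Code:
import hashlib

SHARDS = ["shard-a", "shard-b", "shard-c"]

def shard_for(key: str) -> str:
    # Stable hash -> the same key always lands on the same shard.
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("thread:1241529"))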
Edited by parityboy - 4/20/12 at 6:05pm
post #35 of 35
Quote:
Originally Posted by parityboy View Post

I don't have all of the details, but I think it's related (in this specific case) to the requests-per-second-per-core metric, which in turn is related to CPU context switching and network interrupts. I think you get better utilisation by running multiple contained instances of Apache, each with a lower number of spawned processes per VM, as opposed to running one bare-metal instance which spawns a huge number of child processes across all of the cores.
However, this might not be true for lighttpd, which uses a different request-handling model, or for something like MySQL or PostgreSQL. Databases tend to run better on bare metal for write-intensive workloads, although they do OK on SANs, and things like memcached are a great help.
Ahh I see. Thanks for the info (repped).
You've given me something to research and potentially (hopefully) squeeze more power out of our data centre at work.
Quote:
Originally Posted by parityboy View Post

Ahhh, didn't know that. Does that apply to other server processes such as mysqld?
To be honest I really don't know (would be a nice feature if it did though). I've only seen this happen on Apache.
Quote:
Originally Posted by parityboy View Post

You're absolutely bang-on about hardware faults taking down all of the VMs running on a node, which is why even with (or especially with) servers, you have redundant everything. As to httpd processes not crashing, see your quote above...
I suppose ultimately, virtualisation makes managing resources easier and more efficient. It's easier to determine how much power you can squeeze out of a single CPU core using a single VM instance, and then multiply that out by the number of CPU cores/threads.
Obviously there are caveats, such as making sure you have enough network capacity attached to the VMs, and not depending on virtual disk performance (which is usually crap; hence dedicated SANs are much better for VM environments).
Indeed. Thanks for your input, mate.