Overclock.net › Forums › Specialty Builds › Servers › SANs & VMs: A Question

SANs & VMs: A Question - Page 3

post #21 of 32
Thread Starter 
Quote:
Originally Posted by woonasty
point #1 is head on imho

point #2 - i think that's up for debate in terms of pros and cons, and what you're trying to do, kinda like SAN vs. DAS
Erm you've lost me. Can you clarify?

Edit:

Just realised you were referring to my OP, not my last post. Thanks.
Edited by parityboy - 3/5/11 at 7:13am
post #22 of 32
Quote:
Originally Posted by parityboy
Another question. I understand how to achieve redundancy via replication between the SAN nodes (using something like DRBD for block-level mirroring) but what happens when one node dies? How does the "front" server fail over to the other node(s)?

I know Solaris has IPMP where the SAN nodes are pooled under a group IP address, but how would a Linux system achieve this?
If I remember right, you need to implement vCenter to configure High-Availability and DR options in vSphere...

Remember that the VM files are still sitting on a shared SAN, so in terms of migrating between nodes, it's not a long or complex process... but vCenter is needed to manage it.
post #23 of 32
@Comguards: That's exactly what I'm working on right now. You need to run vCenter on a separate server (it could be virtualised on one of your cluster servers, afaik).
post #24 of 32
Thread Starter 
Many thanks for the replies, it's much appreciated. While I do realise that it's natural to talk in terms of VMware when discussing virtualisation options (I myself have used VMware since the days of VMware Workstation 1.0), being a start-up we'll not necessarily have a big budget to throw at things like VMware, so obviously other solutions have to be looked at.

Obviously, VMware is a contender but others are being considered, such as Virtual Iron and KVM. This is why the questions I ask are a little more generalised.

What I would like to achieve is to be able to address the SAN cluster through a single IP, so that even if a SAN node dies, the front server (the filer) will not notice. I know this can be achieved on Solaris with IPMP, and obviously high-end stuff like VMware will have this feature (or something like it).

Can a managed switch be configured to present a single IP for a given set of ports, or is this something that has to be configured in an operating system on a server host?
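For what it's worth, the closest Linux equivalent I've seen to IPMP's group address is a floating ("virtual") IP moved between nodes by VRRP, e.g. with keepalived: both SAN nodes run a VRRP instance, and whichever one survives holds the shared address, so the filer never has to change targets. A minimal sketch — the interface name, router ID and addresses below are placeholders, not anything from this thread:

```
# /etc/keepalived/keepalived.conf -- minimal VRRP sketch (placeholder names/addresses)
vrrp_instance SAN_VIP {
    state MASTER            # set to BACKUP on the second SAN node
    interface eth0          # NIC carrying the SAN traffic
    virtual_router_id 51    # must match on both nodes
    priority 100            # use a lower value (e.g. 90) on the backup node
    advert_int 1            # heartbeat advert interval, in seconds
    virtual_ipaddress {
        192.168.1.100/24    # the single IP the filer addresses
    }
}
```

The filer only ever talks to 192.168.1.100; if the master stops sending VRRP adverts, the backup claims the address within a few seconds.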
post #25 of 32
Quote:
Originally Posted by parityboy
Many thanks for the replies, it's much appreciated. While I do realise that it's natural to talk in terms of VMware when discussing virtualisation options (I myself have used VMware since the days of VMware Workstation 1.0), being a start-up we'll not necessarily have a big budget to throw at things like VMware, so obviously other solutions have to be looked at.

Obviously, VMware is a contender but others are being considered, such as Virtual Iron and KVM. This is why the questions I ask are a little more generalised.

What I would like to achieve is to be able to address the SAN cluster through a single IP, so that even if a SAN node dies, the front server (the filer) will not notice. I know this can be achieved on Solaris with IPMP, and obviously high-end stuff like VMware will have this feature (or something like it).

Can a managed switch be configured to present a single IP for a given set of ports, or is this something that has to be configured in an operating system on a server host?
Well yes, so being a start-up, you need to plan out your infrastructure carefully... Honestly, I'm wondering if you should even be considering a SAN in your initial infrastructure. A single Dell PowerEdge R510 loaded with drives should provide more than enough capabilities for initial requirements, no? The R510 takes up to 12 drives...
post #26 of 32
Thread Starter 
Indeed I agree with you on both points - I'm just trying to avoid having to do migrations as we grow. If I can get the base infrastructure in now (or as early as humanly possible), it will avoid disruption later.

Obviously it may well come out of my research that I don't need a SAN-type infrastructure at this early stage, but I would like to know what's out there and what it's capable of (generally) to help me decide. I don't actually have the hardware lying around to be able to set up a test lab, hence the research and questions.
post #27 of 32
Quote:
Originally Posted by parityboy
Indeed I agree with you on both points - I'm just trying to avoid having to do migrations as we grow. If I can get the base infrastructure in now (or as early as humanly possible), it will avoid disruption later.

Obviously it may well come out of my research that I don't need a SAN-type infrastructure at this early stage, but I would like to know what's out there and what it's capable of (generally) to help me decide. I don't actually have the hardware lying around to be able to set up a test lab, hence the research and questions.
"... avoid having to do migrations..." That's most of the fun!!

You can't avoid it. You have to plan for it. Of course, if you plan for anticipated company growth over the next five years, you're off to a good start. Five years is roughly when you should be doing a hardware refresh anyway.
post #28 of 32
Thread Starter 
hehe, tell you what, I'll put "fun" in quotes and we'll call it quits. Well, SAN or no SAN, I'll definitely be ensuring we have a solidly virtualised infrastructure, even if (or especially if) we only start out with the one box...
Edited by parityboy - 3/6/11 at 8:09pm
post #29 of 32
Quote:
Originally Posted by parityboy
Many thanks for the replies, it's much appreciated. While I do realise that it's natural to talk in terms of VMware when discussing virtualisation options (I myself have used VMware since the days of VMware Workstation 1.0), being a start-up we'll not necessarily have a big budget to throw at things like VMware, so obviously other solutions have to be looked at.

Obviously, VMware is a contender but others are being considered, such as Virtual Iron and KVM. This is why the questions I ask are a little more generalised.

What I would like to achieve is to be able to address the SAN cluster through a single IP, so that even if a SAN node dies, the front server (the filer) will not notice. I know this can be achieved on Solaris with IPMP, and obviously high-end stuff like VMware will have this feature (or something like it).

Can a managed switch be configured to present a single IP for a given set of ports, or is this something that has to be configured in an operating system on a server host?
Point taken

As for the IP addresses: Not really.

For network load balancing, I'd set up link aggregation so you get more bandwidth; most managed switches support this, at least in the form of LACP.
As for machine load balancing, you'd probably need another machine, unless the switch has that feature (look for "round-robin load balancing", for instance).
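As a concrete example of the link-aggregation suggestion: on Linux you can bond two NICs into one 802.3ad (LACP) link, provided the two switch ports are configured as an LACP group as well. A sketch using iproute2 — the interface names and address are placeholders:

```shell
# Sketch: LACP (802.3ad) bond on Linux via iproute2 -- names/address are placeholders.
# The switch ports the NICs plug into must be configured as an LACP group too.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down; ip link set eth0 master bond0; ip link set eth0 up
ip link set eth1 down; ip link set eth1 master bond0; ip link set eth1 up
ip addr add 192.168.1.10/24 dev bond0
ip link set bond0 up
```

Note LACP gives you aggregate bandwidth across flows and link-level redundancy, but a single TCP stream still rides one physical link.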
post #30 of 32
Thread Starter 
Quote:
Originally Posted by citruspers
Point taken

As for the IP addresses: Not really.

For network load balancing, I'd set up link aggregation so you get more bandwidth; most managed switches support this, at least in the form of LACP.
As for machine load balancing, you'd probably need another machine, unless the switch has that feature (look for "round-robin load balancing", for instance).
Cheers for that. So... how do replicated SAN clusters work? If a node in the cluster dies, how does the filer deal with it? How does the filer fail over to a working node? Does the filer need some kind of heartbeat package running on both itself and the SAN nodes?
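For reference, in the DRBD-style setup mentioned up-thread, each mirrored volume is defined by a resource file that is identical on both nodes, and a cluster manager (Heartbeat or Pacemaker, typically) runs alongside it to detect a dead peer and promote the survivor to primary — the filer itself doesn't need to know which node is which if it reaches the cluster through a floating IP. A rough sketch of such a resource file; the hostnames, devices and addresses are placeholders:

```
# Sketch: /etc/drbd.d/san0.res -- one DRBD resource mirroring a block device
# between two SAN nodes (placeholder hostnames, devices and addresses)
resource san0 {
    protocol C;                  # fully synchronous replication
    on node-a {
        device    /dev/drbd0;    # the replicated device the filer exports
        disk      /dev/sdb1;     # local backing disk
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on node-b {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
```

On failover the cluster manager runs `drbdadm primary san0` on the surviving node and re-exports the device, so from the filer's side the target just briefly goes away and comes back.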