SANs & VMs: A Question

post #1 of 32
Thread Starter 
This is just to clarify things in my own head. I've been doing some reading on SANs, and trying to learn about zoning etc.

1) When people say they have a SAN serving virtual machines, do they simply mean that a given VM's OS "disk" is in fact an iSCSI target, and that every read and write operation performed by the guest OS travels over the iSCSI network to the LUN? Or do they mean something else?

2) RAID over a SAN. Is this worth it, or is the latency and/or overhead too much to make it viable? Is something like ZFS worth deploying on the "front server" with the iSCSI targets (JBOD) behind the server added to its zpool?
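
To make question 2 concrete, this is roughly the front-server setup I have in mind (the portal address and device names below are just placeholders):

    # log the front server into the iSCSI targets exported by the JBOD box
    iscsiadm -m discovery -t sendtargets -p 192.168.10.20
    iscsiadm -m node -p 192.168.10.20 --login

    # the LUNs then show up as ordinary local block devices (e.g. /dev/sdb, /dev/sdc),
    # so ZFS can pool and mirror them like any other disks
    zpool create tank mirror /dev/sdb /dev/sdc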

I'll probably have more questions, but this will do for now. Many thanks.
post #2 of 32
1) Pretty much, though it might be more efficient to keep one disk in the server as the OS disk and use the SAN for the large files the server needs (databases, etc.).
    
post #3 of 32
Quote:
Originally Posted by parityboy
This is just to clarify things in my own head. I've been doing some reading on SANs, and trying to learn about zoning etc.

1) When people say they have a SAN serving virtual machines, do they simply mean that a given VM's OS "disk" is in fact an iSCSI target, and that every read and write operation performed by the guest OS travels over the iSCSI network to the LUN? Or do they mean something else?

2) RAID over a SAN. Is this worth it, or is the latency and/or overhead too much to make it viable? Is something like ZFS worth deploying on the "front server" with the iSCSI targets (JBOD) behind the server added to its zpool?

I'll probably have more questions, but this will do for now. Many thanks.
It means that the datastore holding the virtual machines lives on a SAN, accessed over iSCSI, Fibre Channel, NFS, etc.

In such a scenario, you would definitely need a dedicated network for the SAN, and a dedicated network (or two or more) for the virtual machines.

Don't layer RAID over a SAN. Your SAN should already have hardware RAID: something like a Dell PowerVault or EqualLogic unit, or even a dedicated server running Windows Storage Server 2008 configured as an iSCSI target.
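
For the dedicated-server route, the idea is simply to carve the hardware-RAID volume up as an iSCSI target. A rough Linux equivalent of the Windows Storage Server setup, using tgt (the IQN, device path and subnet are made up):

    # /etc/tgt/targets.conf -- export the RAID controller's logical volume
    # as a single LUN, restricted to the dedicated storage subnet
    <target iqn.2011-02.lab.local:vmstore>
        backing-store /dev/sdb
        initiator-address 192.168.10.0/24
    </target>

The hypervisor hosts then log into that target over the storage network and format the LUN as a datastore.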

With regards to ZFS... maybe have a look at this link.

At least for VMware, anyway.
post #4 of 32
Even FreeNAS would do for a RAID SAN.
    
post #5 of 32
Quote:
Originally Posted by citruspers
Even FreeNAS would do for a RAID SAN.
Or OpenFiler.


I had that running a while ago...
post #6 of 32
Thread Starter 
Quote:
Originally Posted by ComGuards
It means that the datastore holding the virtual machines lives on a SAN, accessed over iSCSI, Fibre Channel, NFS, etc.

In such a scenario, you would definitely need a dedicated network for the SAN, and a dedicated network (or two or more) for the virtual machines.
Cheers for that. So, which is "better"? Having a VM's "disk" accessed as an iSCSI target (block device), or having it as a virtual disk (i.e. a large file in a directory) made available via NFS? Or is the performance difference negligible between the two?
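
Concretely, taking KVM/libvirt as an example, I mean the difference between these two disk definitions (the paths and IQN are made up):

    <!-- option A: the VM's "disk" is a raw iSCSI LUN attached to the host -->
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/disk/by-path/ip-192.168.10.20:3260-iscsi-iqn.2011-02.lab.local:vm01-lun-0'/>
      <target dev='vda' bus='virtio'/>
    </disk>

    <!-- option B: the VM's disk is an image file on an NFS-mounted directory -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/mnt/san/vms/vm01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>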

Edit:

I took a look at the link you provided. I'm going to have to read the article a couple more times to fully understand it.
Edited by parityboy - 2/26/11 at 4:15pm
post #7 of 32
Quote:
Originally Posted by parityboy
Cheers for that. So, which is "better"? Having a VM's "disk" accessed as an iSCSI target (block device), or having it as a virtual disk (i.e. a large file in a directory) made available via NFS? Or is the performance difference negligible between the two?

Edit:

I took a look at the link you provided. I'm going to have to read the article a couple more times to fully understand it.
Well, it depends on your target device, the VM's role, and so on.

I run some VMs on my Iomega iSCSI SAN, and I know it's a slow device: the 4x1TB drives are Seagate LP drives and it's not a potent RAID-5 array, but the VMs that are on it are responsive enough.

If you're running a SQL VM, you should be using a datastore that's local to your host, though in general the advice is to keep SQL databases on physical machines.

If you had a NetGear ReadyNAS Pro, for example, you really wouldn't notice much of a difference between the two since that's one device I know of that you can saturate a gigabit link with.

Okay, to answer your question: for most purposes I think the performance difference is negligible. If you're going into a large enterprise, I'd go with a VMware-approved host + iSCSI combination to ensure maximum support from all the vendors.
post #8 of 32
I'm with ComGuards here. I would assume the difference is mostly negligible if you have a properly set up network. The biggest impact would be on access times, but I doubt it makes a huge difference unless your network loops through five Sweex switches.
    
post #9 of 32
Thread Starter 
Guys, thanks for the responses. I was originally going to run the VMs off the main server's RAID 10 array; now I'm wondering whether to put them on the SAN instead, expose them as iSCSI LUNs, and have them replicated between the two SAN boxes.

Having said that, for cloning purposes it might be easier to keep them as disk image files and access them via NFS from the SAN box(es).
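
Part of what makes image files attractive for cloning is that, with something like qcow2 on an NFS share, a clone can be a thin copy-on-write file pointing back at a base image (the paths here are made up):

    # the base/template image lives on the NFS-mounted datastore;
    # the clone is a small copy-on-write file backed by it
    qemu-img create -f qcow2 -b /mnt/san/templates/debian-base.qcow2 /mnt/san/vms/web01.qcow2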


EDIT:

Does anyone know of an enterprise storage forum where these issues are discussed exclusively? I'm trying to learn as much as I can.
Edited by parityboy - 2/27/11 at 9:16am
post #10 of 32
Quote:
Originally Posted by parityboy
Guys, thanks for the responses. I was originally going to run the VMs off the main server's RAID 10 array; now I'm wondering whether to put them on the SAN instead, expose them as iSCSI LUNs, and have them replicated between the two SAN boxes.

Having said that, for cloning purposes it might be easier to keep them as disk image files and access them via NFS from the SAN box(es).
"Cloning" purposes? What for?