
HyperV Layout

post #1 of 6
Thread Starter 
Since I'm acquiring an Adaptec 5805 to replace my PERC, I'm planning to redesign my current layout to make it more efficient.

As it stands, I have 2x 500GB in RAID 1 for my OS and my VMs, then two arrays of three 2TB hard disks in RAID 5, one array mounted directly to my file server and the other to my streaming server.

My plan is to combine the 2TB disks into a single RAID 6 when the new card arrives, and to move the 2x 500GB from the onboard RAID they currently sit on to two ports of the RAID card.

Instead of mounting physical disks to my VMs, I plan to create one large RAID 6 array and use VHDX files to provide large storage drives for each of my main server VMs.
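
As a rough sanity check on usable capacity, here is a quick sketch comparing the current pair of RAID 5 arrays with the planned single RAID 6 (it assumes the RAID 6 is built from the six 2TB data disks and ignores formatting overhead):

```python
# Back-of-the-envelope usable capacity (assumption: RAID 6 uses the six 2TB
# data disks; the 2x 500GB mirror is counted separately).
def raid5_usable(disks, size_tb):
    return (disks - 1) * size_tb   # one disk's worth of parity

def raid6_usable(disks, size_tb):
    return (disks - 2) * size_tb   # two disks' worth of parity

current = 2 * raid5_usable(3, 2)   # two 3-disk RAID 5 arrays of 2TB drives
planned = raid6_usable(6, 2)       # one 6-disk RAID 6 of 2TB drives

print(f"Current (2x RAID 5): {current} TB usable")   # 8 TB
print(f"Planned (1x RAID 6): {planned} TB usable")   # 8 TB, but survives any two drive failures
```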

Here are my questions:

Is it safe to move all my files from physical disks into virtual ones?

Would it be better to create my VM OS boot drives as VHDs on the 2x 500GB mirror or on the RAID 6? Does it really matter for Windows?

I have a 2Gb LACP link from the server to the switch, and now 2Gb LACP links to my main system and main media centre. The kids just use single gigabit and Wi-Fi for Plex and Netflix.

Just thought I would add: all my important files, like photos and videos of the kids, are backed up to 50GB Blu-ray every once in a while, and I also plan on a small Synology NAS stored elsewhere in the house, away from my server, as an iSCSI target for weekly incremental backups.
Edited by andymiller - 8/30/13 at 11:51pm
post #2 of 6
RAID 6 for VM storage is about as bad a choice as you can make. RAID 6 has a huge write penalty and is mainly used for data that is "write once, read many". I recommend figuring out how to get a RAID 10 going for your VM storage; disk performance will be so much better.
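
To put a rough number on that write penalty, here is a sketch using the standard back-end IOPS rule of thumb (the ~75 random IOPS per 7200rpm disk is an assumption, not a measured figure for these drives):

```python
# Ballpark random-write IOPS for a 6-disk array of 7200rpm drives.
# Effective write IOPS ~= (disks * per-disk IOPS) / write penalty.
DISKS, IOPS_PER_DISK = 6, 75
WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

for level, penalty in WRITE_PENALTY.items():
    effective = DISKS * IOPS_PER_DISK / penalty
    print(f"{level}: ~{effective:.0f} random write IOPS")
# RAID 10: ~225, RAID 5: ~112, RAID 6: ~75 -- roughly a 3x gap between RAID 10 and RAID 6
```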
post #3 of 6
Thread Starter 
I think I'll keep the VMs themselves on the RAID 1 and use the RAID 6 for the data.

As for the RAID 6: I tried creating two VHDXs earlier on the same RAID 5 array, and copying between the two VHDs was terribly slow. I don't know whether that was because it was a three-disk RAID 5 or whether that is just the penalty for writing between VHDs on the same array.

The question is: what is the best way to arrange my RAID given that I need to keep two 3TB physical/virtual drives, one for each server?
post #4 of 6
I would recommend a RAID 1 for the boot drives for the VMs and then a RAID 6 with the 2TB drives for storage. I normally do 20GB boot drives for Windows 2008 R2 setups. Then add separate large virtual drives for bulk data and put those on the RAID 6.
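
A quick sketch of how that layout fits on the 500GB mirror (the host OS allowance and VM count below are assumptions for illustration, not figures from the thread):

```python
# Rough check of boot-VHD headroom on the 2x 500GB RAID 1.
mirror_gb   = 500      # usable space on the mirrored pair
host_os_gb  = 60       # rough allowance for the Hyper-V host install (assumption)
boot_vhd_gb = 20       # per-VM boot VHD, as suggested above
vm_count    = 3        # e.g. file server, streaming server, plus one spare (assumption)

remaining = mirror_gb - host_os_gb - vm_count * boot_vhd_gb
print(f"~{remaining} GB left on the mirror after {vm_count} boot VHDs")  # ~380 GB
```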
post #5 of 6
Thread Starter 
Would it be better to create large logical drives on the RAID config or use VHDXs for the data drives? Which would be faster? And would I be able to expand the RAID 6 once I've created more than one logical drive on the array? I was never able to do that with my PERC 5.

I would like to swap out to six 3TB drives after Christmas. I would drop the two parity drives to make room for two new drives, create those as a RAID 1, and then expand back to RAID 6 once I've transferred the data off the old 2TB drives. That's the way I swapped from 1TB drives to 2TB last time.
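
As a rough check on that swap, here is a sketch of the usable capacity at each stage (drive counts and sizes taken from the posts above; whether the live data actually fits the 3TB interim mirror is the thing worth confirming before starting):

```python
# Capacity at each stage of the proposed 2TB -> 3TB swap.
def raid6(n, tb): return (n - 2) * tb   # two drives of parity
def raid1(tb):    return tb             # mirror: one drive's worth

stages = {
    "now: 6x2TB RAID 6":           raid6(6, 2),  # 8 TB
    "degraded, 2 drives pulled":   raid6(6, 2),  # still 8 TB presented, but no redundancy
    "interim: 2x3TB RAID 1":       raid1(3),     # 3 TB to land data on
    "after: 6x3TB RAID 6":         raid6(6, 3),  # 12 TB
}
for name, tb in stages.items():
    print(f"{name}: {tb} TB usable")
```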
post #6 of 6
In theory there shouldn't be much difference in performance between physically mounting disks and just using VHDs on physical disks.

For performance and redundancy you really want to run a RAID 10. Think of it as a striped set of mirrors, because that's what it is.

At work on our HP P2000 MSA I run 3x RAID 5 arrays per shelf; each array has three drives plus one hot spare. It's fast, and should a drive in any of the arrays fail and the array lose its redundancy, the hot spare takes over.

The only thing that puts people off RAID 10 is the sacrifice in capacity.
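
To make that capacity trade-off concrete for the six 2TB drives discussed in this thread, a simple sketch (ignoring formatting overhead):

```python
# Usable capacity of six 2TB drives under each layout mentioned in the thread.
drives, size_tb = 6, 2
layouts = {
    "RAID 10 (striped mirrors)": drives // 2 * size_tb,   # 6 TB
    "RAID 6 (double parity)":    (drives - 2) * size_tb,  # 8 TB
    "RAID 5 (single parity)":    (drives - 1) * size_tb,  # 10 TB
}
for name, tb in layouts.items():
    print(f"{name}: {tb} TB usable")
```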