
Work Log C1100 - Page 2

post #11 of 21
Quote:
Originally Posted by Sarec

Didn't think anyone would reply while I was editing (I modified my previous post). My mistake.

Any thoughts on how I should set up my storage? And what are your thoughts on using the WD1001FALS Black drive for the hypervisor?

I can't afford an SSD at this time, but that doesn't mean I can't optimize with the HDDs I have.

EDIT - Changed the drive list in my OP.

Well, I'll start by saying that I'm a Hyper-V kinda guy myself. Yes, I run Linux on Hyper-V (CentOS/RHEL mainly), and I prefer Hyper-V to every other hypervisor out there. With Server 2012, I use a 120GB SSD along with a 2TB 7200RPM drive in a Tiered Simple Mirror Space, which is dedicated to VM storage. I currently have 16 VMs running on one of my C1100s with this setup, and it provides more than enough IOPS. My point is: if you can find any way to set up your storage with SSD tiering or caching, I highly recommend it. If not, I have had good luck with two 7200RPM drives in RAID 1 for VM storage (I/O slowness showed up around 10 VMs or so). I personally prefer Toshiba/Hitachi drives, but Blacks are my second choice. Using them for your VM storage should be fine, but you will be severely limited by IOPS once you get a few VMs running.

If possible, I'd consider a standalone storage device and use iSCSI. Even if it's just a FreeNAS box, a couple of drives in some sort of RAID 10 will give you much better IOPS than a single 7200RPM drive used locally.
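
To put rough numbers on that, here's a back-of-the-envelope comparison. The ~75 random IOPS figure for a single 7200RPM disk is a generic rule of thumb, not a measurement of any particular drive, and real results vary a lot with workload and caching.

Code:
# Rough random-IOPS estimates for the layouts discussed above (rule of thumb only).
SINGLE_7200RPM_IOPS = 75  # assumed ballpark for one 7200RPM disk

def raid_iops(drives, level, single=SINGLE_7200RPM_IOPS):
    """Very rough read/write random-IOPS estimate for a RAID set."""
    if level == "single":                # one disk on its own
        return single, single
    if level == "raid1":                 # mirror: reads scale, every write hits all copies
        return drives * single, single
    if level == "raid10":                # striped mirrors: each write costs one mirror pair
        return drives * single, (drives // 2) * single
    raise ValueError(f"unknown level: {level}")

for drives, level in [(1, "single"), (2, "raid1"), (4, "raid10")]:
    reads, writes = raid_iops(drives, level)
    print(f"{level} x{drives}: ~{reads} read IOPS, ~{writes} write IOPS")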
post #12 of 21
Thread Starter 
Quote:
Originally Posted by tycoonbob

Well, I'll start by saying that I'm a Hyper-V kinda guy myself. Yes, I run Linux on Hyper-V (CentOS/RHEL mainly), and I prefer Hyper-V to every other hypervisor out there. With Server 2012, I use a 120GB SSD along with a 2TB 7200RPM drive in a Tiered Simple Mirror Space, which is dedicated to VM storage. I currently have 16 VMs running on one of my C1100s with this setup, and it provides more than enough IOPS. My point is: if you can find any way to set up your storage with SSD tiering or caching, I highly recommend it. If not, I have had good luck with two 7200RPM drives in RAID 1 for VM storage (I/O slowness showed up around 10 VMs or so). I personally prefer Toshiba/Hitachi drives, but Blacks are my second choice. Using them for your VM storage should be fine, but you will be severely limited by IOPS once you get a few VMs running.

If possible, I'd consider a standalone storage device and use iSCSI. Even if it's just a FreeNAS box, a couple of drives in some sort of RAID 10 will give you much better IOPS than a single 7200RPM drive used locally.

I plan on using CentOS for most of my servers even though I'm used to Debian-style distros. RAID 10 isn't an option at the moment unless I use both my 1.5TB and 1TB drives, and that just isn't a good use of them right now, so I will have to make do with RAID 1 for now. When I get some spare cash I plan to get an SSD. I don't know much about SSDs since I have never had one; I've read about them a bit, but I learn by doing more than by collecting random knowledge. Any suggestions for an SSD I should shop for over time? Getting more large HDDs will eventually be necessary, but I hope I can make do with this for now.

Thanks for the tips.

EDIT - Just found the info on tiered storage that you've mentioned a number of times before. SSD + HDD in that setup would be quite nice to have. I wonder if Linux has something similar? Off to research!
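
Early reading points at bcache and LVM's dm-cache as the closest Linux equivalents. A rough sketch of the LVM route (it needs a recent enough kernel and lvm2, and the device, volume-group and LV names below are placeholders, not my actual layout):

Code:
#!/usr/bin/env python3
"""Outline of fronting an HDD-backed LV with an SSD cache via LVM (dm-cache)."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Assumes vg_vms already exists on the HDDs and holds the data LV lv_vms.
run(["vgextend", "vg_vms", "/dev/sdX"])                    # add the SSD to the volume group
run(["lvcreate", "--type", "cache-pool", "-L", "100G",
     "-n", "lv_cache", "vg_vms", "/dev/sdX"])              # carve a cache pool on the SSD
run(["lvconvert", "--type", "cache",
     "--cachepool", "vg_vms/lv_cache", "vg_vms/lv_vms"])   # attach the cache to the data LV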
Edited by Sarec - 11/15/13 at 7:11am
post #13 of 21
Thread Starter 
Well, slight hitch in my plans. The hardware I was going to use for the ZFS storage was an HP ProLiant DL360, but it's a G4, and not just any drives will work; more to the point, it will only support two U320 drives. The drives it came with are 18GB and 36GB U320 drives, 15K and 10K RPM respectively. Unfortunately, all my other old systems are broken in ways that make them unreliable.

So with this development, I think I will be forced to use Xen (a derivative or otherwise) so I can manage the software RAID storage from the C1100 itself. Since it has four drives available, I think I will set them up like this:

This isn't the ideal situation, but it means I do not have to change my Windows machine at the moment. When I get my SSD I will do a new setup and move the VMs.

Configuration:
2 x 250GB drives: RAID 1 and RAID 0 (on different partitions)
2 x 1.5TB WD Green drives: RAID 1

The striped partitions are for the VMs (their homes) and the mirrored ones for /boot and /. After doing a lot of reading, I decided to set it up as software RAID with LVM on top. This will let me control the filesystems and make migrating to an SSD easier later on.
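
For reference, the general shape of that md + LVM layering; the partition names below are placeholders rather than my real device layout:

Code:
#!/usr/bin/env python3
"""Sketch of the software RAID (md) + LVM layout described above; device names are placeholders."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Mirror for /boot and / on the first partitions of the two 250GB drives
run(["mdadm", "--create", "/dev/md0", "--level=1",
     "--raid-devices=2", "/dev/sda1", "/dev/sdb1"])
# Stripe for the VM homes on the remaining partitions of the 250GB drives
run(["mdadm", "--create", "/dev/md1", "--level=0",
     "--raid-devices=2", "/dev/sda2", "/dev/sdb2"])
# Mirror across the two 1.5TB Greens for bulk storage
run(["mdadm", "--create", "/dev/md2", "--level=1",
     "--raid-devices=2", "/dev/sdc1", "/dev/sdd1"])

# LVM on top of the striped array so volumes can be resized and moved later
run(["pvcreate", "/dev/md1"])
run(["vgcreate", "vg_vms", "/dev/md1"])
run(["lvcreate", "-L", "200G", "-n", "lv_vms", "vg_vms"])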

Is there anything I should be aware of in Linux when migrating from HDD to SSD in the future? I haven't found much information on pitfalls; however, experience has taught me not to assume it's safe just because nobody mentions problems.
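
From what I've read so far, the move itself can be done live with pvmove once LVM is underneath, and the usual SSD-specific things to watch seem to be partition alignment and TRIM (the discard mount option or a periodic fstrim), though details vary by distro and kernel. Still, I'd rather hear from someone who's done it. A rough sketch with placeholder device names:

Code:
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Add the SSD to the existing volume group, drain the old PV onto it,
# then drop the old device -- all while the filesystems stay mounted.
run(["pvcreate", "/dev/sdY"])             # new SSD (placeholder name)
run(["vgextend", "vg_vms", "/dev/sdY"])
run(["pvmove", "/dev/md1", "/dev/sdY"])   # migrate extents off the old striped array
run(["vgreduce", "vg_vms", "/dev/md1"])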
Edited by Sarec - 11/16/13 at 11:08pm
post #14 of 21
Thread Starter 
Got two HDDs, one with XenServer and the other with VMware. My storage setup proved to be quite a complicated issue given my lack of equipment, so for right now I plan on just setting up my VMs and saving them for when I change the system's hardware.

Despite turning off the firewall in vSphere, I still cannot ping my VMs from my personal machine. They share the same network as me, and the VMs are able to get out to the internet. This was not a difficult task with VMware Player on my personal machine. I understand this is enterprise-grade software, but there should be some way to do this. What am I missing?

EDIT - I feel like an idiot. MineOS only accepts requests to its web UI over HTTPS. I was typing 192.168.11.199/admin over and over to no avail. Hooray for 4am network administration.

And we have LIFT OFF. Got a simple MineOS VM up and running and tested it with a friend logging in. Now to set up my firewall server and many others.
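
For anyone who hits the same wall, a quick scheme check would have saved me a few hours; the IP and /admin path are the ones from above, while the ports and the self-signed-certificate handling are assumptions (MineOS may well listen on a non-default port):

Code:
import ssl
import urllib.request

host = "192.168.11.199"                                  # address from above
for scheme, port in (("http", 80), ("https", 443)):      # default ports assumed
    url = f"{scheme}://{host}:{port}/admin"
    try:
        if scheme == "https":
            ctx = ssl.create_default_context()
            ctx.check_hostname = False                   # tolerate a self-signed cert
            ctx.verify_mode = ssl.CERT_NONE
            resp = urllib.request.urlopen(url, timeout=5, context=ctx)
        else:
            resp = urllib.request.urlopen(url, timeout=5)
        print(f"{url} -> HTTP {resp.status}")
    except Exception as exc:
        print(f"{url} -> {exc}")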
Edited by Sarec - 11/22/13 at 5:06am
post #15 of 21
I'm an infrastructure architect by trade and specialize in virtualisation technologies. I am primarily a VMware man and have built and managed a number of large VMware ESXi farms with vCloud sitting on top as well. I have also been project lead for a couple of Citrix XenApp rollouts and have built several Hyper-V clusters since its release.

Just recently I have been trying to go open source wherever possible, and as such I came across Proxmox VE, which is based primarily on the KVM hypervisor. So I built three Proxmox nodes on my development hardware, clustered them, connected iSCSI storage and configured HA. I have to say, this is an excellent product, and I am considering building a case to start replacing our VMware farms with it.
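
For anyone curious, the clustering step itself is only a couple of commands per node; the cluster name and IP below are placeholders, and the iSCSI storage and HA groups are configured separately (web UI or /etc/pve), so they're not shown:

Code:
# The commands run on different nodes; listed here just to document the sequence.
first_node_cmd = ["pvecm", "create", "homelab"]   # on the first node (cluster name is a placeholder)
join_cmd       = ["pvecm", "add", "192.0.2.10"]   # on each additional node (IP of an existing member, placeholder)
status_cmd     = ["pvecm", "status"]              # on any node afterwards, to check membership and quorum

for cmd in (first_node_cmd, join_cmd, status_cmd):
    print(" ".join(cmd))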

Massive cost savings compared to VMware in a business environment.

Long story short: the boxes I installed Proxmox on have Intel ICH10Rs in them. No problems here.
post #16 of 21
Thread Starter 
Quote:
Originally Posted by The_Rocker

Long story short: the boxes I installed Proxmox on have Intel ICH10Rs in them. No problems here.

Hmm, interesting. I know Linux can work software RAID like nobody's business. My reason for not messing with Proxmox was its outdated kernel.
post #17 of 21
Quote:
Originally Posted by Sarec

Hmm, interesting. I know Linux can work software RAID like nobody's business. My reason for not messing with Proxmox was its outdated kernel.

Well, Proxmox VE 3.1 just came out, but it's still based on the 2.6.32 kernel. At the end of the day, though, if it works, it works.
post #18 of 21
Quote:
Originally Posted by The_Rocker

Well, Proxmox VE 3.1 just came out, but it's still based on the 2.6.32 kernel. At the end of the day, though, if it works, it works.

Really? I could swear it came out over half a year ago. It works and I've had no security issues with it... touch wood.
post #19 of 21
Quote:
Originally Posted by The_Rocker

I'm an infrastructure architect by trade and specialize in virtualisation technologies. I am primarily a VMware man and have built and managed a number of large VMware ESXi farms with vCloud sitting on top as well. I have also been project lead for a couple of Citrix XenApp rollouts and have built several Hyper-V clusters since its release.

Just recently I have been trying to go open source wherever possible, and as such I came across Proxmox VE, which is based primarily on the KVM hypervisor. So I built three Proxmox nodes on my development hardware, clustered them, connected iSCSI storage and configured HA. I have to say, this is an excellent product, and I am considering building a case to start replacing our VMware farms with it.

Massive cost savings compared to VMware in a business environment.

Long story short: the boxes I installed Proxmox on have Intel ICH10Rs in them. No problems here.

You're recommending to your customers that they use Proxmox instead of VMware? Does Proxmox even offer enterprise support? I would never recommend running something like that as the base infrastructure for a large enterprise. VMware is my number two choice behind Hyper-V, and XenServer is my number three (this is all assuming 75%+ Windows with the rest Linux [RHEL/CentOS or Debian] and maybe a dash of BSD).
post #20 of 21
One thing I know VMware has as an advantage is integration. Being the 900lb virtualisation gorilla, it has many other vendors (such as Dell) integrating their tools into it. As an example, in this article the management of the EqualLogic iSCSI pod is plugged straight into the VMware console.

I don't think Proxmox or even XenServer has this. Obviously though, it depends on the needs of the environment it's going into.