
mdadm vs. FlexRAID - Page 3

post #21 of 28
Quote:
Originally Posted by cubanresourceful;13379517 
You can also, if I remember correctly, replace one hard drive with a larger one in the vdev to increase storage. Though, due to RAID, your storage will be based on the smallest and slowest drive (maybe RAID 5/6 is different?).

So, to increase a RAID 5/6 vdev, just upgrade your parity drive first, wait for it to rebuild, then upgrade each drive, again waiting for the array to rebuild.

I am pretty sure that works, but again, I could be wrong. I can test in a VM if you would like; Solaris is very interesting to play with, and ZFS is a pretty cool FS.

Yes, this is possible. But (I believe) you need to replace all the drives at once and migrate from one set of drives to the other (so you need all the new drives up front, plus enough power & SATA ports to connect them all).

It's also worth pointing out that there is no dedicated parity drive. Parity is distributed across all the drives in RAID5/6/Z/Z2, not stored on a specific one.
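
For anyone who wants to try the all-at-once migration, it goes roughly like this (untested from memory - the pool names and Solaris device names below are just placeholders):

Code:
# "tank" is the old pool, "tank2" the new one built on the new drives
zpool create tank2 raidz2 c0t4d0 c0t5d0 c0t6d0 c0t7d0
zfs snapshot -r tank@migrate                   # recursive snapshot of the old pool
zfs send -R tank@migrate | zfs recv -F tank2   # replicate datasets, properties & snapshots
zpool destroy tank                             # retire the old pool once you've verified the copy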
post #22 of 28
Quote:
Originally Posted by the_beast;13379760 
Yes, this is possible. But (I believe) you need to replace all the drives at once and migrate from one set of drives to the other (so you need all the new drives up front, plus enough power & SATA ports to connect them all).

It's also worth pointing out that there is no dedicated parity drive. Parity is distributed across all the drives in RAID5/6/Z/Z2, not stored on a specific one.

Ah, I was incorrect about that, I apologize. Replacing all drives at once of course works: you would create the new vdev, copy the data over from the old vdev, then destroy the old vdev. But the older set of drives then sits idle unless you create a new vdev from it. I was merely suggesting that replacing one drive at a time with a larger one would also work to grow the current vdev - sorry for any confusion.

Also, for anybody who wants to know, FreeNAS 8 was released earlier this week!
post #23 of 28
Quote:
Originally Posted by cubanresourceful;13379836 
Ah, I was incorrect about that, I apologize. Replacing all drives at once of course works, you would create the new vdev, copy the data over from the old vdev, then destroy the old vdev. But the older set of drives are useless, unless you create a new vdev. I was merely mentioning that replacing one drive at a time with a larger one would work in increasing the current vdev, sorry for any confusion. smile.gif

Also, for anybody who wants to know, FreeNAS 8 was released earlier this week! biggrin.gif

I don't think replacing a single drive will increase the size of the vdev though - you end up with the same capacity as before, just on larger drives, until you have replaced the last disk. (And although you don't strictly need to replace all the drives at once, you need a full rebuild each time you replace any drive, so you might as well replace them all at once if possible.)
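
If you do want to go disk-by-disk, the process looks roughly like this as far as I know (pool name and device names are placeholders - check your own with zpool status):

Code:
zpool set autoexpand=on tank         # let the pool grow once every member is larger
zpool replace tank c0t1d0 c0t5d0     # swap one old disk for a new, bigger one
zpool status tank                    # wait for the resilver to finish before the next swap
# repeat for each disk; the extra capacity only appears after the last resilver
# (on some versions you may need: zpool online -e tank <disk>)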
post #24 of 28
Quote:
Originally Posted by the_beast;13380067 
I don't think replacing a single drive will increase the size of the vdev though - you end up with the same capacity as before, just on larger drives, until you have replaced the last disk. (And although you don't strictly need to replace all the drives at once, you need a full rebuild each time you replace any drive, so you might as well replace them all at once if possible.)

You know, that's right, I didn't think about that. I've been doing a lot of research into ZFS, unRAID, traditional RAID 5, etc., and plenty of NAS software like Openfiler, FreeNAS, NexentaStor and Amahi, so I guess I mixed up ZFS and unRAID a little bit there. I apologize.

On another note, is anybody as excited as I am about FreeNAS 8 and their new projected timeline for 8.1?
post #25 of 28
Quote:
On another note, is anybody as excited as I am about FreeNAS 8 and their new projected timeline for 8.1?

Actually, I was just suggesting in another thread to "download version 7 stable of freenas" like three days ago, so I hadn't actually noticed this. I'm also now quite excited about this "official" release providing support for ZFS.

I think this deserves its own thread. Unless the Linux forum already has it covered...
 
post #26 of 28
My vote would have to go for an mdraid setup. I've had one for a few years now and here's how it went:

I was a complete Linux noob at the time - I'd only installed Ubuntu on a laptop once and just used Firefox, so... major noob! I did about two weeks of research and decided to try mdraid for my media server.

Started with 3x 1.5TB drives in a RAID5 config, in a Pentium 4 3.0GHz box with 512MB RAM and Ubuntu installed on an 8GB USB flash drive. It was a small-form-factor box, so I later moved it all to an AMD X2 7750 tower with 1GB RAM. All I did was move the USB flash drive with Ubuntu on it and the 4-port SATA card with the 1.5TB drives, then power on the machine. BOOM - everything was up and running without a problem!

Ran out of space VERY quickly, so I purchased another 1.5TB drive and expanded the array on the fly - no downtime, no trouble.
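
For anyone wondering, growing the array went something like this (from memory - /dev/md0 and /dev/sdd are just example names, yours will differ):

Code:
mdadm --add /dev/md0 /dev/sdd            # add the new disk as a spare
mdadm --grow /dev/md0 --raid-devices=4   # reshape the RAID5 from 3 to 4 members
cat /proc/mdstat                         # watch the reshape progress
resize2fs /dev/md0                       # then grow the ext filesystem online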

Ran out of space again, so I added another 4-port SATA card and two more 1.5TB drives at the same time and expanded the array - again, no downtime, no trouble.

Just for fun I tried moving the USB flash drive and the two SATA cards with my drives to a Pentium D 2.8GHz machine. Plugged it in, powered it up, and this time I had to run a single command to assemble the array - but it worked within a few seconds!
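
That single command, if I remember right, was just the scan-and-assemble one (the mount point below is made up):

Code:
mdadm --assemble --scan        # probe all disks for md superblocks and assemble the array
cat /proc/mdstat               # confirm it came up
mount /dev/md0 /mnt/storage    # then mount as usual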

Recently that started to fill up again, and I read somewhere that an array could be converted from RAID5 to RAID6 on the fly, so I bought two more 1.5TB drives, expanded the size of the array AND went from RAID5 to RAID6 with ease. Because the array was so large the reshape took a little less than 2.5 days, but at the end of it I had an 8-drive, 9TB RAID6 array with no downtime!
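
The level change was a single grow command as far as I recall (device names are examples; the backup file holds the critical section during the reshape, so keep it off the array itself):

Code:
mdadm --add /dev/md0 /dev/sdi /dev/sdj   # add the two new disks first
mdadm --grow /dev/md0 --level=6 --raid-devices=8 --backup-file=/root/md0-reshape.bak
cat /proc/mdstat                         # reshape runs in the background, array stays online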

Now my array has email notifications for failures (no real failures so far, just me pulling random drives for fun) and AFP shares for my MacBook's and Mac Mini's Time Machine backups (easy to set up - Macs see it as a real AFP share!).
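
The email notifications are just mdadm's monitor mode (the address is obviously a placeholder, and on Ubuntu the distro's init script normally starts the monitor for you):

Code:
# in /etc/mdadm/mdadm.conf:
MAILADDR you@example.com
# then run the monitor as a daemon:
mdadm --monitor --scan --daemonise   # mails you on Fail/DegradedArray events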

Within the next few weeks I will be building an mdraid array starting with 5x 3TB drives, eventually expanding it to 20 drives in a Norco 4220 case. The reasons I'm gonna stick with mdraid:

It's EASILY set up and managed by a Linux noob (although I've learned a lot more while working with the server)
It's EASILY expandable in array size
It's EASILY expandable in RAID configurations
It's EASILY moved between different hardware configurations (amazing for semi-high-availability)
No wasting of drives (like in ZFS pools)
No downtime during expansions (degraded performance, but never down)
The metadata is stored on the drives themselves, so if my OS drive fails I can rebuild the array relatively easily (yes, I tried this when I upgraded from Ubuntu 9 to 10 - see the sketch below). I also keep a copy of all the configuration files in my Dropbox just in case (mdadm config, Samba config, email notification config, etc.), so I can get all features up and running again quickly
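
Rebuilding on a fresh OS install is basically just reading the superblocks back off the disks (paths are the Debian/Ubuntu ones):

Code:
mdadm --examine --scan                            # prints ARRAY lines from the on-disk superblocks
mdadm --examine --scan >> /etc/mdadm/mdadm.conf   # regenerate the config file
mdadm --assemble --scan                           # bring the array back up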

Aside from being able to use different-sized drives (the one thing other setups offer that mdraid doesn't), I can't justify using any other RAID setup considering all of the pluses mdraid has.

I have even documented every step required to set up an mdraid array from start to finish, including step-by-step instructions on how to manage it and recover from failures. I gave it to a buddy of mine who had never used Linux and he was able to build his own array just by following those steps. If anyone wants a copy, let me know!
post #27 of 28
Thread Starter 
Quote:
Originally Posted by buddyboy1234;14041237 
My vote would have to go for an mdraid setup. I've had one for a few years now and here's how it went: [...]

I've been running a ZFS RAIDZ2 for a couple of months now using OpenIndiana b148 and Gea's Napp-it web interface. It's about as easy as they come, but again I don't have the flexibility of expanding arrays like I would with mdraid.

I gave up on FlexRAID and narrowed it down to mdraid or ZFS. I'm on ZFS now, but may end up switching to mdraid in the future. The thing I really like about ZFS is the checksumming and error correction. I have a funky SATA expander in my case that throws errors, and I'm glad I have ZFS to catch and correct them as they happen.
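
For anyone curious, seeing those errors get caught is as simple as scrubbing the pool (the pool name here is just an example):

Code:
zpool scrub tank       # read back every block and verify its checksum
zpool status -v tank   # shows per-device read/write/checksum error counters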

I really appreciate all your input, and will heed your advice for the future. Thanks!
post #28 of 28
buddyboy1234

I am in the process of setting up my first RAID/pooled storage in Ubuntu.
Would you be able to send me the details of your tutorial?
Cheers
Quote:
Originally Posted by buddyboy1234;14041237 
My vote would have to go for an mdraid setup. I've had one for a few years now and here's how it went: [...]