
Better File System Design For File Server?

post #1 of 24
Thread Starter 
I have a server at home running Debian Squeeze, which acts as a file server in addition to other roles. Right now it's simply pushing out Samba shares that sit on a 3x1TB mdadm RAID 5 formatted as ext4.

My question is this: I'm looking to buy a new drive due to low free space, and I'm seeing 2TB drives for ~$120. These are consumer drives, so they're only "rated" for RAID 0 and 1; I know they can easily handle RAID 5, but 5 works the drives harder than 0 or 1 and might cause an early failure in consumer-grade drives (it has already happened to me twice with the current RAID 5). So is there a better solution than buying another 1TB drive and expanding the array, one that would use the 2TB drive while still using the 1TB drives currently in place? Is there a better software solution than mdadm and RAID 5? Should I use something besides ext4?
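
(For reference, my understanding is that the "just expand it" route would look roughly like the following; I'm assuming the array is /dev/md0 and the new disk's partition is /dev/sdd1, both just example names, and the reshape takes ages on drives this size.)
Code:
  mdadm --add /dev/md0 /dev/sdd1           # add the new disk (it joins as a spare at first)
  mdadm --grow /dev/md0 --raid-devices=4   # reshape the RAID 5 from 3 to 4 members
  cat /proc/mdstat                          # watch the reshape progress
  resize2fs /dev/md0                        # grow the ext4 filesystem once the reshape finishes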
    
post #2 of 24
Really, ext4 is good right now. I can't think of anything better: ext2 is a little faster but not as reliable, and ext3 is just a little slower. Btrfs is up and coming but still not fast, even with the newer 3.2/3.3 kernel branches.

I would say sticking with a higher-capacity single drive will still be simpler, use less power, and be less complex to maintain than a RAID 0/1/5/1+0/6. Less chance of failure, etc. If anything, I would just keep a backup that copies everything once a week or so.
post #3 of 24
Ever thought about ZFS or XFS? I'm not sure about ZFS's working state on Linux, but I thought XFS was the one people used for servers and file backup setups.
     
post #4 of 24
mdadm is a robust and pretty easy way to do RAID. That said, I'm not a fan of RAID, as dealing with failed disks etc. can get complicated.

I like to just have two drives and use a shell script to keep an up-to-date "clone" of the first drive on the second. It can easily be scaled to more drives too.
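
Something like this is all the script really needs (the mount points and log file are just examples):
Code:
  #!/bin/sh
  # mirror the primary data drive onto the backup drive;
  # --delete keeps the clone exact by removing files that were removed from the source
  rsync -aH --delete /mnt/primary/ /mnt/backup/ >> /var/log/mirror.log 2>&1
A crontab entry along the lines of 0 3 * * 0 /usr/local/sbin/mirror.sh then refreshes the clone every Sunday night.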

I have used IBM's JFS for years and never had any trouble with it. Not sure how it compares to the others, but most regard it as dependable with good performance.

If you Google for it, you can probably find benchmarks of the various filesystems online.
post #5 of 24
Thread Starter 
Quote:
Originally Posted by Rookie1337

Ever thought about ZFS or XFS? I'm not sure on ZFS's working state in Linux but I thought XFS was the one people used for servers or file back up setups.

From what I understand of ZFS (which could very well be completely wrong), it is 1) more suited for use with an SSD for a fast-caching type of feature, and 2) performs RAID functions itself, though not in the traditional sense of RAID. Am I correct here? Does it support different drive sizes?

As far as XFS goes, I've never so much as touched it. Can you provide a quick plain-English explanation?

Thanks to everyone so far.
    
post #6 of 24
My suggestion would be to buy two 2TB drives and create a software RAID 10 in degraded mode: sda missing sdb missing. RAID 10 is the way to do it for sure; it's the best combination of speed, space, and reliability, and we have a huge investment in RAID 10 at work across the board. Software RAID is my preference (http://linux.yyz.us/why-software-raid.html); the proprietary card thing can be a problem: what if one of the companies involved disappears? Hardware RAID will usually perform a little better if your machine isn't very powerful, but in general the raid10 module is fine for a home user.
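
To be clear, the "missing" bit is literal; the create would look something like this (the partition names are only examples):
Code:
  # 4-device RAID 10 with only two real members for now;
  # each "missing" reserves the slot for the mirror half that gets added later
  mdadm --create /dev/md1 --level=10 --raid-devices=4 \
        /dev/sda1 missing /dev/sdb1 missing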

Anyway, later on, when you accumulate some cash, buy two more drives and add them to the array; now you have mirroring, which is very important. If you lose a drive (or even the right pair), no sweat. The final layout is sda sdc sdb sdd.
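
When the second pair of drives shows up, completing the mirrors is just (again, example device names):
Code:
  mdadm /dev/md1 --add /dev/sdc1   # fills the first missing slot
  mdadm /dev/md1 --add /dev/sdd1   # fills the second
  cat /proc/mdstat                 # the mirrors resync in the background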

As far as a file system goes, eh, it doesn't really matter. If you have a ton of space, use LVM. It's very easy to make smaller-ish LVs and try, switch, or merge different filesystems. Really, I would just use ext4 and forget about it. ZFS is neat but it doesn't really exist on Linux. I don't know anything about XFS except that when I first heard of it many years ago I remember reading "ext3 is better, not much reason to bother with XFS"; someone please correct me if I'm wrong. Don't use reiserfs or any other file system that is not tried and true.
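
A minimal LVM-on-md sketch, with made-up volume group and LV names:
Code:
  pvcreate /dev/md1                         # turn the array into an LVM physical volume
  vgcreate vg_storage /dev/md1              # one volume group on top of it
  lvcreate -L 500G -n lv_share vg_storage   # carve out an LV, leave the rest free for later
  mkfs.ext4 /dev/vg_storage/lv_share
  # later: lvextend -L +200G /dev/vg_storage/lv_share && resize2fs /dev/vg_storage/lv_share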

And there's no point in buying a 2TB drive if you're throwing it into an array of all 1TB drives; it will fall to the lowest common denominator. You could buy one 1TB drive, pull one of the drives out of your RAID 5, set those two up in a new degraded RAID 10 as I said above, COPY your data from the degraded RAID 5 to a new LV on the new RAID 10, delete the RAID 5, and move those two drives over into the RAID 10. You'd have more space, the same redundancy (once finished), and better speed, since it's just striping rather than striping plus parity calculations. Really, that would be pretty easy to do and doesn't carry a whole lot of risk if you plan it right, do it carefully, and don't get a drive failure between deleting the RAID 5 and the moment the mirror resync finishes.
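
Roughly, assuming the existing RAID 5 is /dev/md0, the new RAID 10 is /dev/md1, and all the device names are examples:
Code:
  mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1   # pull one 1TB drive out of the RAID 5
  mdadm --zero-superblock /dev/sdc1
  # build the degraded RAID 10 from that drive plus the new one (see above),
  # copy everything across (e.g. rsync -aH /mnt/raid5/ /mnt/raid10/), then retire the RAID 5:
  mdadm --stop /dev/md0
  mdadm --zero-superblock /dev/sda1 /dev/sdb1
  mdadm /dev/md1 --add /dev/sda1
  mdadm /dev/md1 --add /dev/sdb1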

When you buy a drive, run the long SMART self-test several times to give it a good burn-in and weed out any infant mortality before you bet everything you have on it.
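
Something along these lines, with the drive letter as an example:
Code:
  smartctl -t long /dev/sdd       # kick off the extended self-test (hours on a 2TB drive)
  smartctl -l selftest /dev/sdd   # check the result once it's finished
  smartctl -H -A /dev/sdd         # overall health plus reallocated / pending sector counts
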
Quote:
That said, I'm not a fan of RAID, as dealing with failed disks etc. can get complicated.

That's a pretty poor cop-out. It takes one command to recover from a drive failure, and you stay online the whole time the drive is failed, versus scrambling in a bunker doing whatever you do instead. It's an industry standard for a good reason.
Edited by lloyd mcclendon - 4/11/12 at 11:05pm
post #7 of 24
Quote:
Originally Posted by TurboTurtle

I have a server at home running Debian Squeeze, which acts as a file server in addition to other roles. Right now it's simply pushing out Samba shares that sit on a 3x1TB mdadm RAID 5 formatted as ext4.
My question is this: I'm looking to buy a new drive due to low free space, and I'm seeing 2TB drives for ~$120. These are consumer drives, so they're only "rated" for RAID 0 and 1; I know they can easily handle RAID 5, but 5 works the drives harder than 0 or 1 and might cause an early failure in consumer-grade drives (it has already happened to me twice with the current RAID 5). So is there a better solution than buying another 1TB drive and expanding the array, one that would use the 2TB drive while still using the 1TB drives currently in place? Is there a better software solution than mdadm and RAID 5? Should I use something besides ext4?
ZFS would be perfect for this.
Quote:
Originally Posted by TurboTurtle

From what I understand of ZFS (which could very well be completely wrong), it is 1) more suited for use with an SSD for a fast-caching type of feature, and 2) performs RAID functions itself, though not in the traditional sense of RAID. Am I correct here? Does it support different drive sizes?
As far as XFS goes, I've never so much as touched it. Can you provide a quick plain-English explanation?
Thanks to everyone so far.
ZFS is designed for all types of drives, including consumer-grade hardware.

There are so many cool features in ZFS that I couldn't possibly go into them all here, but it's essentially the next generation of file systems:
  • you can have a multitude of drives and arrays in one storage pool, so mixing and matching HDDs is easy and expanding your storage pool is even easier (i.e. never run out of storage space again!)
  • it is its own software RAID (raidz1, raidz2 and raidz3), which actually supports some data-robustness measures that even hardware RAID controllers do not
  • it supports multiple compression types, deduplication, intelligent copying (which works very similarly to deduping) and so on
  • it has a number of advanced file and filesystem recovery tools (I once managed to break my ZFS storage pool when a dodgy RAID controller randomly dropped disks mid-write and then caused a kernel panic; I started the box up again, imported the pool from the last safe write, which was 5 minutes before the crash, and everything popped up perfectly again, so no more superblock et al. faults)
  • expanding further on the above point, it supports snapshots (which work a lot like snapshots in virtual machines, only more flexible)
  • its CLI tools are also stupidly simple to use (Sun were very good at creating powerful yet easy-to-use tools); there's a quick sketch below

I will admit I'm somewhat of a ZFS fanboy. But in all honesty, it's not without good reason. I've lost count of the number of times I've done something incredibly stupid and ZFS has saved me.
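
For a quick taste, something like the following covers a raidz pool, compression, snapshots, and the kind of crash recovery I mentioned; the pool and dataset names are just examples:
Code:
  zpool create tank raidz1 sdb sdc sdd     # three-disk pool with single-parity raidz
  zfs set compression=on tank              # transparent compression for everything in the pool
  zfs create tank/share                    # a dataset to export over Samba or NFS
  zfs snapshot tank/share@before-cleanup   # instant, nearly free snapshot
  zfs rollback tank/share@before-cleanup   # undo whatever stupid thing I just did
  zpool import -F tank                     # after a nasty crash, rewind to the last good state
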
Edited by Plan9 - 4/12/12 at 1:49am
post #8 of 24
Didn't Linux just get full ZFS support recently? I remember reading that it was recently added; I think we now have a native kernel module, no? It would be worth a try. I used to use something other than ext for servers back when I started with Debian, but I forgot what it was. =(
post #9 of 24
Quote:
Originally Posted by mushroomboy

Didn't Linux just get full ZFS support recently? I remember reading that it was recently added; I think we now have a native kernel module, no? It would be worth a try. I used to use something other than ext for servers back when I started with Debian, but I forgot what it was. =(

There are licensing issues with porting ZFS to the Linux kernel (ZFS is CDDL, Linux is, obviously, GPL), which is why all the Linux ZFS ports thus far have either run in FUSE or required the user to manually patch their own kernel.

If there's now a native Linux ZFS driver, I've not read about it (please link the article, as I'd love to know), but it would have to be a complete rewrite of the drivers (original code, thus not Sun/Oracle-licensed CDDL code).
post #10 of 24
Quote:
Originally Posted by Plan9

There are licensing issues with porting ZFS to the Linux kernel (ZFS is CDDL, Linux is, obviously, GPL), which is why all the Linux ZFS ports thus far have either run in FUSE or required the user to manually patch their own kernel.
If there's now a native Linux ZFS driver, I've not read about it (please link the article, as I'd love to know), but it would have to be a complete rewrite of the drivers (original code, thus not Sun/Oracle-licensed CDDL code).

Oh, I was talking about kernel patching. I know we have working kernel patch sets to use ZFS natively; I just haven't used them. I don't really have a need for ZFS.