
Planning out a fileserver, need opinions & suggestions. - Page 2

post #11 of 18
1) They'll be fine. With 10 of them and a quad core doing parity calculations on writes, you'll have absolutely no problem saturating gigabit in both directions.

2) However you want, really. My fileserver just has a / partition on a 4 GB CF card. You could easily get away with the defaults.

3) You should get plenty of performance out of a Linux RAID array with mdadm, and you really don't need that much RAM. My 6x 750 GB Linux software RAID 5 array gets about 360 MB/sec on reads, and writes are well over the 125 MB/sec limit of gigabit. This is with a Socket 939 4400+ and 1 GB of DDR1.
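If it helps, creating a big array like the one you're planning is a one-liner with mdadm. A minimal sketch - the device names are placeholders for however your disks actually enumerate:

# 10-disk RAID 6 out of /dev/sdb through /dev/sdk (placeholder names)
mdadm --create /dev/md0 --level=6 --raid-devices=10 /dev/sd[b-k]
# record the array so it assembles at boot (config path varies by distro;
# /etc/mdadm/mdadm.conf on Debian, /etc/mdadm.conf on others)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# watch the initial resync
cat /proc/mdstat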

4) Again, pretty much up to you. You can mount /dev/md0 to really any folder; mine is mounted at /fs. The services you use may dictate permissions. IIRC, the Samba daemon runs as root; other applications may depend on the user account or application account.
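For example, to put it at /fs - a sketch assuming ext3, but any file system works:

mkfs.ext3 /dev/md0
mkdir /fs
mount /dev/md0 /fs
# and a line in /etc/fstab so it survives reboots:
/dev/md0   /fs   ext3   defaults   0   2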

5) If you look through the /etc/samba/smb.conf file, it will tell you just about everything you need to know about setting up Samba shares. By default, authentication is based on user accounts on the server; however, you can specify 'guest only = yes' to have specific shares not require authentication. You could also pair this with 'read only = yes' to keep random people from deleting your stuff in those shares. IIRC you should be able to sync this with LDAP if applicable, which is also outlined in the config as comments.
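For example, a pair of shares along those lines might look like this in smb.conf (share names and paths are just placeholders):

[media]
   path = /fs/media
   guest only = yes
   read only = yes

[private]
   path = /fs/private
   valid users = yourname
   read only = no

For the authenticated share, the account also needs a Samba password set on the server, e.g. 'smbpasswd -a yourname' (assuming the default password backend).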
post #12 of 18
What's it for? Can't help much until we know that...

If it's for home media then you'll never notice the difference between 5400 and 7200 rpm drives. If it's for 100 people at work running heavy DB apps then 8 drives won't be enough regardless of the spindle speed.

One more thing - a properly set up software RAID6 array on a low-end dual core (or even single core) modern CPU will eat a PERC for breakfast - provided your controller cards are fast enough, that is. You don't need a monster quad to handle your RAID parity calcs.
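Incidentally, you can see what the kernel's RAID 6 parity code can do on a given CPU - md benchmarks the available algorithms when the raid6 module loads. The output below is only an illustration of the format, not real numbers:

dmesg | grep raid6
# prints something like:
#   raid6: sse2x2 gen() 6044 MB/s
#   raid6: using algorithm sse2x2 gen() 6044 MB/s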
post #13 of 18
Thread Starter 
Mainly for a home server. I am getting the quad mainly because I also do a lot of VM work, and having a dual core with 4 virtual machines open at once can sort of slow things down; same reason I have 8 GB of RAM.
post #14 of 18
Quote:
Originally Posted by GH0 View Post
Mainly for a home server. I am getting the quad mainly because I also do a lot of VM work, and having a dual core with 4 virtual machines open at once can sort of slow things down; same reason I have 8 GB of RAM.
I'm really surprised there haven't been more comments on this.

With 4 VMs I can see now why disk I/O is such a concern. In that case, I'd go with the 7200 RPM drives; 4 VMs plus file sharing can add up quickly if you're not careful. I would probably run one of the VMs off the boot RAID array, and the other VMs and file sharing off the RAID 6 array. It depends on how the VMs are set up and used, and how heavy the expected file sharing will be.
post #15 of 18
Depends what the VMs are for.

And you may get better performance if you leave ALL the VMs on the OS RAID1 array rather than trying to run them off the RAID6 array, especially if there are small writes involved - RAID6 has to read and rewrite whole parity stripes to service a small write, which hurts random write performance.
post #16 of 18
Unless you're simultaneously booting/shutting down all four and using significant swap on them, I don't see spindle speed causing any issue for you.

Also, why do you need a RAID array for a couple of virtual machines? I've never used more than 50GB on my server... and that was running 5 or 6 servers and 3 or 4 desktops on it for testing. I ran out of memory, yes, but never ran out of speed.

If you go with a "real" RAID card - which might be out of your price range, but I guess that's neither here nor there - you would be offloading the RAID processing to the controller, and that would save you a lot of CPU.

Plus "real" RAID cards are usually a bit more fault tolerant and have battery-backed cache to preserve in-flight data in the event that the computer/UPS loses power.
post #17 of 18
Quote:
Originally Posted by the_beast View Post
Depends what the VMs are for.

And you may get better performance if you leave ALL the VMs on the OS RAID1 array rather than trying to run them off the RAID6 array, especially if there are small writes involved.
I agree. Ideally I'd run the VMs off a RAID10 array, like I do on my current machine. If you're short on physical space, you might consider using 4 Western Digital Scorpio Black 2.5" drives in RAID 10 for your OS & VMs.

Just to add my 2 cents to your OP: since this is for a home file/media server, 2TB storage disks will be fine - media workloads are mostly sequential access, and 2TB 5400RPM drives are very good at that.

You don't need a PERC 6/i for your setup; Linux's MD RAID with a dual core processor is more than enough, and with a decent level of caching and a 30-minute UPS you'll be set.

For administration, especially since you had a question on SMB, install Webmin. It'll make life much, much easier.
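A sketch of the usual install on a Debian-based distro - the filename is a placeholder for whatever the current release on webmin.com is:

# download the current webmin .deb from webmin.com first, then:
dpkg -i webmin_*.deb
apt-get -f install   # pulls in the perl dependencies if dpkg complains
# then log in at https://yourserver:10000/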
Edited by parityboy - 4/13/11 at 12:01pm
post #18 of 18
Quote:
Originally Posted by GH0 View Post
Alright, so, I am going to be using Linux. There are two threads, mainly because I thought the Linux portion was enough to warrant a copy in the Linux section.

What I have planned -
Get 2 drives to put in RAID 1 (both of which will be in an Icy Dock hot-swap bay, mainly for easy access).
Get 10 2TB drives that will be in a Linux software RAID 6 (or on a PERC6 RAID card; assume software RAID for now).

Now some of the main questions I have,

1) Will 5400 RPM drives be enough, or will the read/write speeds be terrible? I have heard two different opinions on this: one said it would be fine, the other said it would suck. I am adamant about 2 TB drives, so if that is a problem I will need to find 7200 RPM drives that work.

2) How should I set up the OS drive for this to be ideal? Where should /etc, /boot, swap, /var, /tmp, /home, etc. go, and in what order?

3) Would getting a PERC6 card be better, or just going with software RAID and a decent quad core CPU (I already have 8 GB of RAM)? I have a PERC5 card, but I have been told the PERC6 would be better for what I am going for, and I would rather be able to survive two drive failures instead of just one.

4) In what directory should I mount the RAID array? Specifically for ease of use (for multiple users, or just myself if people fail to use it). Should I just put it in /srv?

5) How would I set up configuration in smb that requires me to log in and authenticate as a user to access certain folders/files? (It's been quite some time since I had to mess with an smb configuration.)
A little late to this thread, but here are some of my opinions:

1) It all depends on what your requirements are... for example, if the storage is mostly about capacity (how many TB you have) and not so much about performance, then you should be fine, especially if all I/O is going to be network bound. Say everything you do goes through a single 1Gbps ethernet interface: 1Gbps is 1,000 megabits/sec, which divided by 8 bits per byte maxes out around 125MB/sec, so anything above that figure is pointless... as long as your disk subsystem can handle 125MB/sec, you're good to go. You can see some of my benchmark links in my sig, and you'll see that 125MB/sec is nothing with the number of drives you'll be using.

On the other hand, if you're going to be doing a lot of "local" disk I/O and want things to be really fast, then you have to take performance into account. Some rules of thumb:

I. Higher RPM usually means lower access times, which improves random I/O.
II. In RAID levels that use parity (RAID-4/5/6/50/60), the performance curve relative to the number of spindles has a peak and usually looks somewhat like a bell curve. Where that peak falls depends on a lot of factors: the performance of your HW RAID controller, your CPU for software RAID, etc. The only way to find your peak is to test various configurations and run benchmarks; I personally like iozone and bonnie++. Even if your peak spindle count turns out to be less than the 10 drives you're using, you can work around some of it by splitting the array into configurations like RAID-50/60, which might allow better performance. Again, benchmark to know for sure.
III. Smaller stripe sizes usually benefit random I/O patterns, whereas larger stripe sizes benefit sequential I/O. You need to figure out what you need and decide how to balance this.
IV. Your block device read-ahead buffer can have a huge effect on performance, so make sure you tune it (see the sketch after this list). There are other parameters you can tune to gain 5%-10%, but read-ahead is the biggest one in most cases I've dealt with... we're talking about 20%-30% differences.
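For example - a rough sketch with the tools mentioned above; the read-ahead value is only a starting point to tune from, and /data is a placeholder mount point:

# read-ahead is set in 512-byte sectors; check it, then raise it
blockdev --getra /dev/md0
blockdev --setra 16384 /dev/md0
# re-run your benchmarks after each change, e.g.:
bonnie++ -d /data -u nobody
iozone -a -g 4G -f /data/iozone.tmp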

2) I don't know what you're doing with your server... but I wouldn't split out too many partitions without a good reason to do so. On modern systems that mostly just results in inefficient use of space. I would normally have just:

/
/var
/tmp
/boot (maybe... sometimes the installer breaks this out anyway)

If your server is going to have a lot of users logging in and storing stuff in their home directories, I would also split out /home, or link it to your disk array.

3) I agree the PERC6 is the preferred choice over the PERC5, mainly because of its RAID-6 capability: you are using very large drives, which have a higher chance of URE incidents. There is much debate over HW RAID vs. SW RAID... I personally prefer HW RAID because I think it is easier to deal with when something goes wrong. With SW RAID, you usually have to type a few management commands to pull a dead drive, insert a new one, and rebuild the array. It's not difficult, but when you're in a "bad" situation, I don't want the burden of remembering a bunch of commands I only use once a year or so. As for performance, as long as you set it up correctly you should be fine... assuming you're not trying to build a 1GB/sec I/O subsystem.
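For reference, the commands in question are roughly these (placeholder device names):

# mark the dead member failed and pull it from the array
mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
# after physically swapping the disk, add the replacement;
# the rebuild starts automatically
mdadm /dev/md0 --add /dev/sdc
# rebuild progress shows up in:
cat /proc/mdstat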

4) I don't think it really matters... in my practice, I always recommend that clients create a /data or /data1 mount point and mount the array there. To keep things simple and avoid confusing other sysadmins, I then use 'bind mounts' to connect directories elsewhere to /data. E.g., if I want all my LDAP database files under /data, I might have a bind mount like:

/var/lib/ldap -> /data/var/lib/ldap

You can also use symlinks, but I find bind mounts a little cleaner, and they avoid some of the issues symlinks have.
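Concretely, that example would be set up like this, with an /etc/fstab entry to make it stick:

mkdir -p /data/var/lib/ldap
mount --bind /data/var/lib/ldap /var/lib/ldap
# /etc/fstab equivalent:
/data/var/lib/ldap   /var/lib/ldap   none   bind   0   0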

5) I don't really understand the question here... but generally speaking, I would suggest considering an LDAP directory for account information, with NSS/PAM/Samba set up to use it. That sort of "unifies" your account management... something to consider at least.

Oh, and although you never asked the question, here's another suggestion... definitely consider using LVM (LVM2) for your array, with an expandable file system like ext3 or XFS. LVM provides a few features that you should find very useful with such a large array:

I) Snapshot capability - just make sure you reserve some space to hold the snapshot data... I usually reserve at least 10%, but if you're going to have a lot of write activity, consider more. When you have a large data set, getting a consistent backup takes a long time, and you need to "freeze" the data for the duration... snapshots let you do exactly that. (Oh, and you are going to do backups, right? RAID is no substitute for backups; it's just an early warning system.)

II) Expandability - if you ever expand in the future, LVM will make it much easier to grow your disk array, and you can usually do these operations online without ever shutting down or rebooting the server. Use a decent file system too... I normally use XFS where I can... creating a huge file system with ext3 is rather painful...
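A rough outline of that setup on top of the array - volume names, sizes, and the /data mount point below are all placeholders:

# put the array under LVM, leaving ~10% of the VG free for snapshots
pvcreate /dev/md0
vgcreate datavg /dev/md0
lvcreate -l 90%VG -n datalv datavg
mkfs.xfs /dev/datavg/datalv
mkdir /data
mount /dev/datavg/datalv /data
# snapshot before a backup run, then drop it when the backup finishes
lvcreate -s -L 100G -n datasnap /dev/datavg/datalv
lvremove /dev/datavg/datasnap
# growing later is an online operation: extend the LV, then the file system
lvextend -L +2T /dev/datavg/datalv
xfs_growfs /data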
Edited by BLinux - 4/23/11 at 4:27am