
What os for Home Server - Page 2

post #11 of 27
Quote:
Originally Posted by jrl1357 View Post

I beg to differ. ZFS is self-healing, has easy snapshots and rollbacks in case you screw up (which also take up less space than full backups, most of the time), easy movement of zpools, raid-z, AND is faster overall. That's not even beginning to mention the other benefits of unix/unix-like OSes. But if you want to pay $50 or more for something not as good as something that's 100% free, be my guest.

The main reason I don't like ZFS is that expanding an array is not straightforward and easy. I prefer real hardware raid, but we are not talking about any raid here, so raid-z is not relevant. NTFS, when using VSS, can just as easily provide snapshots.

Isn't the self-healing function of ZFS only for mirrored arrays? I don't agree with the generalized "faster overall" statement.
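For context on the snapshot comparison above, the ZFS side looks roughly like this (a minimal sketch; the pool and dataset names are hypothetical). A scrub is also what triggers ZFS's self-repair from redundant copies:

Code:
  zfs snapshot tank/media@before-cleanup    # cheap copy-on-write snapshot
  zfs list -t snapshot                      # list existing snapshots
  zfs rollback tank/media@before-cleanup    # roll the dataset back if you screw up
  zpool scrub tank                          # read everything, repair bad blocks from redundancy
  zpool status -v tank                      # shows scrub progress and any errors found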
post #12 of 27
Depends how you do raid. If you do a single zpool with raidz, it's very easy to add drives to the pool. Also, raidz gets around the raid5 write hole.
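As a rough sketch of what that pool growth looks like (pool and device names are hypothetical), you grow the pool by adding another whole raidz vdev, not by adding single disks to the existing one:

Code:
  # start with one 4-disk raidz vdev
  zpool create tank raidz da0 da1 da2 da3
  # later, grow the pool by adding a second raidz vdev; ZFS stripes across both
  zpool add tank raidz da4 da5 da6 da7
  zpool status tank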
post #13 of 27
Quote:
Originally Posted by jrl1357 View Post

Depends how you do raid. If you do a single zpool with raidz, it's very easy to add drives to the pool. Also, raidz gets around the raid5 write hole.

The Raid 5 write hole is a thing of the past, my friend. A recent (in-warranty) quality controller (Dell, HP, LSI, Adaptec, etc.) is not going to have that problem...and honestly, who still uses Raid 5 in a production environment? You should not be using Raid 5 for an array over ~8TB, period, and in most cases Raid 5 has been replaced with Raid 10.

ZFS also can't do nested raids, can it? (Raid 60 for example)
post #14 of 27
Quote:
Originally Posted by tycoonbob View Post

Quote:
Originally Posted by jrl1357 View Post

Depends how you do raid. If you do a single zpool with raidz, it's very easy to add drives to the pool. Also, raidz gets around the raid5 write hole.

The Raid 5 write hole is a thing of the past, my friend. A recent (in-warranty) quality controller (Dell, HP, LSI, Adaptec, etc.) is not going to have that problem...and honestly, who still uses Raid 5 in a production environment? You should not be using Raid 5 for an array over ~8TB, period, and in most cases Raid 5 has been replaced with Raid 10.

ZFS also can't do nested raids, can it? (Raid 60 for example)

Don't know. Why you would spend twice as much money on drives (certainly when using 8+ disks, the minimum for raid6+0, correct?) for a home server just for raid 0 is beyond me, and raidz2 can be used to similar effect as raid6. Further still, you could use raidz3.

EDIT----

How many disks is the OP considering?
post #15 of 27
Quote:
Originally Posted by jrl1357 View Post

Don't know. Why you would spend twice as much money on drives (certainly when using 8+ disks, the minimum for raid6+0, correct?) for a home server just for raid 0 is beyond me, and raidz2 can be used to similar effect as raid6. Further still, you could use raidz3.
EDIT----
How many disks is the OP considering?

I'm confused about what you just said. Yes, 8 drives is the minimum for a Raid 60, but what do you mean by "why would you spend twice as much on drives for a home server just for raid 0"? Raid 60 is two or more Raid 6s, striped...similar to striping two raidz2 pools. I'm aware of raidz3, with 3 parity disks...but you are losing even more write performance there. Raid 60 would allow for 4 drives to die (2 in each Raid 6 subarray) and the array would still be fine. Of course, you should replace any failed disks immediately.

Not sure how many disks the OP is considering, let alone if Raid is involved (software, firmware, or hardware).
post #16 of 27
Thread Starter 
Well, this will be an over-time build I guess; I have WS 2008 R2 now. I would probably keep expanding drives as I get them, meaning I would be using raid. I dunno if it would be better to get a raid card or just use the SATA ports on the mobo. Basically, I want to be able to stream movies, music, and pictures to other computers and media devices, and to be able to sit at any of my computers and store files on the server and back up to it. And maybe, if possible, host movies on a secured website to be able to watch from any computer, or whatever.
post #17 of 27
Quote:
Originally Posted by Thiefofspades View Post

Well, this will be an over-time build I guess; I have WS 2008 R2 now. I would probably keep expanding drives as I get them, meaning I would be using raid. I dunno if it would be better to get a raid card or just use the SATA ports on the mobo. Basically, I want to be able to stream movies, music, and pictures to other computers and media devices, and to be able to sit at any of my computers and store files on the server and back up to it. And maybe, if possible, host movies on a secured website to be able to watch from any computer, or whatever.

Then I see no reason not to go ZFS.
Quote:
Originally Posted by tycoonbob View Post

Quote:
Originally Posted by jrl1357 View Post

Don't know. Why you would spend twice as much money on drives (certainly when using 8+ disks, the minimum for raid6+0, correct?) for a home server just for raid 0 is beyond me, and raidz2 can be used to similar effect as raid6. Further still, you could use raidz3.
EDIT----
How many disks is the OP considering?

I'm confused about what you just said. Yes, 8 drives is the minimum for a Raid 60, but what do you mean by "why would you spend twice as much on drives for a home server just for raid 0"? Raid 60 is two or more Raid 6s, striped...similar to striping two raidz2 pools. I'm aware of raidz3, with 3 parity disks...but you are losing even more write performance there. Raid 60 would allow for 4 drives to die (2 in each Raid 6 subarray) and the array would still be fine. Of course, you should replace any failed disks immediately.

Not sure how many disks the OP is considering, let alone if Raid is involved (software, firmware, or hardware).

If you were going to stripe them in raid0 (6+0), then you would need twice the drives for the same storage space.
post #18 of 27
Quote:
Originally Posted by tycoonbob View Post

ZFS also can't do nested raids, can it? (Raid 60 for example)

If you put two raidz2 (raid6) arrays (vdevs) into the same ZFS pool, you get the equivalent of Raid 60. This is because ZFS stripes across all top-level vdevs in a pool.

Although for an 8 disk setup this is kind of a pointless configuration, because you'd be better off having four 2-disk mirrored vdevs in the pool. You will have the same capacity with similar redundancy, but with twice the IOPS. This is because a ZFS vdev, no matter how wide it is (whether it is 4 or 10 disks), still has roughly the IOPS of a single disk. In addition, you will have better expandability, because you can just add more 2-disk mirrors to the pool instead of adding another 4-disk raidz2.

Most people make the mistake of thinking you expand your ZFS storage by trying to increase the size of a vdev (which is not possible), or by swapping out each disk for a bigger one, which is just poor planning and a bad habit.
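A rough sketch of the two 8-disk layouts being described, using hypothetical device names (run one create command or the other, not both):

Code:
  # the "raid 60" equivalent: two raidz2 vdevs, which the pool stripes across
  zpool create tank raidz2 da0 da1 da2 da3 raidz2 da4 da5 da6 da7
  # the alternative: four 2-disk mirrors, same usable capacity, about twice the IOPS (4 vdevs vs 2)
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7
  # later expansion means adding a new vdev to the pool, not widening an existing one
  zpool add tank mirror da8 da9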
Edited by CaptainBlame - 10/10/12 at 5:36pm
post #19 of 27
Quote:
Originally Posted by jrl1357 View Post

If you were going to stripe them in raid0 (6+0), then you would need twice the drives for the same storage space.

Twice the drives for the same storage space? That is incorrect with regard to Raid 60. If you were mirroring two Raid 6 arrays into a Raid 61 (theoretically; I don't think that exists at a hardware level -- you could build two Raid 6s at the hardware level, then mirror at the software level for a Raid 61), then yes...double the drives with no increase in space.

Let's say you have 8 2TB drives in a HARDWARE Raid 60.

4 drives in Raid6 array #1 (~4TB usable space)
4 drives in Raid6 array #2 (~4TB usable space)

Then you stripe (Raid 0) these two Raid 6 arrays, and you now have ~8TB of usable space.

If you are talking about just an 8 drive Raid 60, you might as well do a Raid 10, since the performance of a 10 will be better (write speeds will be much better)...but if you grow that Raid 60 to 20 drives, for example, a Raid 10 is not as logical (if storage space is your ultimate goal). A 20 drive Raid 60 with 2TB drives would yield space such as:
10 drives in Raid6 array #1 (~16TB)
10 drives in Raid6 array #2 (~16TB)
Once striped, usable space is ~32TB.

OR

5 drives in Raid6 array #1 (~6TB)
5 drives in Raid6 array #2 (~6TB)
5 drives in Raid6 array #3 (~6TB)
5 drives in Raid6 array #4 (~6TB)
Once striped, usable space is ~24TB.

OR

6 drives in Raid6 array #1 (~8TB)
6 drives in Raid6 array #2 (~8TB)
6 drives in Raid6 array #3 (~8TB)
Once striped, usable space is ~24TB, with 2 hot spares.

So Raid 60 can go a number of different ways, depending on the number of drives used. Striping identical arrays will always multiply usable storage space by the number of arrays striped.
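As a quick sanity check on those figures, usable capacity works out to (disks per Raid 6 group minus 2 parity disks) x number of groups x drive size. A throwaway shell calculation for the 20-drive, 5-per-group example above:

Code:
  per_group=5; groups=4; size_tb=2
  echo "usable: $(( (per_group - 2) * groups * size_tb ))TB"   # prints: usable: 24TB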
Quote:
Originally Posted by CaptainBlame View Post

If you put two raidz2 (raid6) arrays (vdevs) into the same ZFS pool, you get the equivalent of Raid 60. This is because ZFS stripes across all top-level vdevs in a pool.
Although for an 8 disk setup this is kind of a pointless configuration, because you'd be better off having four 2-disk mirrored vdevs in the pool. You will have the same capacity with similar redundancy, but with twice the IOPS. This is because a ZFS vdev, no matter how wide it is (whether it is 4 or 10 disks), still has roughly the IOPS of a single disk. In addition, you will have better expandability, because you can just add more 2-disk mirrors to the pool instead of adding another 4-disk raidz2.
Most people make the mistake of thinking you expand your ZFS storage by trying to increase the size of a vdev (which is not possible), or by swapping out each disk for a bigger one, which is just poor planning and a bad habit.

I wasn't aware ZFS striped at the top level like that, but that is the same as a nested hardware raid. Yes, in an 8 drive config it's better to go with a Raid 10 over a Raid 60...unless you plan to grow that array. IOPS would increase no matter if it was hardware, firmware, or software...simply because it wouldn't have to do parity calculations anymore. It's simple mirrors that get striped.
If a ZFS vdev only has the IOPS of a single drive, it seems performance can be more limited than with a hardware raid...if I understand you correctly.

So to confirm: you cannot expand a vdev (array), but instead create a new vdev (array) and stripe it with the existing vdev(s)?
post #20 of 27
Quote:
Originally Posted by tycoonbob View Post

If a ZFS vdev only has the IOPS of a single drive, it seems performance can be more limited than with a hardware raid...if I understand you correctly.
So to confirm: you cannot expand a vdev (array), but instead create a new vdev (array) and stripe it with the existing vdev(s)?

The similarities between raidz and raid end with capacity and parity disk count. The way the data is written is different; you will probably find that the reason the IOPS of a ZFS vdev is that of a single disk is the same reason there is no write hole (I'm just guessing here). When designing a high-performing ZFS config, you should always aim for more vdevs rather than wider vdevs in a pool.

Keep in mind raid has a write hole despite what you think; even if you have a controller with battery backup, you still need your disks to have battery backup as well. Then there are a whole bunch of other reasons that make ZFS more desirable.
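To put rough numbers on "more vdevs rather than wider vdevs", here is a sketch of three ways to lay out the same 12 hypothetical disks (pick one create command, not all three); usable capacity drops as vdev count, and therefore random IOPS, goes up:

Code:
  # one wide raidz2 vdev: capacity of 10 disks, random IOPS of roughly 1 vdev
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11
  # two 6-disk raidz2 vdevs: capacity of 8 disks, roughly twice the random IOPS
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11
  # six 2-disk mirrors: capacity of 6 disks, roughly six vdevs' worth of random IOPS
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 mirror da8 da9 mirror da10 da11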