Originally Posted by Amr0d
Is this still a thing? I am currently running a QNAP TS-420 with a single 4TB drive because I never had the need for more. Now I have upgraded to 4x4 TB and I was thinking about a RAID 5 first, then googled a bit and read a lot about how bad RAID 5 is for your NAS and that I should use RAID 10 instead. Why? How high is the risk of losing a drive and why should I lose two drives at once? I am running with a single drive for about 4 years now and never had problems.
These are not simple questions since many factors are at play. RAID 5 is fine if you want to get the most usable space out of your drives, or if speed is not a major factor. But you only get one disk of parity: the array will still function if one disk dies, but lose another and the whole array goes down. RAID 10 stripes data across mirrored pairs of drives. So, RAID 1 (mirrored) plus RAID 0 (striped) gives you RAID 10. Again, one drive can fail and you are OK. If its mirror fails as well, the array goes down.
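To see why RAID 5 survives exactly one failure, here's a toy sketch (not any real RAID implementation, just the XOR idea behind single parity): the parity block is the XOR of the data blocks, so any one lost block can be rebuilt from the survivors, but a second loss leaves you with nothing to rebuild from.

```python
# Toy single-parity (RAID 5 style) stripe: three data "disks"
# plus one parity block. All values are made up for the demo.
disk1 = bytes([0x10, 0x20, 0x30])
disk2 = bytes([0x0A, 0x0B, 0x0C])
disk3 = bytes([0xFF, 0x00, 0x55])

# Parity block: XOR across the stripe, byte by byte
parity = bytes(a ^ b ^ c for a, b, c in zip(disk1, disk2, disk3))

# Simulate losing disk2, then rebuild it from the rest + parity
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(disk1, disk3, parity))
assert rebuilt == disk2  # one failure is fully recoverable

# Lose a SECOND disk and there isn't enough information left --
# that's why a second failure during a rebuild kills the array.
```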
Me personally, I only use RAID in storage servers. And on my FreeNAS server I run RAIDZ1, ZFS. ZFS trumps normal RAID options as far as data integrity goes, but it has MASSIVE memory overhead. I have 32 GB of RAM in the server and it uses all of it. But FreeNAS/ZFS will use all the RAM you give it. The more the better. ZFS is very complicated, but the basic thing is that it goes deeper into data checksums than any other RAID for better protection from things like bit rot. You sort of have a tree structure of checksums from memory all the way down to the disk sectors. That is a really rough and dirty explanation. It can also be set up to do data scrubs, which basically run checksums across the whole drive array. It also runs these checksums every time data is accessed and is smart enough to know which drives have the right data and which ones are corrupted. It will then fix the corrupted bits on the fly without needing to rebuild the drive/array. Very cool stuff. There are pages and pages of info on ZFS. More than I have managed to read so far. lol But, that's the most basic bits. It's what is taking over the large storage world as it's making its way into various Linux server distros.
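The "tree of checksums" idea can be sketched in a few lines. This is a toy model in Python, not real ZFS code: the key design point is that a block's checksum is stored in its *parent*, so a corrupted block can't vouch for itself, and a redundant copy can be verified against that same parent checksum and used to heal the bad one on the fly.

```python
import hashlib

def sha(data: bytes) -> str:
    """Checksum helper; real ZFS uses fletcher4 or sha256."""
    return hashlib.sha256(data).hexdigest()

# A data block and the checksum the PARENT block pointer stores for it.
block = b"some file data"
expected = sha(block)

# Two mirror copies of the block; bit rot flips a byte in copy A.
copy_a = bytearray(block)
copy_a[0] ^= 0x01          # silent corruption on one drive
copy_b = bytes(block)      # the other mirror is intact

def read_with_heal(copies, expected_sum):
    """Return the first copy whose checksum matches the parent's.

    This is the toy version of ZFS self-healing: the bad copy is
    detected at read time, and the good copy is what you get back
    (ZFS would also rewrite the bad copy with the good data).
    """
    for c in copies:
        if sha(bytes(c)) == expected_sum:
            return bytes(c)
    raise IOError("all copies corrupt -- restore from backup")

good = read_with_heal([copy_a, copy_b], expected)
assert good == block  # corruption detected, intact copy returned
```

A scrub is just this same check run proactively over every block instead of waiting for a read.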
On the drive failure side, there is no real answer. I have known people who lost whole arrays when all the drives crapped out around the same time. They were all from the same lot from the manufacturer, so they all shared some defect. That being said, how important is data integrity for you? Can you afford the downtime of rebuilding everything? If not, more redundancy is best, like RAIDZ2 or RAIDZ3. You can lose two or three drives respectively with those flavors of ZFS. My server mostly holds my movie collection, so I am willing to risk a single drive failure. Regardless, you should have a backup of your data. Either another machine, an external drive, the cloud, something.
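For your 4x4 TB setup, the space tradeoff between the options above is simple arithmetic (raw TB before filesystem overhead, so real numbers will come out a bit lower):

```python
# Back-of-the-envelope usable space for 4 x 4 TB drives
drives, size_tb = 4, 4

raid5_or_z1 = (drives - 1) * size_tb   # 1 disk's worth of parity
raid10      = drives * size_tb // 2    # half the raw space goes to mirrors
raidz2      = (drives - 2) * size_tb   # 2 disks' worth of parity

print(raid5_or_z1, raid10, raidz2)  # -> 12 8 8
```

So with only four drives, RAIDZ2 costs you the same space as RAID 10 but survives *any* two failures, while RAID 10 only survives two if they land in different mirror pairs. That's worth weighing before you commit.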