Originally Posted by gonX
All dependent on the controller. The minimum is 2, and it's not worth it when you go above 4 drives.
Originally Posted by That_guy3
How many drives can I have in RAID 0?
In an article at Tom's Hardware they tested an add-on RAID SATA card (which is in any case a lot better than onboard RAID). They found that 5 disks was the sweet spot; adding more disks did not improve speed any further.
Things to consider: a modern HDD has a sustained transfer rate above 60 MB/s. Onboard SATA controllers sit in the southbridge and connect to the I/O bus through a 1x PCIe lane at best (some older boards use plain PCI). PCIe 1x gives 250 MB/s and PCI only 133 MB/s in theory; in practice PCIe 1x reaches about 225 MB/s and PCI about 115 MB/s on average.
If each HDD can sustain 65 MB/s, four of them will already be at the limit of the PCIe 1x interface (4 × 65 = 260 MB/s, more than the 250 MB/s the lane can carry). That's why gonX and others have said that the limit is 4 HDDs.
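To make the arithmetic concrete, here's a small back-of-the-envelope sketch (the function name and structure are mine; the MB/s figures are the ones quoted above) that works out how many drives fit under a given bus before it becomes the bottleneck:

```python
def drives_before_saturation(bus_mb_s: float, drive_mb_s: float) -> int:
    """Whole number of drives whose combined sustained rate still fits
    within the bus bandwidth; one more drive and the bus throttles them."""
    return int(bus_mb_s // drive_mb_s)

# Real-world throughput figures from the post above (not my measurements)
PCIE_X1 = 225.0  # MB/s, practical PCIe 1x
PCI     = 115.0  # MB/s, practical PCI
HDD     = 65.0   # MB/s sustained per drive

print(drives_before_saturation(PCIE_X1, HDD))  # 3 -> a 4th drive hits the ceiling
print(drives_before_saturation(PCI, HDD))      # 1 -> even 2 drives saturate PCI
```

This is why the "4 drives" rule of thumb holds for onboard controllers: the third drive still scales, the fourth is already bumping against the lane's limit, and anything beyond that is wasted.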
More powerful server-level RAID controllers can be connected to a PCIe 4x slot (rare on mobos these days) for a maximum of 1 GB/s, which obviously allows more HDDs to be added to the RAID 0 array.
If one HDD from a RAID 0 array fails, all the data is lost. RAID 5 allows a bit of redundancy: one drive can fail and the data is not lost, but the array runs at very low speed until the faulty disk is replaced. RAID 5 reserves one disk's worth of capacity for parity, so if you put in 4 disks you'll only be able to use the space of 3. RAID 5 also adds massive CPU overhead; onboard controllers can handle it, but it's better to pass.
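The capacity trade-off above can be summed up in a few lines (a sketch of my own, just encoding the rules stated in this post, not any real RAID tooling):

```python
def usable_disks(level: int, n_disks: int) -> int:
    """Disks' worth of usable capacity for the RAID levels discussed here."""
    if level == 0:
        return n_disks       # striping: full capacity, zero redundancy
    if level == 5:
        if n_disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return n_disks - 1   # one disk's worth of space goes to parity
    raise ValueError("only RAID 0 and RAID 5 covered in this post")

print(usable_disks(0, 4))  # 4 -> all capacity, but one failure loses everything
print(usable_disks(5, 4))  # 3 -> matches the post: 4 disks, data space of 3
```

So with RAID 5 you trade one disk of capacity (plus CPU time for parity) for the ability to survive a single drive failure.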
Hope it clears up some stuff, cheers!