So, for my personal rig, I decided to go a little overboard with the storage solution. I initially bought 2 Crucial M550 256GB drives, and then proceeded to buy 2 Samsung 850 EVO 250GB drives a few months later when they were released. I decided this was a great opportunity to look at different RAID 0 configurations with SSDs since I had so many, so I did and recorded all my results!
First off, I guess I should say that I'm using the integrated Intel RAID controller on a Z97 chipset motherboard. You should also know that I used CrystalDiskMark 3.0.3 x64 for all of my tests and recorded every value I got into a spreadsheet.
The Tests:
So I wanted to test a few things. Firstly, I wanted to test the performance of RAID 0 with different stripe sizes, so every RAID has been tested with all of the stripe sizes available on the Intel RAID controller (4KB, 8KB, 16KB, 32KB, 64KB, and 128KB). If you're not sure what stripe size actually controls, there's a quick sketch after this list.
- As for the categories, first I wanted to test simple 2xM550 and 2x850 EVO RAIDs so I could get some numbers up here and give anyone looking to do this simple config a ballpark of what to expect.
- Secondly, I wanted to test a simple config where I mixed one of each drive to see how it performs. While the M550 and 850 EVO perform pretty similarly in most categories, they use completely different internals, right down to the controller, so I was interested to see how they would pair together.
- Lastly, I wanted to test all 4 SSDs in one large RAID 0 array, since this is what I will be running on my main PC in the long run (with frequent backups, of course), and I really wanted to push the array to the bandwidth limits of the PCH and see what the Intel RAID controller can do when given an abundance of competent hardware.
- On top of these 3 RAID categories, I also took benchmarks of just a single M550, a single 850 EVO and a WD Black 1TB drive for reference (or for anyone looking for those numbers).
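If you're not familiar with what stripe size actually controls: RAID 0 splits data into stripe-sized chunks and rotates them across the disks in the array. Here's a minimal Python sketch of that mapping (just an illustration on my part, not anything pulled from the Intel controller):

```python
def raid0_locate(offset_bytes: int, stripe_size: int, num_disks: int):
    """Return (disk_index, byte_offset_on_that_disk) for a logical byte offset."""
    stripe_number = offset_bytes // stripe_size   # which stripe-sized chunk the byte falls in
    disk_index = stripe_number % num_disks        # chunks rotate round-robin across the disks
    stripe_on_disk = stripe_number // num_disks   # how many earlier chunks live on that disk
    offset_in_stripe = offset_bytes % stripe_size
    return disk_index, stripe_on_disk * stripe_size + offset_in_stripe

# Where logical byte 40,000 lands for each stripe size the Intel controller offers, with 2 disks:
for stripe_kb in (4, 8, 16, 32, 64, 128):
    disk, disk_offset = raid0_locate(40_000, stripe_kb * 1024, 2)
    print(f"{stripe_kb:>3}KB stripe -> disk {disk}, byte {disk_offset} on that disk")
```

The takeaway is that stripe size just sets how big each chunk is before the controller moves on to the next disk, which is why it mostly matters for large transfers that span many chunks.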
The Testing Process:
To get the data, I created each RAID by restarting my computer and pressing Ctrl+I when prompted to get into the Intel RAID configuration utility, where I deleted the old RAID (if applicable) and created the new RAID 0 array with the drives and stripe size for the corresponding test. I then booted into Windows from a separate drive and opened Windows Disk Management, where the RAID was initialized with a GUID Partition Table. After this, an NTFS partition was created with all default values using all of the unallocated space on the RAID. Finally, CrystalDiskMark was opened and the tests were run. In the case of anomalies in the data, the tests were run again. If the anomalies persisted, I would restart the computer, delete and re-create the RAID, and re-test everything with the same settings. If I had to re-test, I kept the numbers from the re-test and discarded the original results.
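I did all of this by hand in Disk Management, but for anyone who wants to script the wipe/GPT/NTFS step between runs, here's a rough sketch driving Windows' diskpart from Python. This is not what I actually ran: the disk number is a placeholder you'd need to check with diskpart's `list disk` first, and it has to run from an elevated prompt.

```python
import os
import subprocess
import tempfile

DISK_NUMBER = 2  # placeholder: the RAID volume's disk number on YOUR system

# diskpart script equivalent to the manual Disk Management steps:
# wipe the disk, convert to GPT, create one partition, quick-format NTFS.
script = f"""select disk {DISK_NUMBER}
clean
convert gpt
create partition primary
format fs=ntfs quick
assign
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    path = f.name

try:
    # diskpart /s runs the commands from the script file (needs admin rights)
    subprocess.run(["diskpart", "/s", path], check=True)
finally:
    os.remove(path)
```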
For the single-drive tests, the drive was formatted and all partitions were removed. A new NTFS partition was then created with all of the default values using all of the unallocated space on the drive, and then the benchmarks from CrystalDiskMark were run and recorded.
All CrystalDiskMark tests were run with the default settings (test data was kept random, 5 passes for each test at 1000MB per test).
The Results:
Some Numbers If You Wanted Them:
CLICK HERE FOR EXCEL SPREADSHEET WITH ALL DATA/CHARTS
Conclusions:
In general, pretty much all write speeds were completely unaffected by the stripe size of the RAID. Only read speeds were affected, and even then, the changes in speed between the different stripe sizes were basically negligible in all tests except the sequential read tests.
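As a rough back-of-the-envelope (my own illustration, not something CrystalDiskMark reports), here's how many stripe-sized chunks the 1000MB sequential test gets split into at each stripe size, which hints at why sequential transfers are the one place stripe size could plausibly show up:

```python
# How many stripe-sized chunks a 1000MB sequential transfer is split into.
TEST_SIZE = 1000 * 1024 * 1024  # 1000MB, the CrystalDiskMark test size used here

for stripe_kb in (4, 8, 16, 32, 64, 128):
    chunks = TEST_SIZE // (stripe_kb * 1024)
    print(f"{stripe_kb:>3}KB stripe: {chunks:>7,} chunks")
```

Small random reads and writes mostly fit inside a single chunk no matter what, so it makes sense that only the big sequential transfers, which span thousands of chunks, reacted to the stripe size at all.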
So what does this mean in practice? Well, the larger the stripe size you use in your RAID, the more space you will waste, especially with lots of small files (and especially if your RAID is your boot device). This means that you should set your stripe size pretty low to optimize space, since any performance difference between stripe sizes will be basically unnoticeable or negligible unless your work relies heavily on lots of very large sequential reads, which is probably not the case for basically any consumer.
I personally decided to go with a 16KB stripe size after performing these tests, which is what I have always suggested even before running them. It's also what Intel suggests as a stripe size for SSDs in RAID 0. The low stripe size lets you save most of your space without the potential performance issues of the tiny 4KB and 8KB stripe sizes.
These tests have also shown that mixing and matching different SSDs in an array is fine. The only thing to keep in mind is that if you have one drive that performs much worse in one category than the other drive(s) in the array, it will bring the speed of the array down in that category, as expected. The M550 and 850 EVO are fairly similar-performing drives in almost all categories, so it can be kind of difficult to tell in these benchmarks, but it appears that when you mix two different drives in RAID, their combined performance will be the average of how the two drives would perform if each were put in a RAID of two similar disks. For example, in the 4K read test, the 850 EVO RAID was almost hitting 45MB/s while the M550s were hitting around 32.5MB/s in RAID; when combined, they hit around 38 or 39MB/s, right in the middle of those two marks. Something important to note when mixing different drives in a RAID is that the read/write speeds at a high queue depth take a major hit unless you are running a very large stripe size (like 64KB or 128KB), but this is about the only thing that suffers badly when mixing different drives, and it isn't a huge deal for most consumers.
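As a quick sanity check of that averaging observation, using the rounded 4K read numbers from my results above:

```python
# The mixed-drive RAID landed at roughly the average of what each matched
# pair did on its own. Numbers are rounded 4K read speeds from my results (MB/s).
evo_raid_4k_read = 45.0    # 2x 850 EVO in RAID 0
m550_raid_4k_read = 32.5   # 2x M550 in RAID 0

mixed_estimate = (evo_raid_4k_read + m550_raid_4k_read) / 2
print(f"Estimated mixed-array 4K read: {mixed_estimate:.2f} MB/s")
# Prints 38.75 MB/s, right in the 38-39MB/s range the mixed array actually hit.
```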
TL;DR:
Stripe size is basically negligible for RAID 0 except in a few specific and rare cases. Since a larger stripe size leads to more wasted space, I would recommend a 16KB stripe for SSD RAID 0 (and so does Intel) regardless of the number of disks in the RAID. Mixing different SSDs in a RAID is fine, and their performance is as expected (the average of the two drives' expected performance from their respective RAID 0 arrays with similar drives), except that high queue depth reads/writes can take a hit, though that hit shrinks at very high stripe sizes.