Overclock.net › Forums › Components › Hard Drives & Storage › RAID Controllers and Software › SSD RAID 0 Stripe Size Differences (Benchmarks) + RAID 0 Mixing Different Drives vs Same Drive (Benchmarks)

SSD RAID 0 Stripe Size Differences (Benchmarks) + RAID 0 Mixing Different Drives vs Same Drive (Benchmarks)

post #1 of 8
Thread Starter 
Intro:

So, for my personal rig, I decided to go a little overboard with the storage solution. I initially bought 2 Crucial M550 256GB drives, and then bought 2 Samsung 850 EVO 250GB drives a few months later when they were released. Since I had so many SSDs, I decided this was a great opportunity to compare different RAID 0 configurations, so I tested them and recorded all my results!

First off, I should say that I'm using the integrated Intel RAID controller on a Z97 chipset motherboard. You should also know that I used CrystalDiskMark 3.0.3 x64 for all of my tests and recorded every value I got into a spreadsheet.

The Tests:

I wanted to test a few things. First, the performance of RAID 0 at different stripe sizes, so every array was tested at all of the stripe sizes available on the Intel RAID controller (4KB, 8KB, 16KB, 32KB, 64KB, and 128KB).

  • As for the categories, I wanted to test a simple 2x M550 array as well as a 2x 850 EVO array, and put the numbers up here so other people looking to do this simple config know roughly what they may be getting
  • Secondly, I wanted to test a config where I mixed one of each drive to see how it performs. While the M550 and 850 EVO perform pretty similarly in most categories, they use completely different internals, right down to the controller, so I was interested to see how they would pair together
  • Lastly, I wanted to test all 4 SSDs in one large RAID 0 array, since this is what I will be running on my main PC in the long run (with frequent backups, of course), and I really wanted to push the array to the bandwidth limits of the PCH and see what the Intel RAID controller can do when given an abundance of competent hardware
  • On top of these 3 RAID categories, I also took benchmarks of a single M550, a single 850 EVO, and a WD Black 1TB drive for reference (or for anyone looking for those numbers)
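For anyone counting, the full test matrix works out like this (just a quick Python sketch to enumerate the runs; the config names are my shorthand for the categories above):

```python
from itertools import product

# RAID 0 configurations tested (shorthand for the categories above)
raid_configs = ["2x M550", "2x 850 EVO", "1x M550 + 1x 850 EVO", "4-drive mix"]
# Stripe sizes offered by the Intel RAID option ROM, in KB
stripe_sizes_kb = [4, 8, 16, 32, 64, 128]
# Single-drive baselines (stripe size doesn't apply)
baselines = ["1x M550", "1x 850 EVO", "1x WD Black 1TB"]

# Every array gets benchmarked at every stripe size
raid_runs = list(product(raid_configs, stripe_sizes_kb))
total_runs = len(raid_runs) + len(baselines)
print(total_runs)  # 4 configs x 6 stripe sizes + 3 baselines = 27 benchmark runs
```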
The Testing Process:

To gather the data, each array was created by restarting my computer and pressing Ctrl+I when prompted to enter the Intel RAID configuration utility, where I deleted the old array (if applicable) and created the new RAID 0 array with the drives and stripe size for the corresponding test. I then booted into Windows from a separate drive and opened Windows Disk Management, where the array was initialized with a GUID Partition Table. After this, an NTFS partition was created with all default values, using all of the unallocated space on the array. Finally, CrystalDiskMark was opened and the tests were run. If the data showed anomalies, the tests were run again; if the anomalies persisted, I restarted the computer, deleted and re-created the array, and re-tested everything with the same settings. When I had to re-test, I kept the numbers from the re-test and discarded the original results.

For the single-drive tests, the drive was formatted and all partitions were removed. A new NTFS partition was then created with all default values, using all of the unallocated space on the drive, and then the CrystalDiskMark benchmarks were run and recorded.

All CrystalDiskMark tests were run with the default settings (random test data, 5 passes per test, 1000MB per test).

The Results:

Graphs:

[Benchmark charts for each RAID configuration at each stripe size were embedded here.]
Some Numbers If You Wanted Them:

[Data tables were embedded here.]

CLICK HERE FOR EXCEL SPREADSHEET WITH ALL DATA/CHARTS
Conclusions:

In general, pretty much all write speeds were completely unaffected by the stripe size of the array. Only read speeds were affected, and even then, the differences between stripe sizes were basically negligible in all tests except the sequential read tests.

What does this mean, you might ask? Well, the larger the stripe size in your RAID, the more space you will waste, especially with lots of small files (and especially if your RAID is your boot device). This means you should set your stripe size fairly low to optimize space, since any performance difference between stripe sizes will be basically unnoticeable or negligible unless your work relies heavily on lots of very large sequential reads, which is probably not the case for basically any consumer.

I personally decided to go with a 16KB stripe size after performing these tests, which is what I have always suggested even before running them. It's also what Intel suggests as a stripe size for SSDs in RAID 0. This saves most of your space thanks to the low stripe size, while avoiding the potential performance issues of the tiny 4KB and 8KB stripe sizes.


These tests have also shown that mixing and matching different SSDs in an array is fine. The only thing to keep in mind is that if one drive performs much worse in some category than the other drive(s) in the array, it will drag the array's speed down in that category, as expected. The M550 and 850 EVO perform fairly similarly in almost all categories, so it can be hard to tell in these benchmarks, but it appears that when you mix two different drives in RAID 0, their combined performance is the average of what the two drives would deliver in matched two-drive arrays. For example, in the 4K read test, the 850 EVO array was hitting almost 45MB/s while the M550 array was hitting around 32.5MB/s; combined, they hit around 38 or 39MB/s, right in the middle of those two marks. One important caveat when mixing different drives in a RAID: read/write speeds at high queue depths take a major hit unless you are running a very large stripe size (like 64KB or 128KB). That is about the only thing that suffers badly when mixing drives, and it isn't a huge deal for most consumers.
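To make that averaging claim concrete, here's the arithmetic as a tiny Python sketch (the MB/s figures are the approximate 4K-read numbers quoted above):

```python
# Approximate 4K read results from the matched-pair arrays above (MB/s)
evo_pair_4k = 45.0   # 2x 850 EVO in RAID 0
m550_pair_4k = 32.5  # 2x M550 in RAID 0

# Mixing one of each drive landed right at the average of the two matched pairs:
predicted_mixed_4k = (evo_pair_4k + m550_pair_4k) / 2
print(predicted_mixed_4k)  # 38.75 -- right in the observed 38-39 MB/s range
```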

TL;DR:

Stripe size is basically negligible for RAID 0 except in a few specific and rare cases. Since a higher stripe size leads to more wasted space, I would recommend a 16KB stripe for SSD RAID 0 (and so does Intel), regardless of the number of disks in the array. Mixing different SSDs in a RAID is fine, and their performance is as expected (the average of the two drives' expected performance from their respective matched RAID 0 arrays), except that high-queue-depth reads/writes can take a hit; this hit is reduced at very large stripe sizes.
    
Rig:
CPU: Intel Core i7 4790k
Motherboard: MSI Z97 GD-65 Gaming
Graphics: 2x MSI GTX 980 Ti Gaming 6G
RAM: G.SKILL Trident X Series 4x8GB (2400MHz CAS 10)
Storage: Western Digital Black (2013); 2x Crucial M550 256GB; 2x Samsung 850 EVO 250GB
Optical Drive: ASUS 24X DVD Burner
Cooling: NZXT Kraken X61; 2x Corsair SP140 LED Red; 2x Corsair AF140 LED Red; NZXT FZ-200mm LED Red; Noctua NF-P14s redux-1500 PWM; Grid+ V2
OS: Microsoft Windows 8.1 Pro with Media Center
Monitors: Acer GN246HL; Acer XB270HU; Asus PB258Q
Keyboard: Rosewill Apollo RK-9100xRBR
Power: EVGA SuperNOVA 1000 G2
Case: NZXT Phantom 530 (Black)
Mouse: Mionix Naos 7000
Mouse Pad: Extended Gaming Mouse Mat
Audio: Objective 2 DAC; Objective 2 Amp; AKG K7XX; Antlion Audio ModMic 4
    
post #2 of 8
That's a lot of data! Nice job!

I'm not entirely sure, but isn't a large stripe size inefficient for large amounts of randomized access? Does the SSD controller have to read the whole block before moving on to the next one?

What I know for certain is that you do not want to under- or over-size your stripes with traditional hard drives. Under-striping makes them more prone to fragmentation, which can kill throughput on large files. Likewise, grossly over-striping can make the read head jump larger distances unnecessarily, which can hurt frequent random file access.
    
Rig:
CPU: i7 970 @ 4.15GHz ~1.39v (HT on, Turbo off)
Motherboard: EVGA X58 3x SLI
Graphics: 2x EVGA GTX 980 SLI (watercooled), 1x EVGA GTX ...
RAM: 6GB OCZ DDR3 Gold Edition 1600 7-7-7-18 @ 1475MHz
Storage: Samsung 840 SSD 250GB; 2x Samsung F3 1TB (RAID 0)
Optical Drive: 22x Super Multi; 8x Blu-ray reader
OS: Windows 7 Ultimate x64
Monitors: 2x Yamakasi Catleap Q270 (2560x1440)
Power: Kingwin 1000W Platinum
Case: HAF 932 (black interior)
Mouse: Logitech G500
Audio: Logitech Z5500
    
post #3 of 8
Thread Starter 
Quote:
Originally Posted by Klue22 View Post

I'm not entirely sure, but isn't a large stripe size inefficient for large amounts of randomized access? Does the SSD controller have to read the whole block before moving on to the next one?

Yeah, in theory a larger stripe size is worse for lots of random access, particularly in conjunction with smaller files, but in practice it makes a minimal difference. It's possible that some SSDs handle this better or worse than others, but the M550 uses a Marvell controller and the 850 EVO uses Samsung's own controller, so these tests cover two quite different SSD designs. Aligning your partition's block size (aka cluster size or allocation unit) with the stripe size of your array can apparently help as well, but I'm not going to test this. If you really want the best random read speeds possible, you should be looking at both stripe size and partition block size, but even together they should not make a very large difference in actual performance with SSDs.
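Here's a rough sketch of why stripe size barely matters for small random reads: an aligned 4KB read always lands inside a single stripe once the stripe is at least 4KB, so exactly one disk services it no matter the stripe size. This is just the generic round-robin RAID 0 layout in Python, not anything Intel-specific I've verified:

```python
def raid0_disk_for_offset(offset_bytes, stripe_kb, n_disks):
    """Which member disk holds the byte at this array offset (standard round-robin RAID 0)."""
    stripe_bytes = stripe_kb * 1024
    return (offset_bytes // stripe_bytes) % n_disks

# An aligned 4KB read fits inside a single stripe at every stripe size tested,
# so only one disk services it -- a bigger stripe can't make it faster.
offset = 40960  # an arbitrary 4KB-aligned offset
for stripe_kb in (4, 8, 16, 32, 64, 128):
    first = raid0_disk_for_offset(offset, stripe_kb, 2)
    last = raid0_disk_for_offset(offset + 4096 - 1, stripe_kb, 2)
    print(stripe_kb, first == last)  # True for every stripe size
```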
    
post #4 of 8
In theory, would a larger stripe size result in faster reads, because the number of "units" on disk needed to read a file is reduced?
    
Rig:
CPU: i7 980X @ 4GHz 1.25v
Motherboard: Rampage III Formula
Graphics: GTX 680
RAM: G.Skill 24GB
Storage: 2x Samsung Pro SSD in RAID 0; 4x WD hard drives in RAID 10
Cooling: Water
OS: Windows 8.1 x64
Power: Corsair CMPSU-850TX
Case: Cooler Master HAF 932 Full Tower
    
post #5 of 8
Thread Starter 
Quote:
Originally Posted by fresh024 View Post

In theory, would a larger stripe size result in faster reads, because the number of "units" on disk needed to read a file is reduced?

Not necessarily; you would see a far more significant impact on CPU usage before you would start to see a noticeable difference in read/write speeds. It also greatly depends on the size of the file(s), the SSDs themselves, how their controllers handle the calls, and many other factors.
    
post #6 of 8
Really love this article, as it also brings up something most people don't want to discuss: namely, that RAID 0 pushes more IOPS at higher queue depths. It is very popular to claim that RAID 0 is just for synthetic benchmarks, but take virtual machines, for example: booting a VM (like VMware Workstation) makes very good use of sequential speed, a potential advantage of RAID 0. OK, so that had nothing to do with IOPS. But back to the point: if you are running multiple VMs and doing some serious work (Visual Studio compiling, databases, etc.) and reaching queue depths beyond 4, then why wouldn't those higher IOPS help you? I am just airing this because it is far too easy to buy into the "RAID 0 is just hype" slogan being pushed on us by the various experts.
Maybe it was not the intention of your article to discuss this topic, but I love the effort! If the trend (as it seems to be) is to add mixed workloads and higher queue depths to testing and benchmarking in reviews, I would only hope they bring in interesting comparisons. Like RAIDs.
Edited by PushT - 12/12/15 at 3:09pm
post #7 of 8
"Well, the larger the stripe size you have in your RAID, the more space that you will waste"
^^^^^ This statement is totally incorrect. Stripe size has absolutely no bearing on space usage (unlike cluster size).
If a stripe isn't fully populated (filled) by a single file, then the next file or files will be placed into that stripe until it is completely filled. In other words, a single stripe can hold multiple files, and stripes are always completely filled before disk writes continue on to the next stripe, which is on the next disk. Your description is more relevant to cluster sizes or allocation units, but is completely inaccurate with regard to stripe sizes. Even if a stripe isn't filled on the first write, it will eventually be used by subsequent writes until it is filled.
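A toy Python sketch of the distinction (this models the generic RAID 0 write pattern described above, nothing measured from the Intel controller): files pack back-to-back across stripes, so striping wastes nothing, while the filesystem's cluster size does round every file up.

```python
def raid0_space_used_kb(file_sizes_kb):
    # RAID 0 packs file data back-to-back across stripes: a stripe can hold
    # pieces of multiple files, so stripe size never pads anything.
    return sum(file_sizes_kb)

def cluster_space_used_kb(file_sizes_kb, cluster_kb):
    # The cluster (allocation unit) size DOES pad: every file is rounded up
    # to a whole number of clusters.
    return sum(-(-size // cluster_kb) * cluster_kb for size in file_sizes_kb)

files_kb = [10, 20, 30]
print(raid0_space_used_kb(files_kb))        # 60 KB, no matter the stripe size
print(cluster_space_used_kb(files_kb, 64))  # 192 KB with 64KB clusters
```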
post #8 of 8
Thank you sooo much for your work!

I wonder what the results would have been with a test that measures read/write block sizes within the range of stripe sizes offered by the controller. As it is now, there are only the 4K and 512K tests, so none of these measurements show the impact of the different stripe sizes on mid-sized accesses, since all stripe sizes used are >=4K and <512K. This way, the 4K benchmark is never striped and the 512K benchmark is always striped.

It would be interesting to see the impact of the different stripe sizes from 4K to 128K when reading and writing 32K pieces, for example.
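The gap I'm describing is easy to sketch in Python (plain ceiling arithmetic, assuming aligned requests): neither of the benchmark's block sizes ever crosses the "request smaller vs. larger than stripe" boundary, whereas a 32K test would.

```python
def stripes_spanned(request_kb, stripe_kb):
    """Stripe units an aligned request of this size touches (ceiling division)."""
    return -(-request_kb // stripe_kb)

for stripe_kb in (4, 8, 16, 32, 64, 128):
    print(stripe_kb,
          stripes_spanned(4, stripe_kb),    # always 1: the 4K test is never striped
          stripes_spanned(512, stripe_kb),  # always >= 4: the 512K test is always striped
          stripes_spanned(32, stripe_kb))   # 8 down to 1: a 32K test would cross the boundary
```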