SSD RAID 0 Stripe Size Differences (Benchmarks) + RAID 0 Mixing Different Drives vs Same Drive (Benchmarks)

post #11 of 15 (permalink) Old 04-01-2018, 04:43 AM
Iconoclast
 
Blameless's Avatar
 
Join Date: Feb 2008
Posts: 30,216
Rep: 3149 (Unique: 1877)
Quote: Originally Posted by roadkill612 View Post
I defer to smarter folks on the detailed logic, but I see no examination of what happens to the data once it's read, e.g.
That's because stripe and cluster size have no bearing on memory word size, or vice versa. The data is all going to the same place and will be used the same way, irrespective of where it's read from or what size chunks the storage system uses. As soon as it's read, there are no more stripes/clusters.

...rightful liberty is unobstructed action according to our will within limits drawn around us by the equal rights of others. I do not add 'within the limits of the law,' because law is often but the tyrant's will, and always so when it violates the right of an individual. -- Thomas Jefferson
Blameless is offline  
post #12 of 15 (permalink) Old 06-22-2018, 06:15 PM
Linux Lobbyist
 
Join Date: Jun 2018
Posts: 1
Rep: 0
Upon a quick review it appears this is a null result. Thank you for your testing, and I am sure you do not want to keep testing haha, anyway, good job! Thanks again.
arrrstin is offline  
post #13 of 15 (permalink) Old 01-27-2019, 07:14 PM
Maximum_Unleashed
 
Laithan's Avatar
 
Join Date: Mar 2015
Location: United States
Posts: 3,872
Rep: 502 (Unique: 267)
I wanted to see if I can breathe some life into this old thread, since the OP is very informative and relevant to my question; it just doesn't quite specifically address it, I don't think. No sense duplicating what is already being discussed here.

I have an LSI 9260-8i and 6 fast SSDs. I want to configure them in RAID 0, and normally I would use a 64KB stripe size for a RAID 0 array, but I wanted to try to understand the relationship to the stripe WIDTH a little better, since in this case I have a large number of drives. Would the decision to use a particular stripe size be influenced by the stripe width? Is there more efficiency in using smaller stripe sizes as the stripe width increases? Having 6 SSDs in RAID 0 is not common (I understand the risks), and discussions on this topic usually target 2-drive configurations.

I am not quite certain if the width "stacks"... for example, with a 64KB stripe and a 6-drive configuration, would this operate effectively as a 384KB stripe, or does it not work like that?
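
To show what I mean, here's the arithmetic I'm assuming (just a quick Python sketch; whether the controller actually behaves this way is exactly what I'm asking):

Code:
# Rough sketch of the "stacking" I'm assuming (RAID 0: every member holds data).
STRIPE_SIZE_KB = 64    # per-drive stripe size set on the controller
NUM_DRIVES = 6

full_stripe_kb = STRIPE_SIZE_KB * NUM_DRIVES
print(f"Full stripe across the array: {full_stripe_kb} KB")   # 384 KB if it really stacks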

Thanks!

GIGABYTE GTX 9xx G1 GAMING BIOS Tweaking
  ̿̿ ̿̿ ̿̿ ̿̿̿'̿'\̵͇̿̿\з=༼ຈل͜ຈ༽=ε/̵͇̿̿/'̿'̿ ̿ ̿̿ ̿̿ ̿̿  
░▒▓│ FORKAY‼ «  » WΘΘT ‼ │▓▒░
Xeon E5-1680 V2 (IVY-E) Inside


Laithan is offline  
post #14 of 15 (permalink) Old 07-11-2020, 01:09 PM
New to Overclock.net
 
Join Date: Jul 2020
Posts: 1
Rep: 1 (Unique: 1)
Stripe Size

The filesystem block size (cluster size for NTFS) is the unit that can cause excess waste for small files.
The RAID stripe size is simply how big each contiguous stripe is on each disk in a RAID 0/5/6 setup (RAID 1 is mirrored, so stripe size is inconsequential), and it should be tuned for the situation.
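
As a rough illustration of that cluster-size waste (a minimal Python sketch; the file and cluster sizes are just examples):

Code:
import math

def allocated_bytes(file_size: int, cluster_size: int) -> int:
    """Space a file actually consumes: it is rounded up to whole clusters."""
    return math.ceil(file_size / cluster_size) * cluster_size

# A 1 KB file on a 64 KB-cluster volume wastes 63 KB of slack space.
# The RAID stripe size plays no part in this.
for cluster in (4 * 1024, 64 * 1024):
    used = allocated_bytes(1024, cluster)
    print(f"cluster={cluster // 1024}KB  allocated={used // 1024}KB  slack={(used - 1024) // 1024}KB")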

Let's look at two examples with MSSQL server, one with spinning disks and one with SSD disks.

Example 1: Five spinning disks in a RAID 5 setup:

You would want a 64k stripe size, which would provide a total stripe width of 256k of data and 64k of parity.
You would also want the cluster size to be 64k.
Assuming you set up your partition alignment correctly, your disks would align at 64k and SQL takes advantage of that, as extents in SQL are 64k.

A single SQL write, which is generally 64k, will perform the following read-modify-write, a penalty of three extra I/Os on top of the write itself (see the sketch below):
Read the existing 64k data block and the 64k parity block for the current stripe.
XOR the old data out of the parity and XOR the new 64k block in.
Write the new 64k data block and the updated 64k parity block.
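
Conceptually, that parity update is just XOR arithmetic; here is a simplified Python sketch of the read-modify-write (ignoring controller caching and the real on-disk layout):

Code:
import os

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equally sized blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

BLOCK = 64 * 1024                                    # 64k stripe on each disk
old_data = os.urandom(BLOCK)                         # current contents of the target block
other_data = [os.urandom(BLOCK) for _ in range(3)]   # the other three data disks
old_parity = old_data
for blk in other_data:
    old_parity = xor_blocks(old_parity, blk)

new_data = os.urandom(BLOCK)                         # the incoming 64k write

# Read old data + old parity, XOR the old data out and the new data in,
# then write new data + new parity (two reads, two writes).
new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)

# Sanity check: the result equals the XOR of the whole new stripe.
check = new_data
for blk in other_data:
    check = xor_blocks(check, blk)
assert new_parity == check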

A single SQL read will be a 64k extent, thus moving the appropriate spinning disk to the proper stripe location and reading the 64k of data. This allows all spinning disks to be used simultaneously for efficient reading, incurring zero penalty.


Example 2: Five SSD disks in a RAID 5 setup:

You would want a 16k stripe size, which would provide a total stripe width of 64k of data and 16k of parity.
You would also want the cluster size to be 64k.
Assuming you set up your partition alignment correctly, your disks would align at 64k and SQL takes advantage of that, as extents in SQL are 64k.

A single SQL write, which is generally 64k, lands on a full stripe and will perform the following, causing only about a 0.2x penalty (16k of parity traffic for every 80k written; see the sketch below):
XOR the four 16k data chunks together to produce the 16k parity block.
Write the 64k of data across four disks (16k to each disk), plus the 16k parity block.
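
As a sketch of that full-stripe write (again simplified Python; the 16k chunk size matches the example):

Code:
import os

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equally sized blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

CHUNK = 16 * 1024                      # 16k stripe on each disk
write_64k = os.urandom(4 * CHUNK)      # one 64k SQL extent

# Split the 64k write into four 16k chunks, one per data disk.
chunks = [write_64k[i * CHUNK:(i + 1) * CHUNK] for i in range(4)]

# Full-stripe write: parity is just the XOR of the four chunks;
# nothing has to be read back first.
parity = chunks[0]
for c in chunks[1:]:
    parity = xor_blocks(parity, c)

# Five 16k writes go out: four data chunks plus one parity chunk.
print(len(chunks) + 1, "writes of", CHUNK // 1024, "KB each")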

A single SQL read will be a 64k extent, which will require four disks to each read 16k of the 64k requested. This would be disastrous for spinning disks, as each and every read would require all the disks to move the heads to the same location, but for an SSD it does not matter. This allows all SSD disks to be used simultaneously for efficient reading, incurring zero penalty.


So the stripe size is a tuning parameter that has to be chosen based on the type of disks and on how much reading vs. writing the workload does.
It does not cause any wasted space when creating files; it only affects how well optimized the reads and writes are.
Brain2000 is offline  
post #15 of 15 (permalink) Old 07-31-2020, 01:54 PM
Maximum_Unleashed
 
Laithan's Avatar
 
Join Date: Mar 2015
Location: United States
Posts: 3,872
Rep: 502 (Unique: 267)
Quote: Originally Posted by Brain2000 View Post
Thank you for this excellent information!

GIGABYTE GTX 9xx G1 GAMING BIOS Tweaking
  ̿̿ ̿̿ ̿̿ ̿̿̿'̿'\̵͇̿̿\з=༼ຈل͜ຈ༽=ε/̵͇̿̿/'̿'̿ ̿ ̿̿ ̿̿ ̿̿  
░▒▓│ FORKAY‼ «  » WΘΘT ‼ │▓▒░
Xeon E5-1680 V2 (IVY-E) Inside


Laithan is offline  