
· Registered · lowest common denominator · 189 Posts
Money is much better spent on more RAM and PrimoCache, IMO, and then there are no worries about data loss on a drive failure. The much more important stat is low queue depth, but for some reason you changed the units from IOPS to MB/s, so we can't compare that. Either way, with the RAID it's still only 71.94, which is pitiful. RAID usually increases latency as well, which is never a good thing. My single 980 Pro is faster than your 3-drive RAID 0 at low queue depth without using PrimoCache, and 10x+ better with it. Edit: but you already know this from the other thread :)
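(For anyone comparing the screenshots: CrystalDiskMark's random tests use 4 KiB I/Os and count MB/s as 1,000,000 bytes/s, so converting between the MB/s and IOPS columns is simple arithmetic. A quick illustrative sketch in Python, using the 71.94 MB/s figure quoted above:)
Code:
# CrystalDiskMark's random tests use 4 KiB I/Os and define MB/s as 1,000,000 bytes/s,
# so the MB/s and IOPS columns are the same number in different clothes.
BLOCK = 4 * 1024                          # 4 KiB random I/O size

def mbps_to_iops(mbps):
    return mbps * 1_000_000 / BLOCK

def iops_to_mbps(iops):
    return iops * BLOCK / 1_000_000

# The array's 71.94 MB/s Q1T1 random read works out to roughly 17,500 IOPS,
# i.e. about 57 microseconds per 4 KiB read at queue depth 1.
iops = mbps_to_iops(71.94)
print(round(iops), "IOPS,", round(1_000_000 / iops, 1), "us per I/O")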
 


· Registered · 2,081 Posts
But let's be honest. I'm not on this forum because I want practical computing.

Single NVMe
View attachment 2605400

3 very fast (but by no means identical) NVMes in a RAID 0 array
View attachment 2605401

Stripe size is max 128k.
SSDs in RAID 0 are only good for benchmarks. The real world won't see much difference. In fact, depending on usage, your random read performance will drop with RAID 0 due to the extra overhead.

That said, I run my SSDs in RAID 0 for my Steam library, but only because it's an easy way to split the writes evenly between two SSDs and it keeps the file system easier to manage.
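To make the overhead point concrete, here is a minimal sketch of plain RAID 0 striping, assuming a hypothetical 3-drive array with the 128 KiB stripe mentioned above (just the address math, no real controller logic):
Code:
STRIPE = 128 * 1024        # 128 KiB stripe, as mentioned above
DRIVES = 3                 # hypothetical 3-drive RAID 0

def drives_touched(offset, length):
    """Which member drives a single I/O starting at `offset` actually hits."""
    first_stripe = offset // STRIPE
    last_stripe = (offset + length - 1) // STRIPE
    return {stripe % DRIVES for stripe in range(first_stripe, last_stripe + 1)}

# A 4 KiB random read almost always fits inside one stripe, so only one member
# drive services it -- Q1T1 random performance stays at single-drive level.
print(drives_touched(123_456 * 4096, 4096))      # a set with a single drive index

# A 1 MiB sequential read spans eight 128 KiB stripes, so all three members
# work in parallel -- that's where the big sequential numbers come from.
print(drives_touched(0, 1024 * 1024))            # {0, 1, 2}
That is why striping helps large sequential transfers far more than low-queue-depth random I/O.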
 

· Still kinda lost · 3,916 Posts

· Registered · 5,629 Posts
But let's be honest. I'm not on this forum because I want practical computing.

Single NVMe
View attachment 2605400

3 very fast (but by no means identical) NVMes in a RAID 0 array
View attachment 2605401

Stripe size is max 128k.
What SSD is in the first test?
If that's 450 MB/s Q1T1, then it must be an Optane P5800X or it's running from cache. A normal SSD maxes out at just over 100 MB/s Q1T1, like the Samsung 990.
 

· Registered · lowest common denominator · 189 Posts
It's not MB/s, it's latency. As I said above, for some reason they switched units.
 
• Rep+ · Reactions: Nizzen

· Registered · 7 Posts · Discussion Starter · #7
It's not MB/s, it's latency. As I said above, for some reason they switched units.
I am not sure why my first result looks like that. It's sort of impossible to replicate now that the drives are RAIDed. My first screenshot doesn't have a dropdown for units, which is sort of bothering me. I tried to leave everything on default so the runs would be the same. The text files should have all the information.

The first drive is a 2 TB Acer Predator 2280 NVMe.
Code:
------------------------------------------------------------------------------
CrystalDiskMark 8.0.4 Shizuku Edition x86 (C) 2007-2021 hiyohiyo
                                  Crystal Dew World: https://crystalmark.info/
------------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

[Read]
  SEQ    1MiB (Q=  8, T= 1):  7104.573 MB/s [   6775.4 IOPS] <  1179.49 us>
  RND    4KiB (Q= 32, T=16):  4553.659 MB/s [1111733.2 IOPS] <   459.76 us>

[Write]
  SEQ    1MiB (Q=  8, T= 1):  6660.250 MB/s [   6351.7 IOPS] <  1255.83 us>
  RND    4KiB (Q= 32, T=16):  4245.036 MB/s [1036385.7 IOPS] <   492.95 us>

Profile: Peak
   Test: 1 GiB (x5) [C: 20% (380/1907GiB)]
   Mode: [Admin]
   Time: Measure 5 sec / Interval 5 sec
   Date: 2023/03/14 23:20:26
     OS: Windows 11 Professional [10.0 Build 22621] (x64)
The array is composed of the Predator and a brand-new Samsung 980 Pro, along with a gently used WD Black. It is controlled by the CPU.
Code:
------------------------------------------------------------------------------
CrystalDiskMark 8.0.4 Shizuku Edition x64 (C) 2007-2021 hiyohiyo
                                  Crystal Dew World: https://crystalmark.info/
------------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

[Read]
  SEQ    1MiB (Q=  8, T= 1): 19365.184 MB/s [  18468.1 IOPS] <   432.43 us>
  SEQ  128KiB (Q= 32, T= 1): 18480.312 MB/s [ 140993.6 IOPS] <   223.02 us>
  RND    4KiB (Q= 32, T=16):  1355.159 MB/s [ 330849.4 IOPS] <  1546.10 us>
  RND    4KiB (Q=  1, T= 1):    71.940 MB/s [  17563.5 IOPS] <    56.86 us>

[Write]
  SEQ    1MiB (Q=  8, T= 1): 15852.066 MB/s [  15117.7 IOPS] <   528.45 us>
  SEQ  128KiB (Q= 32, T= 1): 14944.341 MB/s [ 114016.3 IOPS] <   279.50 us>
  RND    4KiB (Q= 32, T=16):  1205.821 MB/s [ 294389.9 IOPS] <  1738.20 us>
  RND    4KiB (Q=  1, T= 1):   176.593 MB/s [  43113.5 IOPS] <    23.11 us>

Profile: Default
   Test: 1 GiB (x5) [C: 7% (379/5588GiB)]
   Mode: [Admin]
   Time: Measure 5 sec / Interval 5 sec
   Date: 2023/03/16 0:04:03
     OS: Windows 11 Professional [10.0 Build 22621] (x64)
At Q32T16, the Acer alone turned in over a million random 4 KiB IOPS at under half a millisecond of latency. The array managed less than a third of that at more than 1.5 milliseconds. It looks like the naysayers are vindicated: I sacrificed real-world performance for a synthetic sequential read/write benchmark.
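As a side note, the IOPS and latency columns in those runs are two views of the same measurement: assuming the bracketed value is the mean per-I/O completion time, latency is roughly the number of outstanding I/Os divided by IOPS (Little's law). A quick check in Python against the numbers above:
Code:
# Sanity check on the two runs above: CrystalDiskMark's latency column is
# essentially outstanding I/Os divided by IOPS (Little's law), so IOPS and
# latency tell the same story and the drives can be compared on either one.

def implied_latency_us(iops, queues, threads):
    """Average per-I/O latency implied by an IOPS figure at Q x T outstanding I/Os."""
    return (queues * threads) / iops * 1_000_000

# Single Acer, RND 4KiB Q32T16 read: reported 459.76 us
print(round(implied_latency_us(1_111_733.2, 32, 16), 1))   # ~460.5
# 3-drive RAID 0, RND 4KiB Q32T16 read: reported 1546.10 us
print(round(implied_latency_us(330_849.4, 32, 16), 1))     # ~1547.6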

I am not done messing around with RAID. This configuration is also inviting data loss, since a single drive failure takes the whole array with it. I am going to put RAID 1+0 on a discrete controller; I am expecting read improvements but a drop in write speed. I also want to try dropping my stripe size down to 4K and see how that affects things.
 

· Registered · lowest common denominator · 189 Posts
I messed around with RAIDing NVMe drives when they first came out, around the same time the Ryzen CPUs came out (I think): 960 EVOs, and then again with 970 Pros. Hell, I even did it with the very first gen SATA SSDs, with the same result, across multiple different stripe and block sizes. You can increase the Q8T1 (best-case scenario) MB/s and, to a much lesser degree, the next couple of rows down. But the one that matters most, unless you spend a lot of time transferring incredibly large files from one very fast drive to another very fast drive, is Q1T1, and that doesn't change a bit or sometimes gets worse. In every case, you increase latency. There are loads of comparisons and benchmarks showing this.

If you want speed where it matters most, there's really only one thing you can do besides spending thousands on a true RAID card with onboard cache: get even a little more RAM and spend the 30 bucks for PrimoCache or some similar program. It's real easy to set up. If you use it to speed up writes and turn on the delayed-write option for even more speed, there is a chance you could corrupt a file if the system shuts down before the cache is flushed to disk. Personally I write very little, so I only use it for reads, and I've had no problems and have been really happy with it. Hell, even if you don't get more RAM and just dedicate a small amount like 1 GB, that's still a helluva cache to have.
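PrimoCache itself is closed source, so the following is only a conceptual sketch of what any RAM-level read cache does (a simple LRU block cache in Python), not PrimoCache's actual design:
Code:
from collections import OrderedDict

BLOCK = 4096                      # illustrative cache granularity, not PrimoCache's

class RamReadCache:
    """Toy LRU block cache: the general idea behind a RAM-level read cache
    such as PrimoCache. Hits are served at RAM latency instead of SSD latency."""

    def __init__(self, capacity_bytes, backing_read):
        self.capacity = capacity_bytes // BLOCK   # how many blocks fit in the cache
        self.backing_read = backing_read          # function: block number -> bytes
        self.blocks = OrderedDict()               # block number -> data, in LRU order

    def read(self, block_no):
        if block_no in self.blocks:               # hit: answer from RAM
            self.blocks.move_to_end(block_no)
            return self.blocks[block_no]
        data = self.backing_read(block_no)        # miss: go down to the SSD
        self.blocks[block_no] = data
        if len(self.blocks) > self.capacity:      # evict the least recently used block
            self.blocks.popitem(last=False)
        return data

# Even a modest 1 GiB cache holds 262,144 of these 4 KiB blocks,
# which is why a small dedicated slice of RAM already helps a lot.
cache = RamReadCache(1024**3, lambda n: b"\x00" * BLOCK)   # dummy backing store
print(cache.capacity)                                      # 262144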

Edit: As far as drive failure goes, it's incredibly rare either way. That's not the reason I wouldn't do it. I don't do it because it doesn't do anything for me except, if I wanted to show off, boost half the results of a benchmark. I also don't do it because a standalone drive is a ton easier to move to a new system and read right away.
 