Originally Posted by ghabhaducha
Nicely done! Thanks for sharing this info, I was curious about it. I do have a question: you mentioned that you are using a Z170, and like you said both SM951's on M.2 are connected via the chipset (PCIe 3.0) to the DMI3. According to Wikipedia, DMI3 has a maximum bandwidth of close to 4GB/s, "...for a total of four lanes and 3.93 GB/s for the CPU–PCH link..." Do you think perhaps that the DMI3 bus is being saturated with your 2x Ultra M.2 (PCIe 3.0 x4 = 32Gbps ~ 4GB/s each) slots running simultaneously? Perhaps 3190MB/s Read is the maximum after overhead on the DMI3 -> dual M.2 w/Intel RAID0. Again, feel free to correct me if my math is wrong somewhere.
I do agree with you though that Intel will upgrade their drivers in the later releases to be more efficient.
Thanks for your comments! So you guys are more impressed with these results than I am. I guess I'm a jaded and spoiled SSD user that expected PCIe SSDs in RAID 0 to perform in the same way that SATA SSDs do. More on that below.
If you look at my picture of the IRST Windows program in my previous post, where the detail for one of the SM951s is shown, it includes this:

PCIe link speed: 4000 MB/s
PCIe link width: x4
I'm not sure what that means. Is that the speed/bandwidth available to one SSD in a chipset M.2 (PCIe 3.0 x4) port? Or is that the total speed/bandwidth available from the Z170 chipset?
Also, is that a theoretical maximum speed, the way SATA III is often quoted as 600MB/s even though its real-world maximum is ~540MB/s?
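As a sanity check on those numbers, here is a back-of-the-envelope calculation in Python. The PCIe 3.0 figures (8 GT/s per lane, 128b/130b encoding, four lanes for the DMI3 link) are standard; the ~19% real-world overhead is only an assumption chosen to line up with the benchmarks discussed here:

# DMI3 / PCIe 3.0 x4 theoretical bandwidth, from the standard figures.
GT_PER_LANE = 8e9     # 8 gigatransfers/s per PCIe 3.0 lane
ENCODING = 128 / 130  # 128b/130b line encoding efficiency
LANES = 4             # DMI3 is electrically a PCIe 3.0 x4 link

link_bytes_per_s = GT_PER_LANE * ENCODING * LANES / 8
print(f"Theoretical link bandwidth: {link_bytes_per_s / 1e9:.2f} GB/s")
# -> ~3.94 GB/s, matching the "4000 MB/s" IRST reports and the
#    Wikipedia figure quoted above.

# Real transfers also lose bandwidth to packet (TLP) headers, flow
# control, and RAID/driver overhead. The 19% figure below is an
# assumption, not a measurement, but it lands right on the ~3190 MB/s
# reads seen in the benchmarks.
print(f"After ~19% assumed overhead: {link_bytes_per_s * 0.81 / 1e6:.0f} MB/s")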
Given what else I've seen from a few users of the Z170 Extreme7+ board (I'm a moderator on ASRock's new support website http://forum.asrock.com/default.asp) who have three AHCI SM951s in RAID 0, the maximum the DMI3 bus is capable of is ~3,200 MB/s real world (after overhead), from a theoretical maximum of 4,000 MB/s, as you said.
This is a benchmark I copied from a thread about the SM951 in the ASRock forum: three 512GB AHCI SM951s in RAID 0, 128K stripe:
The differences between this benchmark and mine are not huge, and show up mostly in the write speed results.
When using RAID 0 with SATA drives it is common to see "scaling" in benchmark results. That is, if one SSD can perform sequential reads at 500 MB/s, then two SSDs in RAID 0 will give 1,000 MB/s, three SSDs will give 1,500 MB/s, etc. Eventually the bandwidth limit of the SATA interface is reached, and scaling disappears. Scaling is not always perfect and does not occur in all areas of performance (4K random in particular), but scaling with PCIe SSDs in RAID 0 on the Z170 DMI3 interface seems to drop off abruptly beyond two SSDs. That also seems to indicate the maximum bandwidth of the DMI3 bus is ~3,200 MB/s.
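To make the scaling argument concrete, here is a minimal model of it. The per-drive speed and the ~3,200 MB/s DMI3 ceiling are assumptions taken from the benchmarks in this thread, not measured constants:

# Ideal sequential RAID 0 throughput behind a single shared DMI3 link:
# it scales with drive count until the link saturates.
def raid0_seq_read(n_drives, per_drive_mbs=2000, dmi3_cap_mbs=3200):
    return min(n_drives * per_drive_mbs, dmi3_cap_mbs)

for n in range(1, 4):
    print(f"{n} x SM951: ~{raid0_seq_read(n)} MB/s")
# 1 x SM951: ~2000 MB/s
# 2 x SM951: ~3200 MB/s  <- already at the ceiling
# 3 x SM951: ~3200 MB/s  <- no further scaling, as the benchmarks show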
I have a feeling that the DMI3 bus will give us better RAID 0 results with SATA III SSDs than we have seen in the past.
So I agree with your thoughts about the DMI3 interface in the Z170 chipset. I haven't found much information to confirm it, and haven't studied the Z170 datasheet yet.
Glad you liked my post, but not surprised at all.
Actually, I've used both 184.108.40.2060 and 220.127.116.119, and I didn't notice any performance difference between them. The Release Notes for 18.104.22.1680 had one bug fix listed, and nothing else.
What I'm looking for is a new Option ROM; I have the 22.214.171.1241 version in my board's UEFI. Intel removed the IRST 126.96.36.1991 driver from their download page, which makes me wonder about that Option ROM.
There's a story about the stripe size, starting with users of the Z170 Extreme7+ board running two or three SM951s in RAID 0. They were complaining about their benchmark results. As you know, the default stripe size for SSDs in RAID 0 is 16K, and that is what both users with three SM951s in RAID 0 used. This is what the same user as the benchmark above got with the 16K stripe:
As he said in a post with that benchmark, the result is worse than using one of his 512GB AHCI SM951s. My Anvil result with one 256GB AHCI SM951 had a better overall score than his Anvil test with three 512GB AHCI SM951s in RAID 0 with a 16K stripe. It was the read speed results that were very low with the 16K stripe. I tried the 16K stripe myself, and got this result:
For some reason, the small stripe sizes with the AHCI SM951s gave poor sequential read speed results in benchmarks. I tried a 64K stripe, and the sequential read speed was better than with a 16K stripe, but less than with a 128K stripe. I never tried a 32K stripe, but I doubt it would break the trend I saw. We can see the same thing with SATA SSDs in RAID 0, but not to the degree seen with the SM951s.
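One possible explanation, sketched below: a smaller stripe slices each sequential request into more per-drive transfers, and each transfer carries some fixed command overhead. The numbers here are purely illustrative (real controllers issue the segments in parallel, so this deliberately oversimplifies), but they reproduce the 16K < 64K < 128K trend:

# Rough model: throughput for one sequential request after adding a
# fixed per-segment command cost. All figures are illustrative only.
def effective_seq_read(req_kb, stripe_kb, raw_mbs=3200, per_io_us=20):
    segments = req_kb / stripe_kb            # stripe chunks per request
    transfer_s = (req_kb / 1024) / raw_mbs   # time spent moving data
    overhead_s = segments * per_io_us / 1e6  # fixed cost per segment
    return (req_kb / 1024) / (transfer_s + overhead_s)

for stripe_kb in (16, 64, 128):
    print(f"{stripe_kb}K stripe: ~{effective_seq_read(1024, stripe_kb):.0f} MB/s")
# 16K stripe: ~628 MB/s
# 64K stripe: ~1581 MB/s
# 128K stripe: ~2116 MB/s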
I'm not saying you are wrong about using a 32K stripe size, just explaining what I learned.
Originally Posted by Nizzen
The best is a single sm951 nvme that does 60MB/s 4k random read qd=1
For OS use, I agree. That is why I'm considering going back to a single AHCI SM951 as the OS drive: