Originally Posted by Unit Igor
I would not be disappointed, because who needs those speeds every day, all the time? Nobody except servers. I am more than satisfied with my 1000 MB/s sequential.
But what I would like to see now from manufacturers is raising the 4K numbers until they hit the SATA wall.
What do you think, parsec: who needs to work harder so we can see higher 4K numbers? Is this a job for the controller or the NAND manufacturers?
4K random read speeds at a queue depth of one (QD=1), meaning one I/O request is sent to the SSD and another is not sent until the SSD completes the first, are slower than other types due to all the overhead/time used (or wasted) by the whole system. That includes the SSD, the file system, and the program sending the I/O requests. Judging by MB/s alone, 4K random read at QD=1 looks like the slowest type of I/O. One 4K random read request equals one IOP (note: one IOP, not one IOPS). IOP means I/O oPeration; IOPS means I/O oPerations per second. AS SSD is of course sending many thousands of these single I/O requests to the SSD, one at a time, which BTW is how IDE works.
These are the IOPS results of the benchmark I posted earlier:
Notice that for the sequential read speed test the SSD did 68.40 IOPS, while for the 4K random/QD=1 read test it did 7029 IOPS. Put another way, for the sequential read test the SSD only had to perform 68.40 IOPS, while for the 4K random/QD=1 read test it had to perform 7029 IOPS. That is over 100 times the number of IOPS, and up to 100 times the overhead/time used.
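The "over 100 times" figure is easy to verify by dividing the two IOPS results (a quick sanity check of my own, not benchmark output):

```python
seq_iops = 68.40    # sequential read test
rand_iops = 7029    # 4K random read, QD=1

# About 102.8, i.e. over 100 times as many operations
print(round(rand_iops / seq_iops, 1))  # 102.8
```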
Now look at the 4K-64Thrd speed and IOPS: 658.65 MB/s and 168615 IOPS. 64Thrd means QD=64, or 64 I/O requests sent to the SSD in one package rather than just one request, courtesy of AHCI functionality (in RAID). In reality a consumer SSD can only handle, and the AHCI driver can only send, 32 I/O requests (QD=32) at once, so two QD=32 packages of requests are sent to the SSD, one after the other.
Given the advantage of 32 I/O requests sent in two groups to the SSD, it is able to perform just under 24 times the number of IOPS. The data rate is also almost exactly 24 times the QD=1 data rate of 27.46 MB/s.
We can see that removing the middleman (the file system/OS) increased the data output by 24 times. Of course the SSD had to perform all those IOPS, but it handles them just fine.
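Both 24x figures can be checked from the numbers above (again my own arithmetic, not AS SSD output):

```python
qd1_iops, qd64_iops = 7029, 168615     # 4K QD=1 vs. 4K-64Thrd IOPS
qd1_mbs, qd64_mbs = 27.46, 658.65      # and the matching data rates

print(round(qd64_iops / qd1_iops, 2))  # 23.99, just under 24x the IOPS
print(round(qd64_mbs / qd1_mbs, 2))    # 23.99, almost exactly 24x the MB/s
```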
If you think about it, I highly doubt that the QD stays at 1 during an OS boot, and the I/O requests are of mixed size, since the files are not all the same size. So IMO 4K random/QD=1 I/O requests do not happen very often, and not at all during an OS boot.
When 4K performance is said to be important, that is true, but QD=1 speeds and very high QD=64 speeds are not what we should be concerned with. I'd like to see what happens at QD=5 or QD=10, along with some accurate data about what the QD actually is during an OS boot, and about the average file size.
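For what it's worth, the QD=5/QD=10 question can be explored on your own machine. Here is a rough sketch of my own (not AS SSD's method): it approximates queue depth with a thread pool, where real benchmarks like fio use native async I/O, and it assumes a POSIX system since it uses os.pread. Point `path` at any large file on the drive you want to test.

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096  # 4 KiB, the I/O size the 4K tests use

def random_reads(path, queue_depth, count=256):
    """Issue `count` random aligned 4 KiB reads using `queue_depth`
    worker threads and return a rough IOPS figure."""
    size = os.path.getsize(path)
    # Random 4 KiB-aligned offsets within the file
    offsets = [random.randrange(0, size - BLOCK) // BLOCK * BLOCK
               for _ in range(count)]
    fd = os.open(path, os.O_RDONLY)
    try:
        def read_at(off):
            return len(os.pread(fd, BLOCK, off))
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=queue_depth) as pool:
            blocks_read = sum(pool.map(read_at, offsets)) // BLOCK
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return blocks_read / elapsed
```

Running it with queue_depth=1 and then queue_depth=10 against a file on the SSD gives a crude picture of how IOPS scale with QD. Keep in mind the OS page cache will inflate repeat runs, which is why real tools bypass buffering.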
Assuming that the slowest part of the I/O equation is file system overhead (or simply a lack of outstanding I/O requests), that would explain two things: why we can't tell the difference in actual use between a slower SSD and a very fast one, and why SSDs in RAID 0 don't seem much, if at all, faster than single SSDs, since the extra speed is not being used.