
4x 840 Pro 128GB in RAID-0 - Asus Z87 MB - Page 4

post #31 of 42
Hey parsec, as always, great info. I didn't want to be sarcastic, I just wanted to point the person in the right direction. Like I said once, I would install 11.2 on X79 even if that meant 20 repairs of the Windows 7 installation. And we know it is possible to do, because "doorules" proved it.
Why didn't you comment on my post (number 21)?
Edited by Unit Igor - 6/7/13 at 3:21am

post #32 of 42
Quote:
Originally Posted by Unit Igor View Post

Hey parsec, as always, great info. I didn't want to be sarcastic, I just wanted to point the person in the right direction. Like I said once, I would install 11.2 on X79 even if that meant 20 repairs of the Windows 7 installation. And we know it is possible to do, because "doorules" proved it.
Why didn't you comment on my post (number 21)?

Igor, I know you were NOT being sarcastic at all; you help people all the time. I added the "it's not funny" comment because I laughed at it, and did not want that guy to think I was laughing at HIM. Your suggestion to use the IRST 11.7 driver is good advice and will improve his score.

Why did I not comment on post 21? I planned on doing that, but I'm just getting my Z87 build running, and had some problems. I had to install Windows 8 three (four?) times, since I was getting BSODs, and then Windows was corrupted and would not boot, repair, etc. I was using IRST 12.6, and finally changed to 12.5, which is working fine now with a RAID 0 of two Intel 520s. Haswell CPUs are not as easy to OC as SB or IB, and the BIOS/UEFI CPU settings are not working right IMO. Anyway...
post #33 of 42
Speed is subjective; you need to understand what the SSDs are dealing with before focusing on 4K reads.

There are two areas I need to touch on first:
  1. estabya mentioned 8 bits in a byte. However ;-) the transfer is 8b/10b encoded, so it actually takes 10 bits to move 8 bits of data, and the link is 6 Gb/s, not 6.144 Gb/s, which works out to 600 MB/s of bandwidth per channel.
  2. Don't forget there is command overhead on top of the data: the controller has to ask the SSD to do something, and those commands consume some of the available bandwidth of that 6 Gb/s link. Obviously, the smaller the files you are dealing with, the greater the proportion of command traffic as well.

    [edit] Also take into account that there will be latency on the commands, from issue until the data is passed back (offset slightly by command queuing), but nonetheless the more commands it is handling, the greater the degradation in overall speed from lower IOPS.

So, to put all that into context: you have about 2400 MB/s of bandwidth on tap, but with command overhead you are probably looking at 2200-2300 MB/s max.
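If you want to sanity-check that arithmetic, here is a quick back-of-envelope sketch in Python (the command-overhead percentage is just an assumed figure for illustration, not a spec value):

    # Rough SATA III bandwidth math for a 4-drive RAID 0 (illustrative only)
    line_rate_bps = 6.0e9                       # SATA III line rate, 6 Gb/s
    payload_ratio = 8 / 10                      # 8b/10b: 10 bits on the wire per 8 data bits
    per_port_MBps = line_rate_bps * payload_ratio / 8 / 1e6   # 600 MB/s per channel
    drives = 4
    raw_total = per_port_MBps * drives          # 2400 MB/s across the array
    usable = raw_total * 0.93                   # assume a few % lost to command overhead
    print(per_port_MBps, raw_total, usable)     # 600.0  2400.0  ~2232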

Touching on 4K reads: from what I remember, that is a result of the data being too small to span a stripe, so a 4K chunk of data will only ever exist on a single drive.

Example - with 4 drives and a 16K strip per drive (a 64K full stripe), if you write a 64K chunk of data, 16K goes on each drive, and you get the ability to read that 64K back in one nice parallel hit. But if it is a 4K data block, only one drive will contain the needed info, so there is no gain/benefit from having 4 drives on small block sizes.
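A tiny sketch of that address math, assuming a plain RAID 0 layout (the strip size and drive count are just example values):

    # Which drives does one read touch in RAID 0? (illustrative layout, no parity)
    def drives_touched(offset, length, strip, num_drives):
        first = offset // strip
        last = (offset + length - 1) // strip
        return sorted({s % num_drives for s in range(first, last + 1)})

    # 64 KB read over 4 drives with a 16 KB strip per drive: all four drives work in parallel
    print(drives_touched(0, 64 * 1024, 16 * 1024, 4))   # [0, 1, 2, 3]
    # 4 KB read: it fits inside one strip, so only one drive is involved
    print(drives_touched(0, 4 * 1024, 16 * 1024, 4))    # [0]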

However, it is very software dependent because of file sizes. As we know, a lot of Windows OS files are quite small block sizes, but not everything, and the same can be said of all software. Obviously video files and the like benefit due to their large file sizes, and many modern games have large map/texture files, etc.

On a home-user level: I use 2 drives in RAID 0 and cannot see myself going back to 1 drive, as I notice the difference, but I do not think I would perceive 4 drives as being faster. In a server environment it's a different story, especially with databases etc. where you can get quite high queue depths.

Stripe sizes also impact performance, and it can be hard to find the balance that works for you. Small stripes benefit small block sizes, but equally you run into overworking the controller to manage the data. Large stripe sizes (128K seems to be people's usual choice) are best for those large max-size transfers, but carry penalties on your small block transfers, so you may see no gain in performance there, i.e. not much different to a single drive on the small stuff; but then above 32K, up to your 128K stripe size, it starts to scale as more drives are used to store that committed block.
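To make that scaling concrete, a small sketch with assumed numbers (32 KB strip per drive, i.e. a 128 KB full stripe over 4 drives):

    # Drives engaged per transfer for a 32 KB strip per drive across 4 drives (illustrative)
    strip, drives = 32 * 1024, 4
    for size_kb in (4, 16, 32, 64, 96, 128, 256):
        strips_spanned = (size_kb * 1024 + strip - 1) // strip   # ceiling division
        engaged = min(strips_spanned, drives)
        print(f"{size_kb:>3} KB transfer -> {engaged} drive(s)")
    # At or below 32 KB only one drive works; 64-128 KB transfers spread across 2-4 drives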

Hope the above gives food for thought; it's never quite all it's cracked up to be.
Edited by pmmeasures - 6/7/13 at 1:07pm
post #34 of 42
pmmeasures, thanks, great info, one more thing is clear to me now. But forget about RAID for a while and let's focus on single-drive performance and the SATA interface limitation. Would it be possible to make a DDR3-based SSD? (Forget about it being volatile, we're not talking about that now.) Why can DDR3 RAM manage 4K reads so fast, in fact better than 4K writes, while SSD NAND can't? Is that a SATA limitation or a controller limitation? What would happen if we put DDR3 in an SSD, would we then see 520 MB/s at 4K like we see with DDR3 and a RAM disk, or would something break its 4K again?
(Benchmark screenshot: Primo Ramdisk on DDR3 at 2400 MHz)

Edited by Unit Igor - 6/7/13 at 1:55pm
post #35 of 42
It's been my experience that if you want all the focus to be on 4K reads/writes, use a small stripe size; 8K or 16K usually gave me the highest numbers there. As mentioned, your sequentials will suffer, but not to a degree you would notice, I think.
post #36 of 42
Quote:
Originally Posted by Unit Igor View Post

I would not be disappointed, because who needs those speeds every day, all the time? Nobody except servers. I am more than satisfied with my 1000 MB/s sequential.
But what I would like to see now from manufacturers is 4K numbers being raised until they hit the SATA wall.
What do you think, parsec, who needs to work harder so we can see higher 4K? Is this a job for the controller or the NAND manufacturers?

4K random read speeds at a queue depth of one, meaning one I/O request is sent to the SSD and another is not sent until the SSD completes the request, are slower than other types due to all the overhead/time used (or wasted) by the whole system. That includes the SSD, the file system, and the program sending the I/O requests. 4K random read (QD=1) looks like the slowest type of I/O, just looking at MB/s. One 4K random read I/O request is equal to one IOP (note that is one IOP, not IOPs; IOP means I/O oPeration, IOPs means I/O oPerations per second). AS SSD is of course sending many thousands of these single I/O requests to the SSD, one at a time, which BTW is how IDE works.
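A rough way to picture why QD=1 4K reads look so slow: at QD=1 the throughput is simply the block size divided by the total time each request takes. The service time below is an assumed figure for illustration, not a measurement:

    # QD=1: the next request is only issued after the previous one completes (illustrative)
    block = 4096                      # bytes per 4K request
    service_us = 140                  # assumed total time per request (SSD + OS overhead), microseconds
    iops = 1_000_000 / service_us     # ~7140 requests per second
    mib_per_s = iops * block / (1024 * 1024)   # ~27.9 MiB/s
    print(round(iops), round(mib_per_s, 1))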

These are the IOPs results of the benchmark I posted earlier:



Notice that for the sequential read speed test the SSD did 68.40 IOPs, while for the 4K random/QD=1 read test it did 7029 IOPs. Or, put another way, for the sequential read test the SSD only had to perform 68.40 IOPs, while for the 4K random/QD=1 read test it had to do 7029 IOPs. That's over 100 times the number of IOPs, and up to 100 times the overhead/time used.

Now look at the 4K-64Thrd speed and IOPs: 658.65 MB/s and 168615 IOPs. 64Thrd means a QD=64, or 64 I/O requests sent to the SSD in one package rather than just one request, courtesy of AHCI functionality (in RAID). In reality a consumer SSD can only handle, and the AHCI driver can only send, 32 I/O requests (QD=32) at once, so two QD=32 packages of I/O requests are sent to the SSD, one after another.

Given the advantage of 32 I/O requests sent in two groups to the SSD, it is able to perform just under 24 times the number of IOPs. The data rate is also almost exactly 24 times the single IOP/QD=1 data rate of 27.46MB/s.
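For anyone who wants to check those ratios, the 4K IOPs and MB/s figures are tied together by the 4 KiB request size (this assumes AS SSD's MB are really MiB, which is what these numbers suggest):

    # Cross-check the AS SSD figures quoted above: IOPs ~= MiB/s * (1 MiB / 4 KiB) = MiB/s * 256
    for label, mib_s, iops in (("4K QD=1", 27.46, 7029), ("4K-64Thrd", 658.65, 168615)):
        print(label, round(mib_s * 256), iops)      # 7030 vs 7029, 168614 vs 168615
    print(658.65 / 27.46, 168615 / 7029)            # both ~24x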

We can see that removing the middleman (the file system/OS waiting on each single request) caused the data output to increase by 24 times. Of course, the SSD had to perform all those IOPs, but it can handle that just fine.

If you think about it, I highly doubt that during an OS boot the QD stays at 1, and the I/O requests are of mixed size; the files are not all the same size. So IMO 4K random/QD=1 I/O requests do not happen very often, and not at all during an OS boot.

When 4K performance is said to be important, that is true, but QD=1 speeds and very high QD=64 speeds are not what we should be concerned with. I'd like to see what happens at QD=5 or QD=10, as well as some accurate data about what the QD actually is during an OS boot, and the average file size involved.

Assuming that the slowest part of the I/O equation is file system overhead (or simply the lack of high queue depths), that would explain two things: why we can't tell the difference in actual use between a slower and a very fast SSD, and why SSDs in RAID 0 don't seem much, if at all, faster than single SSDs, since the extra speed is not being used.
post #37 of 42
As Unit Igor was discussing drivers, I was wondering if there is any reason, performance or otherwise, to update to the newer IRST?


I'm currently running both RAID 0 and RAID 1, and I'm using IRST 11.1.0.1006, which supports TRIM in RAID 0.

After running this driver for a year I've not had any problems with it, and its performance is also excellent on my RAID 0 SSD array.
I remember there was some bug with IRST on Sandy Bridge before, so I had to use an older driver.
post #38 of 42
Thanks again parsec, I almost hid under the table again when you started throwing numbers, but then I took a deep breath, giving my brain oxygen for all those QDs, IOPs and MBs, and finally read the whole thing, and I hope I understood it all. About QD=5 or QD=10, isn't that something we can test with Anvil?
I mention 4K reads all the time only because it seems to me that they suffer the most, and if they manage to raise them, every other QD will go up too.
But what do you think about my post number 34?
Hey doo, a 4K stripe size is definitely something I will try, and I mean not just for one day, I will give it a month.
REP+ for all of you who like to talk about 4K, IOPs and QD.
post #39 of 42
Do you mean 11.2.0.1006? Because that's the driver I use and I have no plans to update. I tested them all except 12.6 and I went back to 11.2.
post #40 of 42
Quote:
Originally Posted by Unit Igor View Post

Do you mean 11.2.0.1006? Because that's the driver I use and I have no plans to update. I tested them all except 12.6 and I went back to 11.2.

I would guess 11.1.0.1006 and 11.2.0.1006 are essentially the same driver. I think it's a great driver too, and now I know I don't have to bother with newer ones, thank you.