Hello guys.

I was thinking about how to build storage for my rather old desktop while squeezing the most performance out of the system, and some time ago I finally built it...

These are the components I had:

Asus Rampage Extreme (LGA 775)
Intel Xeon X3360 @ 3400 MHz
4x DDR3-1600

The mainboard still had one PCI-E 2.0 x16 slot free. Before deciding to buy any dedicated storage controller card, I made a few calculations.

Total memory bandwidth: 25.6 GB/s
Maximum FSB bandwidth: 12.8 GB/s
PCI-E 2.0 x16: 8.0 GB/s in one direction
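As a quick sanity check of those numbers (my own rough math, assuming dual-channel DDR3-1600, the FSB running at 400 MHz / 1600 MT/s because of the overclock, and roughly 500 MB/s per PCI-E 2.0 lane after 8b/10b encoding):

# Rough bandwidth check in Python (decimal units, protocol overhead ignored)
DDR3_RATE_MT = 1600        # mega-transfers per second per channel
BUS_WIDTH_B  = 8           # 64-bit bus = 8 bytes per transfer
CHANNELS     = 2

mem_bw  = CHANNELS * DDR3_RATE_MT * BUS_WIDTH_B / 1000   # GB/s
fsb_bw  = 1600 * BUS_WIDTH_B / 1000                       # GB/s, quad-pumped 400 MHz bus
pcie2_lane = 0.5                                          # GB/s per lane after encoding

print(f"Memory:        {mem_bw:.1f} GB/s")                         # 25.6
print(f"FSB:           {fsb_bw:.1f} GB/s")                         # 12.8
print(f"PCI-E 2.0 x16: {16 * pcie2_lane:.1f} GB/s per direction")  # 8.0
print(f"PCI-E 2.0 x8:  {8 * pcie2_lane:.1f} GB/s per direction")   # 4.0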

Roughly half of the total memory bandwidth is out of the CPU's reach (the FSB is the bottleneck), but other devices such as PCI-E cards can use that leftover potential. This is what I chose for the build:

LSI 9211-8i
http://www.lsi.com/products/host-bus-adapters/pages/lsi-sas-9211-8i.aspx

Corsair Force 3 GS
http://www.corsair.com/en/ssd/force-series-gs-ssd/force-series-gs-128gb-sata-3-6gbs-solid-state-hard-drive.html

Why no cache memory on the RAID controller?
a) The SSDs should be fast enough to absorb and process all the requests on their own.
b) Such a RAID controller is cheaper.

Why not a x16 PCI-E card?
a) Limited system resources. Such high speed could put an extremely high load on the memory.
b) Again, x8 is cheaper, and PCI-E 2.0 x8 can still deliver up to 4,000 MB/s in one direction.

Why GS and not GT?
No particular reason; I liked the idea of Toggle NAND.

Why these SSDs and this LSI card?
As far as I know, LSI has acquired SandForce, so the disks should match the RAID controller perfectly.

Results:
http://slayershrine.wz.cz/raids.png

The RAID configuration is always RAID 0 (or a single drive), and no customizations were made to the RAID. When more than 5 drives are used, it is clear there is some sort of limit, most probably the RAID controller itself.
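For anyone who wants to see where that limit kicks in, here is a rough scaling estimate in Python. The ~550 MB/s per drive is my own assumption for a single Force GS; the 2,550 MB/s ceiling is what I measured:

# Rough RAID 0 read scaling estimate (assumed per-drive speed, measured ceiling)
PER_DRIVE_MB_S   = 550    # assumed sequential read of one Force GS
CONTROLLER_LIMIT = 2550   # ceiling observed on the LSI 9211-8i

for drives in range(1, 9):
    estimate = min(drives * PER_DRIVE_MB_S, CONTROLLER_LIMIT)
    print(f"{drives} drives: ~{estimate} MB/s")
# Ideal scaling crosses the ceiling around 2550 / 550 (about 5 drives),
# which matches the flattening seen with more drives.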

Reliability:
For now I am running the whole OS and every game on this RAID. The partition size is about 600 GB. An eSATA HDD is used for system backups. The RAID was built in May 2013. Since then I have tried starting the system with one disk missing, with drives left unpowered, with incorrectly connected drives, and I even removed the RAID controller from the PCI-E slot. The RAID survived it all without any trouble.

All in all, RAID 0 is very risky, so the most important data is stored on another drive.

OS and configuration details:
Win 7 x64
TRIM is most probably not working
Indexing disabled
Swap file disabled
Crash dump disabled

Why so many drives?
I chose this approach because more space across the SSDs means more flash cells to spread the aging over. Also, the limit of the RAID controller (around 2,550 MB/s read) can hide the real aging of the drives. With a single SSD, after some months of writes you can see 500 MB/s reads instead of 550 MB/s. If I use, let's say, 8 drives, I will have a constant 2,550 MB/s read for a very long time...
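The same ceiling is what hides the aging. A small sketch with the figures above (550 MB/s fresh, 500 MB/s after some wear; both are just illustrative numbers):

# How the controller ceiling masks per-drive degradation (illustrative numbers)
CONTROLLER_LIMIT = 2550   # MB/s, observed ceiling of the LSI 9211-8i

def array_read(drives, per_drive_mb_s):
    return min(drives * per_drive_mb_s, CONTROLLER_LIMIT)

print(array_read(1, 550), array_read(1, 500))   # single drive: 550 -> 500, the drop is visible
print(array_read(8, 550), array_read(8, 500))   # 8 drives: 2550 -> 2550, the drop is hidden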

The price:
The RAID controller cost about 250 Euro and the 6 drives cost me about 750, so the whole RAID cost about 1,000 Euro. The 6th drive was not used in the final RAID; I keep it for testing purposes and as a spare.

This is the fastest RAID I have seen on a home desktop so far. It is quite expensive, but still affordable.