
PERC 5/i RAID Card: Tips and Benchmarks - Page 251

post #2501 of 7174
BLinux:
Quote:
You can use the MegaCli tools with the Dell firmware, and MegaCli gives you access to features that the Dell tools do not expose.
Why not just use MegaRAID Storage Manager, since it should run on any OS with a JVM?
post #2502 of 7174
Quote:
Originally Posted by BLinux View Post
just for comparison, i ran the same iozone benchmark on a PERC 6/I with the same set of hard drives.

basically about 50-100MB/sec faster than PERC 5/I across the board on writes. Ironically, the PERC 5/I has 512MB cache + BBU while the PERC 6/I has 256MB cache + BBU. I'm not sure that increasing controller cache does much, though it's a cheap upgrade.
The 6/i is native PCI-E, while the PERC 5/i has a PCI-X to PCI-E bridge.
DDR2-400 (5/i) vs DDR2-667 (6/i).
And maybe SATA NCQ support in the 6/i adds something?



http://www.dell.com/Downloads/Global...0255-Dixit.pdf
Edited by RobiNet - 5/25/09 at 5:31am
post #2503 of 7174
Quote:
Originally Posted by AaronStalin View Post
BLinux:


Why not just use MegaRAID Storage Manager, since it should run on any OS with a JVM?
Well, on my systems, I do everything via command line and I would *never* run Java on this system.
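
To give a concrete idea of what I mean, these are the sort of MegaCli queries I run (just a sketch using the stock LSI MegaCli/MegaCli64 binary; the exact option spelling can vary a bit between versions, so check MegaCli -h on your build):

# MegaCli64 -AdpAllInfo -aAll                 (dump the full adapter configuration)
# MegaCli64 -LDInfo -Lall -aAll               (logical drives, including cache and read-ahead policy)
# MegaCli64 -PDList -aAll                     (physical disks behind the controller)
# MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll    (BBU charge and health)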

Quote:
Originally Posted by RobiNet View Post
The 6/i is native PCI-E, while the PERC 5/i has a PCI-X to PCI-E bridge.
DDR2-400 (5/i) vs DDR2-667 (6/i).
And maybe SATA NCQ support in the 6/i adds something?



http://www.dell.com/Downloads/Global...0255-Dixit.pdf
That's a cool architectural diagram!!

However, when I think about it, with 8x HDD, even if each HDD did 100MB/sec, that would be 800MB/sec, which is still within the roughly 1GB/sec PCI-X limit (64-bit at 133MHz ≈ 1,066MB/sec) unless there are major bus efficiency issues. In my benchmarks I'm not even getting close to 800MB/sec, so I don't think I've actually hit the PCI-X bottleneck yet.

I do think that the DDR2-400 vs DDR2-667 may have something to do with the performance difference since the main performance gain was seen in sequential write speeds.

I don't know about NCQ, since its main advantage is re-ordering commands to be more efficient. In sequential I/O I think NCQ has less effect; it probably benefits random I/O a lot more. I haven't gotten serious about testing random I/O yet...
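
When I do get around to random I/O, the plan is basically the same iozone run with the random read/write test instead of the sequential ones (a sketch; -i 2 is iozone's random read/write test, and a small record size like 16k is more representative of random workloads):

# iozone -b random.xls -s 12g -r 16k -S 2048 -i 0 -i 2 -t 2

(-i 0 is still needed so iozone creates the test files before the random pass.)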

Looking at the diagram, one thing that isn't clear is what I/O processor the PERC 6/i has. Actually, what are the specs of the I/O processors used on both of these cards (clock frequency, data width, etc.)?

And for the hardcore OCN guys... anyone know how to overclock these processors or the memory bus (and use faster memory)???
post #2504 of 7174
BLinux:
Quote:
Well, on my systems, I do everything via command line and I would *never* run Java on this system.
You're right, CLI-based applications are almost always the way to go. But since I don't run Linux on this machine, I'd have to find an open-source equivalent of MegaCli; otherwise there isn't much difference between running MegaCli under Linux emulation and using the JVM-based GUI application, except that the latter would be much slower.
post #2505 of 7174
Quote:
Originally Posted by BLinux View Post
Looking at the diagram, one thing that isn't clear is what I/O processor the PERC 6/i has. Actually, what are the specs of the I/O processors used on both of these cards (clock frequency, data width, etc.)?
PERC 5/i - Intel IOP333 + LSISAS1068 controller IC
PERC 6/i - LSI SAS1078 ROC (Raid-on-Chip)
Edited by RobiNet - 5/25/09 at 2:01pm
post #2506 of 7174
BLinux, why do you think DDR2-400 vs. DDR2-667 is the bottleneck on the PERC 5, when it's running at close to half of its supposed peak bandwidth? Even DDR2-400's 3.2GB/s is well above the 1GB/s bus.

I think it's the IOP and the SAS controller that make the difference; both are newer revisions. Having the RAM integrated on the board might have shortened some data paths as well.
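
For reference, the theoretical peak bandwidth of the cache memory on each card works out to (assuming the standard 64-bit DDR2 data path):

DDR2-400: 400 MT/s x 8 bytes = 3.2 GB/s
DDR2-667: 667 MT/s x 8 bytes ≈ 5.3 GB/s

Both are several times the ~1 GB/s the 5/i's PCI-X side can move, which is why I doubt the cache memory speed alone explains the gap.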
post #2507 of 7174
I believe the PERC 6/i has a 500MHz PowerPC-like processor. I'm not sure about the 5/i's architecture or clock speed.

The block diagram is from the PERC5/6 comparison document from Dell. It is easy to find from Google.

I think most of the difference between the two comes from the processor - as previously stated, the actual speeds on both are well below the bottlenecks mentioned in the specs.
post #2508 of 7174
Quote:
Originally Posted by BLinux View Post
Just an update: a few days ago I posted about my read speeds being slow (100-200MB/sec) even though my write speeds were fast (300-400MB/sec). I finally figured out what it was: I had to tune the read-ahead caching on the logical volume device *in addition* to tuning it similarly on the actual block device, which is set to 32MB.

Here are the results from iozone:



the iozone command was:

iozone -b results.xls -s 12g -r 4m -S 2048 -i 0 -i 1 -t 2

In this test I'm mostly interested in sequential I/O, using 4MB blocks, 2 threads, and writing 12GB files. (The system has 16GB, so 2x12GB=24GB should be plenty to get me outside of filesystem caching effects.)

setup is:
PERC 5/I, 512MB cache + BBU
8x 500GB HDD (WD RE3, 16MB cache) in RAID5
Stripe size=128kb
read ahead = *no read ahead* (NRA) - using the read-ahead of the OS
cache policy = cached
write cache = write-back
disk cache = enabled

OS: CentOS 5.3 64bit
Filesystem: XFS, formatted to match the underlying RAID geometry
Mount options: rw,noatime,logbufs=8,logbsize=256k,nobarrier
Read-Ahead Cache: 32MB
I/O Scheduler: noop (letting the RAID device figure things out)

The read speeds are now up to 525MB/sec, which is *MUCH* better than the 150~200MB/sec I was getting before. Granted, this is not fully tuned yet, but it's at least acceptable performance.
I have similar results; I'm getting 150~200MB/sec, although I am using Vista 64 SP1, a 4x PCI-E slot, and 3x RE2 500GB in RAID 5 with 128k stripe size, adaptive read-ahead and caching enabled.

Can you tell me how you 'tuned' your caching on your logical volume please?
post #2509 of 7174
Quote:
Originally Posted by Villainstone View Post
Hey fellas, I have a pretty simple question... In the BIOS, how should I set up a RAID0 array using SSDs? I mean, should I enable WB, and what about Read Priority? I know I should have a 128 stripe, but what about the three check boxes at the bottom of the advanced settings area? Thanks for the help.
We're not really sure either; pioneer it for us and show your results, please.
What SSDs are you using?
post #2510 of 7174
Quote:
Originally Posted by Jimbwlah View Post
I have similar results; I'm getting 150~200MB/sec, although I am using Vista 64 SP1, a 4x PCI-E slot, and 3x RE2 500GB in RAID 5 with 128k stripe size, adaptive read-ahead and caching enabled.

Can you tell me how you 'tuned' your caching on your logical volume please?
Hi, sorry, I don't know much about Vista. However, you need to put your PERC 5/i in an 8x-lane PCI-E slot, otherwise you are limited to under 1GB/sec. Also, you are using 3x HDD in RAID5, which is *effectively* 2x HDD, and you're getting 150~200MB/sec, so each *effective* HDD is doing about 75~100MB/sec! That is actually very good; whenever you can get a SATA hard drive to push close to 100MB/sec, you are close to the limit of your hardware.

If you want something faster, you need to add more HDD. You may also want to move to an 8x-lane PCI-E slot, though that only matters if you are going to add more HDD.

To answer your question, though in the context of my Linux server: I set the controller's read cache policy to "NRA", aka "No Read Ahead". Linux has its own read-ahead caching mechanism, and doing it twice actually reduces speed by 5~10%. Then, in Linux, I increased the read-ahead cache from the default (256K) to 32MB with this command:

# blockdev --setra 32768 /path/to/your/raid/array/block/device

I've experimented with a few values here: 8MB, 16MB, 32MB, and 64MB. Somehow 32MB seems to yield the best results; 64MB of read-ahead cache actually reduces read throughput in the iozone benchmark. Although I'm happy to have figured this value out, I can't explain *why* it is the best value.
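
For anyone who wants to check or reproduce this, the read-ahead commands look roughly like this (a sketch; note that blockdev --getra/--setra work in 512-byte sectors while the sysfs knob read_ahead_kb is in KiB, so it's worth re-reading the value after setting it to confirm you ended up with the size you intended):

# blockdev --getra /path/to/your/raid/array/block/device
# blockdev --setra 32768 /path/to/your/raid/array/block/device
# cat /sys/block/<device>/queue/read_ahead_kb

And if you're on LVM like I am, remember to set it on the logical volume device (/dev/mapper/...) as well as on the underlying block device, since each has its own read-ahead setting.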