PERC 5/i RAID Card: Tips and Benchmarks - Page 465

post #4641 of 7150
Quote:
Originally Posted by diehardfan View Post
Anyone?

Thanks
Sorry, I didn't notice your post at first.

I would start by reading this. Since you're new to RAID, you should familiarize yourself with the terms and structures. It will answer pretty much everything you need, and the knowledge will be very valuable later for maintaining your array and saving it from (un)certain doom.

I use a 64 kB stripe size (the default, IIRC) on both of mine. If I get some spare time I would like to compare the performance of different stripe sizes on my RAID 5 volume using ext4 vs. JFS vs. XFS.

There is no definitive answer for the 'best' stripe size, as it depends on the operations the array will perform. If you will mostly be archiving large files (such as Blu-ray rips), it might be prudent to use a larger stripe size.

If you plan on frequently writing and reading smaller files, use a smaller stripe size.
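For a rough sense of the geometry (assuming a four-disk RAID 5, purely as an example):

Code:
full-stripe width = (disks - 1) × stripe element
                  = (4 - 1) × 64 kB = 192 kB of data per stripe

A write smaller than a full stripe forces the controller into a parity read-modify-write, while a full-stripe write lets it compute parity in one pass, which is part of why the typical I/O size of your workload matters.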
post #4642 of 7150
I'll set:
Stripe Element Size – 64KB
Read Policy – No Read Ahead
Write Policy – Write Back

Are the above settings fine?
I do have a battery backup unit.

Thanks
post #4643 of 7150
Quote:
Originally Posted by diehardfan View Post
I'll set:
Stripe Element Size – 64KB
Read Policy – No Read Ahead
Write Policy – Write Back

Are the above settings fine?
I do have a battery backup unit.

Thanks
Backup images and archived data tend to mean a lot of large sequential access, though that may depend on how your backup software works. "Imaging" tends to produce just one or two large files.

If you're dealing with a bunch of large files and you're not doing random I/O on them (as you would with databases), you'll want a large stripe size, certainly much larger than 64 kB. However, in my own benchmarking, 1 MB seems to result in lower performance... the ideal for large files seems to be around 256 kB or 512 kB.

Read-ahead policy should be whatever is optimal for your setup. If you're running Linux, Linux has its own read-ahead mechanism that conflicts with the RAID controller's read-ahead, so turning the controller's off works best. On other OSes (Windows), it may be better to set read-ahead to 'on' or 'adaptive' mode.

Write policy is usually faster as 'write-back', BUT if you'll constantly be writing large files that exceed the size of the controller's cache, the cache filling up and flushing can result in slower writes. For some reason, 'write-through' is more efficient at constantly flushing the cache and results in faster writes there. In *most* scenarios, 'write-back' is the better setting.
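As an aside: if the OS-side tools are installed, the current cache policy of a logical drive can be checked without rebooting into the controller BIOS. A hedged example using MegaCLI (binary name and install path vary by package):

Code:
# MegaCli -LDGetProp -Cache -LAll -aALL

This reports the read/write cache policy (e.g. WriteBack, ReadAheadNone, Direct) per logical drive.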
post #4644 of 7150
Quote:
Originally Posted by BLinux View Post
Read-ahead policy should be whatever is optimal for your setup. If you're running Linux, Linux has its own read-ahead mechanism that conflicts with the RAID controller's read-ahead, so turning the controller's off works best. On other OSes (Windows), it may be better to set read-ahead to 'on' or 'adaptive' mode.
I was not aware of that. Are you referring to the IO Scheduler? Have you ever tried changing the IO Scheduler?

I'm running CFQ right now, but I used to run deadline. I haven't had the chance to do any significant testing on it, but supposedly it can make quite a large difference in overall system performance under heavy I/O.

Maybe you're talking about readahead....

Code:
READAHEAD(2)                                             Linux Programmer's Manual                                            READAHEAD(2)

NAME
       readahead - perform file readahead into page cache

SYNOPSIS
       #define _GNU_SOURCE             /* See feature_test_macros(7) */
       #include <fcntl.h>

       ssize_t readahead(int fd, off64_t offset, size_t count);

DESCRIPTION
       readahead() populates the page cache with data from a file so that subsequent reads from that file will not block on disk I/O. The
       fd argument is a file descriptor identifying the file which is to be read. The offset argument specifies the starting point from
       which data is to be read and count specifies the number of bytes to be read. I/O is performed in whole pages, so that offset is
       effectively rounded down to a page boundary and bytes are read up to the next page boundary greater than or equal to
       (offset+count). readahead() does not read beyond the end of the file. readahead() blocks until the specified data has been read.
       The current file offset of the open file referred to by fd is left unchanged.
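For anyone curious, a minimal sketch of calling it directly; the path is just a placeholder:

Code:
#define _GNU_SOURCE             /* for readahead(); see feature_test_macros(7) */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        /* placeholder path; substitute a real file */
        int fd = open("/path/to/file", O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* prefetch the first 1 MiB of the file into the page cache */
        if (readahead(fd, 0, 1 << 20) != 0)
                perror("readahead");
        close(fd);
        return 0;
}

Note that this is per-file prefetching inside one program; the controller-level and block-device read-ahead being discussed here sit below that layer.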
post #4645 of 7150
Quote:
Originally Posted by binormalkilla View Post
I was not aware of that. Are you referring to the IO Scheduler? Have you ever tried changing the IO Scheduler?
No, the I/O scheduler is another issue. Linux implements a block device read-ahead buffer that can be adjusted with the 'blockdev' command, e.g.,

# blockdev --setra 32768 /dev/sda

The I/O scheduler can also be adjusted. In my experience tuning it with various RAID controllers, 'deadline' or 'noop' do better than the 'cfq' scheduler, which is the default on distros like RedHat/CentOS. With the PERC 5 and PERC 6 controllers I got faster results with 'noop'; on HP servers I saw better benchmark results with 'deadline'.
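For reference, the active scheduler can be checked and switched per device at runtime through sysfs, no reboot needed (sda is just an example device; on these older kernels the choices are noop/deadline/cfq):

Code:
# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
# echo noop > /sys/block/sda/queue/scheduler

The bracketed name is the scheduler currently in effect. The setting is per device and resets on reboot unless you set it via the elevator= kernel parameter or a boot script.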
post #4646 of 7150
Quote:
Originally Posted by BLinux View Post
No, the I/O scheduler is another issue. Linux implements a block device read-ahead buffer that can be adjusted with the 'blockdev' command, e.g.,

# blockdev --setra 32768 /dev/sda

The I/O scheduler can also be adjusted. In my experience tuning it with various RAID controllers, 'deadline' or 'noop' do better than the 'cfq' scheduler, which is the default on distros like RedHat/CentOS. With the PERC 5 and PERC 6 controllers I got faster results with 'noop'; on HP servers I saw better benchmark results with 'deadline'.
Interesting. I've never tried noop, only deadline and cfq. I'll have to see how it affects my system when I get the spare time.

I'll need to check my read ahead policy next time I reboot.


Also are you using any sort of management software within Linux, or just by the BIOS? I found one called megactl that should be better than megamgr, since these cards are considered legacy devices and no longer get updates.

I haven't set it up yet, though... school is keeping me busy.
post #4647 of 7150
Quote:
Originally Posted by binormalkilla View Post
I'll need to check my read ahead policy next time I reboot.
The block device read-ahead can be adjusted without rebooting. See the man page for the 'blockdev' command.
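For example (again, sda is just an example device):

Code:
# blockdev --getra /dev/sda
256
# blockdev --setra 32768 /dev/sda

The value is in 512-byte sectors, so the usual default of 256 is 128 kB of read-ahead.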

Quote:
Originally Posted by binormalkilla View Post
Also are you using any sort of management software within Linux, or just by the BIOS? I found one called megactl that should be better than megamgr, since these cards are considered legacy devices and no longer get updates.
I think you might be referring to MegaCLI? That should work fine, but I usually use Dell's OMSA tools, which are easy to install and set up on RedHat/CentOS.
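If you do end up on MegaCLI, a few of the commonly used queries, as a hedged sketch (the binary may be MegaCli or MegaCli64 and typically lives under /opt/MegaRAID/MegaCli/):

Code:
# MegaCli -AdpAllInfo -aALL     # adapter and firmware info
# MegaCli -PDList -aALL         # physical drives and their states
# MegaCli -LDInfo -Lall -aALL   # logical drives, RAID level, cache policy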
post #4648 of 7150
Quote:
Originally Posted by BLinux View Post
Backup images and archived data tend to mean a lot of large sequential access, though that may depend on how your backup software works. "Imaging" tends to produce just one or two large files.

If you're dealing with a bunch of large files and you're not doing random I/O on them (as you would with databases), you'll want a large stripe size, certainly much larger than 64 kB. However, in my own benchmarking, 1 MB seems to result in lower performance... the ideal for large files seems to be around 256 kB or 512 kB.

Read-ahead policy should be whatever is optimal for your setup. If you're running Linux, Linux has its own read-ahead mechanism that conflicts with the RAID controller's read-ahead, so turning the controller's off works best. On other OSes (Windows), it may be better to set read-ahead to 'on' or 'adaptive' mode.

Write policy is usually faster as 'write-back', BUT if you'll constantly be writing large files that exceed the size of the controller's cache, the cache filling up and flushing can result in slower writes. For some reason, 'write-through' is more efficient at constantly flushing the cache and results in faster writes there. In *most* scenarios, 'write-back' is the better setting.
Thank you for that informative write up.
Exactly what I was looking for.
post #4649 of 7150
Today I had the chance to play around with my new PERC 6/i, as I just got my 2 TB Hitachi drives.
I am getting stuck at this POST message and it does not do anything after it; see the attached picture.

I was getting stuck at the same POST message even when I did not have any hard drives plugged in, so I figured it needed drives attached to get past it, but even with the drives plugged in (on the SAS 0 port) it still gets stuck there.
BTW, I am using an Intel DG965WH motherboard. I even tried taping the SMBus pins; still the same.

Help please?



Thanks
post #4650 of 7150
Diehardfan: have you gotten into the configuration utility to see if it sees the physical drives? I don't have that card (I have the PERC 5/i), but it looks like it is saying there are no virtual drives set up, which would be the case if you have nothing plugged in, or drives plugged in that haven't been configured yet. Jump into the utility and see what it shows you. You might have to play with it a little to set up a virtual drive, and then the error should go away, I would think.