Overclock.net › Forums › Components › Hard Drives & Storage › RAID Controllers and Software › PERC 5/i RAID Card: Tips and Benchmarks

PERC 5/i RAID Card: Tips and Benchmarks - Page 277

post #2761 of 7150
Thread Starter 
Quote:
Originally Posted by the_beast View Post
You might actually be faster using a single 15K disk rather than RAIDing them together. Adding the RAID controller will actually increase your access times slightly, although the sequential benchmarks will shoot up.
Interesting.... I assume this would be due to the additional overhead of the more complex controller? How much more latency would be added, since the controller is running in the MHz range?
Once again... (13 items)
CPU: i7 920 [4.28GHz, HT] | Motherboard: Asus P6T + Broadcom NetXtreme II | Graphics: VisionTek HD5850 [900/1200] + Galaxy GT240 | RAM: 2x4GB G.Skill Ripjaw X [1632 MHz]
Hard Drive: Intel X25-M 160GB + 3xRAID0 500GB 7200.12 | OS: Windows 7 Pro 64 | Monitor: Acer H243H + Samsung 226BW | Keyboard: XARMOR-U9BL
Power: Antec Truepower New 750W | Case: Lian Li PC-V2100 [10x120mm fans] | Mouse: Logitech G9 | Mouse Pad: X-Trac Pro
post #2762 of 7150
I decided to give up on the SAS idea. I searched for the cables and it's too much of an inconvenience. I would practically have to sacrifice 4 ports to use the SAS disks, as I found no cable that provides 2 SAS + 2 SATA or 1+3 ports. But even if they existed, the cost adds up, and the drives from eBay have no warranty. I decided to let it go...

I will upgrade to 1TB drives though (maybe even 1.5TB; the GB/price ratio is the same). I see your point about the Green drives, and although I agree, I don't pay the power bill, so the gain is essentially moot to me. I prefer to squeeze as much performance from the RAID as I can...

But the 7200.12 benchmarks are unimpressive considering the 50% increase in platter size, so I will wait a month or two to see the 500GB-platter 7200RPM offerings from all the companies (especially Western Digital) and make the change then, with both better prices and more options available. I might even go for 3x1.5TB drives, considering the price will be the same and there's even more flexibility for expansion... although maybe three drives total is too few to get good speed in RAID5... so still thinking about it...
post #2763 of 7150
Quote:
Originally Posted by DJZeratul View Post
Running 6.1.1-0147 at the moment. I am looking forward to flashing 6.2.0 sometime in the near future, as I hear there is a significant performance increase. I cannot flash using the MegaRAID tool though, so I have to make a boot CD, and I have been too busy/lazy to do so lately. Maybe this weekend, as it's a 3-day-er.
Thanks man! I have a few more noob questions:

- Where do I get the drivers?

- Can I flash from my current drivers (I'm not at my desktop right now so I don't know which one I'm currently running) to the most recent, or do I have to flash every version between where I'm at now to the most recent?

- Lastly, how do I flash the drivers?

I appreciate all the help!
    
CPU: Intel Core i7 2720QM | Graphics: NVIDIA Quadro 2000M | RAM: 16GB DDR3 1333 | Hard Drive: 80GB mSATA SSD (OS) / 500GB Samsung
Optical Drive: DVD-R | OS: Windows 7 64-bit Professional | Keyboard: Das Keyboard Model S Professional | Case: Lenovo W520
Mouse: Roccat Kone[+] | Mouse Pad: Roccat Sota
post #2764 of 7150
Thread Starter 
Quote:
Originally Posted by ShadowFox19 View Post
Thanks man! I have a few more noob questions:

- Where do I get the drivers?

- Can I flash from my current drivers (I'm not at my desktop right now so I don't know which one I'm currently running) to the most recent, or do I have to flash every version between where I'm at now to the most recent?

- Lastly, how do I flash the drivers?

I appreciate all the help!

You mean firmware.

Directions to flash the firmware are in the OP.
post #2765 of 7150
Quote:
Originally Posted by DuckieHo View Post
Interesting.... I assume this would be due to the additional overhead of the more complex controller? How much more latency would be added, since the controller is running in the MHz range?
I am not sure of the reasons, but I think it may be something to do with the striping of data across 2 drives. When accessing a file on a single disk, only 1 seek is required, but with 2 disks in RAID, unless the data you request is very small, both disks will need to seek to get it. As 2 seeks are involved, this puts up the average access time slightly. The difference can be minimal (a 0.1ms increase), but I have seen increases of 0.5ms or more on 2 drives, which can start to make a difference. The penalty also goes up as you add more drives - you can see the effects here. That test is pretty old, but it was done on an ARC-1220, which was not a slow controller, and the extra disks make a huge difference. No actual single-drive access time is listed, and the drives were short-stroked so the manufacturer's data is useless, but going from 2 to 8 drives in RAID0 takes the access time from 11.1ms to 30.8ms - nearly tripling the wait time.

There may also be an impact as the controller has to determine which of the drives contains the data before it requests it. I am not sure about this, but it would seem to explain the very poor access times as the number of disks in the array goes up.

Conversely, with a really well-designed controller, it should be possible to reduce the access time with RAID1. Because both disks hold identical data, the drive that can perform the quickest seek can be used to retrieve it. I have not actually seen any evidence of this, however, but some controllers may support it (I remember reading an onboard RAID comparison review of the ICH10R and some NVidia chipset that I thought tested this, but I can't find it now).

Regardless of the exact reasons though, the bottom line is that RAID = poorer access times.
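The RAID1 idea above is easy to sketch numerically: since either mirror holds the data, a controller could dispatch each read to whichever head would get there first. A rough Monte Carlo sketch (seek times drawn uniformly at random, a simplifying assumption rather than measured drive behaviour):

```python
import random

def avg_mirror_seek(n_mirrors, trials=100_000, max_seek_ms=15.0):
    """Average read seek time when the controller can pick whichever
    mirror copy seeks fastest (min of independent uniform seek times)."""
    total = 0.0
    for _ in range(trials):
        total += min(random.uniform(0, max_seek_ms) for _ in range(n_mirrors))
    return total / trials

# For uniform(0, 15ms) seeks the single-drive average is ~7.5ms;
# picking the faster of two mirrors should drop that to ~5ms.
for n in (1, 2):
    print(f"{n} copy/copies: {avg_mirror_seek(n):.2f} ms average seek")
```

Under these toy assumptions, reading from the faster of two mirrors cuts the average seek by about a third, which is the effect a controller supporting this would be exploiting.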
post #2766 of 7150
Thread Starter 
Quote:
Originally Posted by the_beast View Post
I am not sure of the reasons, but I think it may be something to do with the striping of data across 2 drives. When accessing a file on a single disk, only 1 seek is required, but with 2 disks in RAID, unless the data you request is very small, both disks will need to seek to get it. As 2 seeks are involved, this puts up the average access time slightly. The difference can be minimal (a 0.1ms increase), but I have seen increases of 0.5ms or more on 2 drives, which can start to make a difference. The penalty also goes up as you add more drives - you can see the effects here. That test is pretty old, but it was done on an ARC-1220, which was not a slow controller, and the extra disks make a huge difference. No actual single-drive access time is listed, and the drives were short-stroked so the manufacturer's data is useless, but going from 2 to 8 drives in RAID0 takes the access time from 11.1ms to 30.8ms - nearly tripling the wait time.
I'm not sure that additive-seek theory holds, though.... The seeks are done in near parallel: HD1 performs a seek while HD2 performs a seek... therefore the seek time is just dependent on the slowest seek of the entire array.

For example... HD1 takes 8ms, HD2 takes 15ms, and HD3 takes 12ms... the total seek time is 15ms. If anything, RAID0 should lower average seek times, since the slowest drive is always going to be slow but the faster drives lower the average time.

Is this reasoning correct?
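One way to sanity-check this reasoning with a quick simulation (uniformly distributed seek times are an illustrative assumption; real seek distributions differ): if every drive in the stripe must finish its seek before the request completes, the array waits on the *maximum* of N independent seek times, and the average of a maximum grows with N rather than shrinking:

```python
import random

def avg_stripe_seek(n_drives, trials=100_000, max_seek_ms=15.0):
    """Average effective seek time for a striped read that touches all
    n_drives: the request waits for the slowest (max) individual seek."""
    total = 0.0
    for _ in range(trials):
        total += max(random.uniform(0, max_seek_ms) for _ in range(n_drives))
    return total / trials

for n in (1, 2, 4, 8):
    print(f"{n} drive(s): {avg_stripe_seek(n):.2f} ms average wait")
```

With these numbers the averages come out near 7.5, 10, 12 and 13.3ms: waiting on the slowest of several independent seeks raises the average even though no individual drive got slower, which is at least consistent with the access-time growth in the benchmark the_beast linked.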
post #2767 of 7150
Quote:
Originally Posted by Blinky7 View Post
I decided to give up on the SAS idea. I searched for the cables and it's too much of an inconvenience. I would practically have to sacrifice 4 ports to use the SAS disks, as I found no cable that provides 2 SAS + 2 SATA or 1+3 ports. But even if they existed, the cost adds up, and the drives from eBay have no warranty. I decided to let it go...
The SAS cables work fine on SATA drives; it's just that you cannot use SATA cables on SAS drives. SAS connectors are great to use with SATA - using 1 connector instead of 2 makes everything nice and secure.

The lack of warranty is a pain on eBay stuff, but I got my four 73GB 10K.2 drives for less than I could have bought a single new 250GB bog-standard SATA drive. Sometimes cheap is worth a punt...

Quote:
Originally Posted by Blinky7 View Post
I see your point about the Green drives, and although I agree, I don't pay the power bill, so the gain is essentially moot to me. I prefer to squeeze as much performance from the RAID as I can...
Dare I ask how/why you don't pay the bill? I am assuming you live with your parents. Not sure about your household, but if I had bought a new toy and suddenly our power bill jumped up, I know who would have got it in the neck. Not to mention that, sooner or later, you will move and have to pay your own bills. How long do you expect your storage array to last you? At least until the 5-year warranty runs out?

Sorry for sounding a little like one of your parents...
post #2768 of 7150
Quote:
Originally Posted by DuckieHo View Post
I'm not sure that additive-seek theory holds, though.... The seeks are done in near parallel: HD1 performs a seek while HD2 performs a seek... therefore the seek time is just dependent on the slowest seek of the entire array.

For example... HD1 takes 8ms, HD2 takes 15ms, and HD3 takes 12ms... the total seek time is 15ms. If anything, RAID0 should lower average seek times, since the slowest drive is always going to be slow but the faster drives lower the average time.

Is this reasoning correct?
I think your reasoning should be correct - I started my post talking about going from 1 drive to 2, thinking that requiring 2 seeks would automatically push up the average time, as you would always be waiting for the longer seek to complete (as you state). I then found the link that showed the massive seek times for the larger arrays, which surprised me somewhat - I knew there was an increase, but never expected it to be that much. I think the controller may have at least some effect - I am not sure where the location of each data 'chunk' (i.e. which disk) is stored, so determining that could be the reason why larger arrays have long access times.
post #2769 of 7150
Quote:
Originally Posted by bigstretch View Post
Looks like I'll have no choice but to purchase an ASUS P5Q-E (as it seems to have what I need)
I have an ASUS P5Q-E and everything works fine with the PERC 5/i installed in the second PCIe slot, with the pin mod on pins 5 & 6.
post #2770 of 7150
Quote:
Originally Posted by zha50 View Post
The array is mainly used for storing video files/anime and disk images. Almost all the files are 200MB+.

Currently running RAID5 with a 256KB stripe, formatted with an 8K block size. Before the array gets full and I have no more free space to shuffle data around, what settings do you recommend for this situation?

I'm mainly after more speed in multiple reads. It's in a file server with a dual-port 1Gb NIC, and I want to be able to pump out close to 200MB/s from 2 simultaneous read operations on the array.

Change to a 512KB stripe and a larger formatting block size?
If you know that you will have only very big files, you can select a bigger stripe and a bigger block size - with the LSI firmware you can select up to a 1024KB stripe size - but you can't go back to the Dell firmware without recreating the array...
Edited by RobiNet - 7/1/09 at 11:04am
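To see why a bigger stripe suits big sequential files, here is a rough sketch of how a logical byte offset maps onto stripe units in a rotating-parity RAID5 layout (the rotation scheme here is illustrative; real controllers differ in the exact layout):

```python
def raid5_locate(offset_bytes, n_drives, stripe_kb):
    """Map a logical byte offset to (row, data_drive, parity_drive) in an
    illustrative rotating-parity RAID5 layout. Real controllers may rotate
    parity differently; this only shows the bookkeeping involved."""
    stripe = stripe_kb * 1024
    chunk = offset_bytes // stripe              # which data stripe unit overall
    row = chunk // (n_drives - 1)               # n-1 data units + 1 parity per row
    parity_drive = (n_drives - 1) - (row % n_drives)   # parity rotates each row
    data_index = chunk % (n_drives - 1)
    # data units skip over whichever drive holds parity in this row
    drive = data_index if data_index < parity_drive else data_index + 1
    return row, drive, parity_drive

# A 200MB file touches far fewer stripe units with a larger stripe,
# meaning fewer head switches per drive on a big sequential read:
for stripe_kb in (256, 1024):
    units = (200 * 1024 * 1024) // (stripe_kb * 1024)
    print(f"{stripe_kb}KB stripe: 200MB spans {units} stripe units")  # 800 vs 200
```

The trade-off is the usual one: fewer, larger per-drive chunks favour big sequential transfers like the video files above, while small random I/O would prefer smaller stripes.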