
PERC 5/i RAID Card: Tips and Benchmarks - Page 376

post #3751 of 7150
Quote:
Originally Posted by XZed View Post
As I couldn't find a clear answer on whether the SMBus issue affects all PERC cards or only the PERC 5/i, I preferred to be careful and bought an nVidia-chipset motherboard (Gigabyte GA-73PVM-S2H... also waiting for delivery).
The 6/i does still need the pinmod, just like the 5/i. I don't know whether or not that particular board needs it, but the mod only takes two minutes either way.

Quote:
Originally Posted by XZed View Post
After reading many articles, I decided to opt for RAID 6.

Indeed, my priority is reliability (this is only a home storage server) and I don't care about ultra-high performance (why care? It is bottlenecked by the gigabit network anyway).

So I think RAID 6 is a good choice: any other advice?

Indeed, my big fear is data loss (like everyone, I know...)
...
* Does RAID6 seem to you a good mode for reliability? Or would you advise a RAID mode with a better reliability/disk-space ratio?
RAID6 is the best single-level choice for redundancy, period. No other single-level scheme lets you survive ANY 2 disk failures before the array fails. Large RAID10 arrays can survive more disk failures before the array becomes degraded, but you are reliant on the right disks failing - just 2 failures can still bring down everything. RAID6 gives you the flexibility that any 2 drives can fail without bringing everything down.

Obviously RAID60 offers slightly more protection, but at the cost of more parity drives. It is a waste unless you are running an expander with a lot more disks.
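
To put numbers on the "specific disks" point, here is a quick Python sketch (illustrative only - the 8-disk, 4-mirror-pair layout is my assumption) that enumerates every possible 2-disk failure for RAID10 and RAID6:

Code:
from itertools import combinations

DISKS = 8  # assumed array size for illustration

# Model RAID10 as 4 mirrored pairs: (0,1), (2,3), (4,5), (6,7)
pairs = [(i, i + 1) for i in range(0, DISKS, 2)]

def raid10_survives(failed):
    # RAID10 dies only when both halves of some mirror pair have failed
    return not any(set(pair) <= set(failed) for pair in pairs)

def raid6_survives(failed):
    # RAID6 tolerates ANY two simultaneous failures
    return len(failed) <= 2

failures = list(combinations(range(DISKS), 2))
raid10_fatal = sum(not raid10_survives(f) for f in failures)
raid6_fatal = sum(not raid6_survives(f) for f in failures)

print(f"possible 2-disk failures: {len(failures)}")  # 28
print(f"fatal for RAID10: {raid10_fatal}")           # 4
print(f"fatal for RAID6:  {raid6_fatal}")            # 0

Of the 28 possible double failures, 4 kill the RAID10 (the "wrong" two disks), while the RAID6 survives all 28.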

Quote:
Originally Posted by XZed View Post
And now I'm actually thinking about the next part: choosing the hard drives...

I know that RAID editions are highly recommended, but for budget reasons I was thinking the following:

perhaps the extra security brought by RAID 6 mode will "permit" me to use consumer-grade drives instead of expensive RAID edition drives?
...
* Does it permit me to use cheap SATA drives instead of expensive RAID editions without worrying?
Actually the problem with using consumer drives is not really the failure rates - it is the inability to enable TLER (or its equivalents). The failure rates quoted for enterprise RAID drives are better (on paper at least), but the real problem is that non-RAID-adapted drives can be dropped from an array without actually failing if they go into a deep error-recovery cycle. There is currently no good solution to this problem, since newer WD drives no longer work with the TLER utility, no other vendor offers such a tool (to my knowledge), and no other method exists to set the TLER option persistently.
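
To illustrate why a deep recovery cycle gets a perfectly healthy drive kicked, here is a toy sketch of the timeout race. All timings here are illustrative assumptions, not published controller or drive specs:

Code:
# Toy model: the RAID card only waits so long for an I/O to complete
# before it declares the drive dead. These numbers are assumptions.
CONTROLLER_TIMEOUT_S = 8.0  # assumed patience of the RAID controller

drives = {
    "consumer drive (no TLER)": 90.0,  # assumed: deep recovery can retry for minutes
    "drive with TLER enabled":   7.0,  # TLER caps recovery below the timeout
}

for name, recovery_seconds in drives.items():
    if recovery_seconds > CONTROLLER_TIMEOUT_S:
        verdict = "controller times out -> drive dropped from the array"
    else:
        verdict = "drive gives up in time -> array handles the bad sector"
    print(f"{name}: {recovery_seconds:.0f}s recovery -> {verdict}")

The drive is fine either way; the only difference is whether it answers the controller before the controller loses patience.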

Quote:
Originally Posted by XZed View Post
First of all, I just want to make things clear: RAID ISN'T A BACKUP SOLUTION!!!
Nice to know you realise this. Many don't.

Quote:
Originally Posted by XZed View Post
* After much reading and calculating, I conclude it is better to build an array from many smaller drives than from a few big drives (a bunch of 250/500 GB drives instead of four 1 TB drives), am I right? Also in order to use RAID 6 mode optimally...
Sadly this bit is wrong. More drives = more chances for a single drive in the array to fail. Double the number of drives and you roughly double the chance of getting a failure (at least as closely as makes a difference here, with similar per-drive failure rates and low drive counts).

4 2TB drives in RAID6 (4TB usable) will have a much lower chance of a failure than 8 640GB drives (also ~4TB usable), and the drive cost will be roughly similar. You also save significantly on power and noise.

The downside to using larger drives is that rebuilds take longer, and you increase the chance of hitting an NRE during a rebuild. But that is exactly what RAID6's extra parity drive is designed to protect against.
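
Some rough arithmetic makes both effects visible. This is a sketch using assumed round numbers (a 3% annual failure rate and the commonly quoted 1-in-10^14 bits unrecoverable-read-error spec), not manufacturer data:

Code:
# Rough reliability arithmetic for the 4x2TB vs 8x640GB comparison.
AFR = 0.03           # assumed annual failure rate per drive
URE_PER_BIT = 1e-14  # commonly quoted consumer URE spec

def p_any_drive_fails(n, afr=AFR):
    # Chance at least one of n drives fails in a year (~ n*afr when afr is small)
    return 1 - (1 - afr) ** n

def p_ure_during_rebuild(surviving_drives, tb_per_drive):
    # Chance of at least one unrecoverable read error while reading
    # every surviving drive end-to-end during a rebuild
    bits_read = surviving_drives * tb_per_drive * 1e12 * 8
    return 1 - (1 - URE_PER_BIT) ** bits_read

for n, size_tb in ((4, 2.0), (8, 0.64)):
    usable = (n - 2) * size_tb  # RAID6 gives up 2 drives' worth to parity
    print(f"{n} x {size_tb}TB RAID6: {usable:.2f}TB usable | "
          f"P(a drive fails this year) = {p_any_drive_fails(n):.0%} | "
          f"P(URE during a rebuild) = {p_ure_during_rebuild(n - 1, size_tb):.0%}")

With these assumed rates, the 8-drive array is nearly twice as likely to see a drive failure in a year (~22% vs ~11%), while the bigger drives push the rebuild-URE chance up (~38% vs ~30%) - exactly the trade the second parity drive is there to absorb.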

Using fewer, larger disks also makes a hotspare a more realistic option, as you can still get a decent array capacity despite losing 3 disks to parity+hotspare. A hotspare mitigates the chance of failure during a rebuild, as the rebuild starts straight away rather than the disks sitting there waiting to fail while you source a replacement. Whether it is worthwhile depends on your own array usage.

Quote:
Originally Posted by XZed View Post
* Which drive set do you advise? I've found some people using WD6400AAKS sets with PERC cards... My searches led me to a choice between Spinpoint F1/F3 sets and WD Caviar sets...
Drive choice is really tricky now. It used to be simple - buy WD (Green for storage, Black for OS or apps), enable TLER before building the array, and away you go. However that is no longer an option for drives manufactured in the last few months. If you can find a source of older WDs then this might still be an option (I'm not sure they are actually still making the smaller drives - but that older stock is likely to work best in RAID).

The best choice right now for a storage array is probably the 5-platter Hitachi 2TB disks. Everything I have read points to them being the most reliable for RAID use, as they seem not to suffer from TLER-related drops, although they run hotter, use more power and are louder than the competition. They are quite cheap though.

Hope this helps a little.
Edited by the_beast - 1/3/10 at 3:05am
post #3752 of 7150
Quote:
Originally Posted by DJZeratul View Post
It seems like the flash was unsuccessful... if you keep getting firmware errors as you mentioned before, the card is going to have issues dealing with the arrays you create, and especially present issues when you try to access the card BIOS. At this point, since you have tried flashing both firmwares, I would say it's most likely a fault on the card, probably in the card's flash ROM firmware allocation or something similar...

Also, just a thought... have you tried it without the pin mod? I have a P6T Deluxe and I don't need the pin mod.
Hey, thanks for the reply. I actually have a special BIOS (0006) on my P6T Deluxe V2 that Asus created for the xtremesystems.org people, which doesn't have turbo multiplier throttling. I run 4.2GHz 24/7; I could probably get away with the newest BIOS for the V2 board (or the non-V2, since they are the same except that the V2 doesn't have onboard SAS). I just flashed to the previous version of Dell's firmware. All was good, but after letting it sit in MSM for a while it froze up again. Is it OK to use MSM to manage our PERC 6 cards in Windows 7? I'm using the 5.00.12 version.

When I first plugged the card in (without the pinmod), only 4GB of my 6GB was recognized. I wouldn't be surprised if that has something to do with the BIOS version on my P6T Deluxe V2. What BIOS version are you running on your P6T? I could technically run the same one; it would help me troubleshoot this.

I'm not entirely convinced the controller is bad yet, so I want to be 100% sure before I go through the hassle of returning it and getting another one.
post #3753 of 7150
Quote:
Originally Posted by error10 View Post
How often do you expect your drives to fail? I opted to stuff my card full of SAS drives; I imagine your data center is about the same.
Can you explain your thinking a bit more?

Do you mean that SAS drives are the right ones for RAID? How do they behave with regard to TLER-type features? (After a quick search at Seagate, I indeed only find the "ES.2" models...)

Thank you.
post #3754 of 7150
Sorry for such a question, but even after reading this thread a few times I can't understand the reason for flashing the PERC firmware with the LSI one.

Is it necessary, and if so, why?

Thank you.

P.S.: Does the same apply to the PERC 6/i?
post #3755 of 7150
Can someone point me in the right direction? Where can I get a bracket?

Jon
post #3756 of 7150
Quote:
Originally Posted by XZed View Post
Can you explain your thinking a bit more?

Do you mean that SAS drives are the right ones for RAID? How do they behave with regard to TLER-type features? (After a quick search at Seagate, I indeed only find the "ES.2" models...)

Thank you.
SAS drives are generally much more reliable under heavy load than SATA drives (on paper at least), and are usually designed to run in arrays.

Quote:
Originally Posted by XZed View Post
Sorry for such a question, but even after reading this thread a few times I can't understand the reason for flashing the PERC firmware with the LSI one.

Is it necessary, and if so, why?

Thank you.

P.S.: Does the same apply to the PERC 6/i?
The LSI firmware gives you a few more options (larger stripe sizes etc.) and higher performance on the 5-series cards.

As there is no direct LSI equivalent to the 6-series cards, I would not flash an LSI BIOS to one of them. I have no idea if doing so gives any better performance or options, but I would not try it, as the effect on the card could be fatal.

Just my 2 cents...
post #3757 of 7150
Quote:
Originally Posted by the_beast View Post
The 6/i does still need the pinmod, just like the 5/i. I don't know whether or not that particular board needs it, but the mod only takes two minutes either way.
I'll report back here to say whether it was necessary.


Quote:
Originally Posted by the_beast View Post
RAID6 is the best single-level choice for redundancy, period. No other single-level scheme lets you survive ANY 2 disk failures before the array fails. Large RAID10 arrays can survive more disk failures before the array becomes degraded, but you are reliant on the right disks failing - just 2 failures can still bring down everything. RAID6 gives you the flexibility that any 2 drives can fail without bringing everything down.

Obviously RAID60 offers slightly more protection, but at the cost of more parity drives. It is a waste unless you are running an expander with a lot more disks.
Thanks for confirming my thoughts (RAID6 vs RAID10 vs RAID60)...

I'm still too afraid of striping at any RAID level (referring to your RAID10 scheme)...

So I'll stick with my RAID6 choice.


Quote:
Originally Posted by the_beast View Post
Actually the problem with using consumer drives is not really the failure rates - it is the inability to enable TLER (or its equivalents). The failure rates quoted for enterprise RAID drives are better (on paper at least), but the real problem is that non-RAID-adapted drives can be dropped from an array without actually failing if they go into a deep error-recovery cycle. There is currently no good solution to this problem, since newer WD drives no longer work with the TLER utility, no other vendor offers such a tool (to my knowledge), and no other method exists to set the TLER option persistently.
Indeed, I didn't explain it well, but I was indeed thinking about RAID drops due to missing TLER...

I was thinking, though, that RAID6 would let me be more tolerant in my drive choice thanks to the extra redundancy...

And I thought that, apart from WD, other manufacturers still let you modify this feature.


Quote:
Originally Posted by the_beast View Post
Nice to know you realise this. Many don't.
Indeed, many don't realize this and get nasty surprises...


Quote:
Originally Posted by the_beast View Post
Sadly this bit is wrong. More drives = more chances for a single drive in the array to fail. Double the number of drives and you roughly double the chance of getting a failure (at least as closely as makes a difference here, with similar per-drive failure rates and low drive counts).

4 2TB drives in RAID6 (4TB usable) will have a much lower chance of a failure than 8 640GB drives (also ~4TB usable), and the drive cost will be roughly similar. You also save significantly on power and noise.

The downside to using larger drives is that rebuilds take longer, and you increase the chance of hitting an NRE during a rebuild. But that is exactly what RAID6's extra parity drive is designed to protect against.

Using fewer, larger disks also makes a hotspare a more realistic option, as you can still get a decent array capacity despite losing 3 disks to parity+hotspare. A hotspare mitigates the chance of failure during a rebuild, as the rebuild starts straight away rather than the disks sitting there waiting to fail while you source a replacement. Whether it is worthwhile depends on your own array usage.
I was also thinking "more drives = more chances of failure", but with your explanations I realized what was wrong in my calculations:

I was focused on the disk space "lost" to parity, but wasn't accounting for the smaller usable space you end up with when using smaller drives... Sorry, and thanks...


Quote:
Originally Posted by the_beast View Post
Drive choice is really tricky now. It used to be simple - buy WD (Green for storage, Black for OS or apps), enable TLER before building the array, and away you go. However that is no longer an option for drives manufactured in the last few months. If you can find a source of older WDs then this might still be an option (I'm not sure they are actually still making the smaller drives - but that older stock is likely to work best in RAID).

The best choice right now for a storage array is probably the 5-platter Hitachi 2TB disks. Everything I have read points to them being the most reliable for RAID use, as they seem not to suffer from TLER-related drops, although they run hotter, use more power and are louder than the competition. They are quite cheap though.

Hope this helps a little.
Well, I'm relieved to hear that this choice really is tricky: I was going mad, but now I understand why.

The Hitachi 2TB drives don't suffer from TLER-related problems??? Nice.

Thank you very much for your explanations !

Sincerely,

XZed
post #3758 of 7150
Quote:
Originally Posted by XZed View Post
I'm still too afraid of striping at any RAID level (referring to your RAID10 scheme)...

So I'll stick with my RAID6 choice.
Not to scare you - but RAID6 is a striping level too. It just uses parity as well.


Quote:
Originally Posted by XZed View Post
And I thought that, apart from WD, other manufacturers still let you modify this feature.
I know that you can change CCTL (Samsung's version of TLER) with hdparm under Linux. But the change is lost on a reboot, and it can't be performed on disks sitting behind any hardware RAID controller I know of. That limits its usefulness somewhat, especially for those wanting hardware RAID under Windows. It might be of use to someone considering a software RAID (or RAIDZ) system though, if you can run hdparm on startup.
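
For the software-RAID case, here is a minimal run-on-startup sketch. It is an assumption-heavy illustration: it uses smartctl's SCT Error Recovery Control command (the vendor-neutral cousin of TLER/CCTL, available in newer smartmontools builds) rather than hdparm, and the device names and 7-second cap are placeholders to adapt. Drives or smartctl builds without SCT ERC support will simply reject it, and it will not reach disks hidden behind a hardware RAID card:

Code:
#!/usr/bin/env python3
# Re-apply a 7s error-recovery cap at every boot, since the setting
# is lost on a power cycle. The device list below is an assumption.
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # adjust to your array members

for dev in DRIVES:
    # scterc,70,70 = read and write recovery limits in tenths of a second
    result = subprocess.run(
        ["smartctl", "-l", "scterc,70,70", dev],
        capture_output=True, text=True,
    )
    status = "ok" if result.returncode == 0 else "failed (no SCT ERC support?)"
    print(f"{dev}: {status}")

Hook it into whatever runs at boot (rc.local, a systemd unit, etc.) so the cap is back in place before the array sees real I/O.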

Quote:
Originally Posted by XZed View Post
The Hitachi 2TB drives don't suffer from TLER-related problems??? Nice.
Not necessarily. But the only people I know of who use them in large arrays (and whose opinion I trust) seem satisfied with their performance and recommend them. I do not actually know how Hitachi have implemented TLER (or whatever they call it). However, they seem to be the best bet, and they do not drop out the way non-TLER-enabled WD disks or non-CCTL-enabled F3s tend to. As a result they are likely to be the drives I use to build my next hardware storage array.

Quote:
Originally Posted by XZed View Post
Thank you very much for your explanations !
No problem. Nice to be helpful...
post #3759 of 7150
Quote:
Originally Posted by selece View Post
Your sig says Vista 64-bit, not Windows 7 64-bit.

DJZeratul has the Win7 64-bit OS, but a PERC 6/i.
LMAO, I forgot to change it. But yes, Win 7 and a PERC 5/i here.
post #3760 of 7150
Quote:
Originally Posted by GregOP83 View Post
Hey, thanks for the reply. I actually have a special BIOS (0006) on my P6T Deluxe V2 that Asus created for the xtremesystems.org people, which doesn't have turbo multiplier throttling. I run 4.2GHz 24/7; I could probably get away with the newest BIOS for the V2 board (or the non-V2, since they are the same except that the V2 doesn't have onboard SAS). I just flashed to the previous version of Dell's firmware. All was good, but after letting it sit in MSM for a while it froze up again. Is it OK to use MSM to manage our PERC 6 cards in Windows 7? I'm using the 5.00.12 version.

When I first plugged the card in (without the pinmod), only 4GB of my 6GB was recognized. I wouldn't be surprised if that has something to do with the BIOS version on my P6T Deluxe V2. What BIOS version are you running on your P6T? I could technically run the same one; it would help me troubleshoot this.

I'm not entirely convinced the controller is bad yet, so I want to be 100% sure before I go through the hassle of returning it and getting another one.
I am using a pretty old BIOS revision - rev. 1102 from December 2008.

Using MSM is fine on Win7; I use it all the time and it works like a charm.

As for your custom BIOS, how does it remove the turbo multiplier throttling? My BIOS revision has an option to turn off Turbo Mode entirely (which I do, because going into turbo mode causes a BSOD on my system no matter how many volts I pump into the CPU); does yours do something different from that?
Edited by DJZeratul - 1/3/10 at 12:12pm