Overclock.net › Forums › Components › Hard Drives & Storage › RAID Controllers and Software › PERC 5/i RAID Card: Tips and Benchmarks

PERC 5/i RAID Card: Tips and Benchmarks - Page 531

post #5301 of 7150
You mean that the cache is retained through a BSOD?
post #5302 of 7150
Quote:
Originally Posted by felix;13670132 
You mean that the cache is retained through a BSOD?

That was my impression, at least... you should only lose the cache when power is completely cut from the card. The BBU provides the backup power needed to keep the cache alive, but it isn't required while the card itself is being powered.

That being said, a BSOD that requires a hard power-off (power off, wait 5 seconds, power back on) would lose the cache. But a flick of the reset switch shouldn't cut power to the card at all, I wouldn't think, at least.

I have no proof to back this up, but it just wouldn't make logical sense for the cache to be flushed on a BSOD without card power being cut.
    
post #5303 of 7150
Maybe Dell's documentation for the PERCs specifies that somewhere... I'll look it up.
post #5304 of 7150
Quote:
Originally Posted by felix;13670260 
Maybe Dell's documentation for the PERCs specifies that somewhere... I'll look it up.

Hardware Installation and Configuration
Dell™ PowerEdge™ Expandable RAID Controller 5/i and 5/E User's Guide


Transferring a TBBU Between Controllers (PERC 5/e)


The TBBU provides uninterrupted power supply to the memory module for up to 72 hours if power supply is unexpectedly interrupted while cached data is still present. If the controller fails as a result of a power failure, you can move the TBBU to a new controller and recover the data. The controller that replaces the failed controller must be devoid of any prior configuration.

Disconnecting the BBU from a PERC 5/i Adapter or a PERC 5/i

Determine whether the dirty cache LED on the controller is illuminated.

* If the LED is illuminated, replace the system cover, reconnect the system to power, turn on the system, and repeat step 1 and step 2. See Figure 3-10.


I think the BBU on the PERC 5/i should also keep data in the cache for a few hours... otherwise, why would it have such a big capacity?


===================

Here is something more about the PERC 6/i. :) It should also hold true for the PERC 5/i. ;)

Battery Management

The Transportable Battery Backup Unit (TBBU) is a cache memory module with an integrated battery pack that enables you to transport the cache module with the battery into a new controller. The TBBU protects the integrity of the cached data on the PERC 6/E adapter by providing backup power during a power outage.

The Battery Backup Unit (BBU) is a battery pack that protects the integrity of the cached data on the PERC 6/i adapter and PERC 6/i Integrated controllers by providing backup power during a power outage.

Battery Warranty Information

The BBU offers an inexpensive way to protect the data in cache memory. The lithium-ion battery provides a way to store more power in a smaller form factor than previous batteries.

The BBU shelf life has been preset to last six months from the time of shipment without power. To prolong battery life:

* Deploy the BBU within six months of the ship date.
* Do not store the BBU above 60 degrees Celsius.
* Disconnect the BBU if the system is going to be turned off (power disconnected) for longer than one week.

The battery may provide up to 72 hours of backup power for a 256-MB controller cache and up to 48 hours for a 512-MB cache when new. Under the one-year limited warranty, Dell warrants that the battery provides at least 24 hours of backup coverage. It is recommended that you replace the battery at the end of the warranty period.
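The PERC 5/i is an LSI MegaRAID design, so if you want to see what the battery and cache are actually doing rather than guessing from the docs, LSI's MegaCli utility should report it (a sketch, assuming MegaCli is installed and sees the card as adapter 0):

```shell
# Battery state: charge level, voltage, and whether the firmware
# currently considers the pack healthy.
MegaCli -AdpBbuCmd -GetBbuStatus -a0

# Per-logical-drive cache policy - look for "Current Cache Policy:
# WriteBack" vs "WriteThrough" in the output.
MegaCli -LDInfo -Lall -a0

# Optionally force write-through on all arrays until the battery is
# fit again, so a power cut can't strand dirty cache:
MegaCli -LDSetProp WT -Lall -a0
```

Dell's OpenManage tools expose the same information, but MegaCli tends to be the quickest way to poke an OEM'd LSI card.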
post #5305 of 7150
Quote:
Originally Posted by DJZeratul;13667204 
I am puzzled as to why you would buy a hardware RAID card to run a software RAID solution. I do like the idea of ZFS and it does seem to be a very robust integrity solution, but it is not built for the speed you could theoretically be getting by creating a RAID5 out of your current disks using the PERC 5/i. ZFS can be set up with a bunch of disks on a standard SATA controller, and actually does not need much more than that to operate.

...

After all is said and done, I would truly recommend using the card's built in RAID5 because it is much faster than a software solution designed to run on a standard SAS/SATA controller card.

...

If you truly want to use ZFS, I would send the PERC back and just get a standard SATA controller card with the amount of ports you need, since it will be a lot cheaper than a PERC 5/i. A standard Dell SAS 5/iR would run you less than half the price and do what you need it to do, and you could probably even pick up a highpoint controller that did the same thing for even less.

ZFS isn't really designed for speed, but it's no slouch. In fact, I'd bet that with the same 8 drives, ZFS would be faster for many operations than a PERC 5/i on most systems.

The real advantage ZFS has is the ability to run drives on multiple controllers - if your bottleneck is your hardware XOR processor, you simply add a second card and spread your drives out, and hey presto, no more bottleneck. Modern CPUs and buses are so fast that trivial things like parity calculations are handled without breaking a sweat, even on low-end systems. The only real reasons to go with hardware RAID are ease of recovery (everything is handled for you in nice GUIs, even when problems start occurring) and the fact that you can use Windows (as its software options are crap).

Basically, if you have a reasonable amount of RAM plus a 64-bit-capable CPU and OS (really required for serious ZFS use), then a software system has the capability to be much faster than a hardware one. Any hardware one...

Regarding the controller advice - it's much better to replace the PERC than to try to use it for something it's not designed for (i.e. passthrough), but I wouldn't get a 5/iR (4 ports, bridge chip wastes power, slow). I'd go for a 6/iR (8 ports, native PCIe), one of the IBM or Intel versions of the card (as they tend to be cheaper), or one of the newer LSI OEM cards (IBM's version again tends to be cheaper on eBay). I wouldn't really recommend HighPoint in general, but some of their cards are decent and they offer a few features that some of the more 'enterprise' systems don't (such as spinning the array down at idle), so they aren't a bad option overall.
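For anyone wondering what "spread your drives out" looks like in practice: ZFS doesn't care which HBA a disk hangs off, so pooling eight drives across two controllers is a one-liner. A hedged sketch (the device names are made up; substitute your own, and `/dev/disk/by-id` paths are safer than `sdX` names):

```shell
# Eight disks - say four per HBA - pooled into a single raidz vdev.
# Parity is computed on the host CPU, not on either card.
zpool create tank raidz \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6 \
    /dev/disk/by-id/ata-DISK7 /dev/disk/by-id/ata-DISK8

# Confirm the layout and health of the new pool:
zpool status tank
```

If the controllers themselves ever become the bottleneck, you add another card and build the next vdev across it; the pool just grows.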
Quote:
Originally Posted by DJZeratul;13667204 
As long as you skip initialization on the last step, you won't lose your data. But you would have to format the disks when converting to ZFS anyway, so you're going to run into a problem here regardless. You'll probably need to find somewhere to put the data in the meantime while you create your new array, whatever it may be.

This.
Quote:
Originally Posted by DJZeratul;13667204 
No, a single disk RAID0 acts exactly like a pass-through.

Not quite - if you replace a failed drive, your software array won't be able to tell that the 'new' drive is meant to be the replacement, since it will have no relation to the one that failed (it will appear to the OS as an unrelated disk). So you would need to manually restart the rebuild, and this adds a good bit of complexity in the event of a future drive failure.
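To illustrate the extra step: with real passthrough, ZFS can resilver onto a swapped disk with little ceremony, but behind single-disk RAID0 volumes you would first recreate the RAID0 on the controller and then explicitly tell ZFS that the new virtual disk replaces the dead one. A sketch with hypothetical device names:

```shell
# The pool reports a failed member (DEGRADED state, one disk FAULTED):
zpool status tank

# After building a fresh single-disk RAID0 on the controller for the
# replacement drive, manually kick off the rebuild onto it:
zpool replace tank /dev/sdd /dev/sde

# Watch the resilver progress:
zpool status tank
```

None of this is hard, but it's a manual procedure you have to remember under pressure, which is the complexity being described above.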
Quote:
Originally Posted by DJZeratul;13670171 
That was my impression, at least... you should only lose the cache when power is completely cut from the card. The BBU provides the backup power needed to keep the cache alive, but it isn't required while the card itself is being powered.

That being said, a BSOD that requires a hard power-off (power off, wait 5 seconds, power back on) would lose the cache. But a flick of the reset switch shouldn't cut power to the card at all, I wouldn't think, at least.

I have no proof to back this up, but it just wouldn't make logical sense for the cache to be flushed on a BSOD without card power being cut.

Cache contents should also survive a cold boot, provided power has not been lost to the motherboard. The cache is maintained using the +5VSB standby rail supplied to the card by the motherboard, even when the system is not running.

I would still really recommend not running without a battery, though - add up the cost of the controller, HDDs and all the other server components and you'll see the BBU is a tiny part of the overall investment, yet it adds another layer of security for your data.
post #5306 of 7150
Thanks for a great reply. :D
Quote:
Originally Posted by DJZeratul;13667204 
I am puzzled as to why you would buy a hardware RAID card to run a software RAID solution. I do like the idea of ZFS and it does seem to be a very robust integrity solution, but it is not built for the speed you could theoretically be getting by creating a RAID5 out of your current disks using the PERC 5/i. ZFS can be set up with a bunch of disks on a standard SATA controller, and actually does not need much more than that to operate.
I'm currently running an ESXi 4.1 whitebox, and until now I have had ZFS handle all my storage. Some time ago I needed more SATA ports, so I was looking for a controller that worked with ESXi and wasn't PCI. I got a recommendation to look at the PERC series, which is PCI-E and can take 8 SATA drives.
I'm running ZFS mainly because, as the_beast stated, it's fast and it has some really nice features (like compression, zfs_crypto, etc.).
Quote:
Originally Posted by DJZeratul;13667204 
No, a single disk RAID0 acts exactly like a pass-through.
I see; the problem is that, as the_beast stated, rebuilding after a drive failure is complicated.
Quote:
Originally Posted by DJZeratul;13667204 
After all is said and done, I would truly recommend using the card's built in RAID5 because it is much faster than a software solution designed to run on a standard SAS/SATA controller card.

If you truly want to use ZFS, I would send the PERC back and just get a standard SATA controller card with the amount of ports you need, since it will be a lot cheaper than a PERC 5/i. A standard Dell SAS 5/iR would run you less than half the price and do what you need it to do, and you could probably even pick up a highpoint controller that did the same thing for even less.

Whatever it is you decide to do, let us know and we can of course be of further assistance smile.gif
Well, I'm weighing between:
1) Making a "one-disk RAID0" on each disk and then letting ZFS do the RAID, skipping the forced write-back.
2) Selling the PERC 5/i and buying a new controller (alternatives at the bottom).
3) Buying a battery, skipping ZFS and doing HW RAID instead.

Suggestions?
Quote:
Originally Posted by the_beast;13672762 
ZFS isn't really designed for speed, but it's no slouch. In fact I'd bet with the same 8 drives ZFS would be faster for many operations than a PERC 5/i on most systems.
…..
Basically if you have a reasonable amount of RAM, a 64-bit capable CPU and OS (really required for serious ZFS use) then a software system has the capability to be much faster than a hardware one. Any hardware one...

My whitebox hardware: AMD X4 @ 2.8 GHz (I will upgrade to an X6 or better when Bulldozer and the new 900-series chipset are released, for IOMMU) and 12 GB of ECC RAM. I don't think this should bottleneck ZFS.

Quote:
Originally Posted by the_beast;13672762 
Regarding the controller advice - much better to replace the PERC than try to use it for something it's not designed for (ie passthrough), but I wouldn't get a 5/iR (4 ports, bridge chip wastes power, slow). I'd go for a 6/iR (8 ports, native PCIe) or one of the IBM or Intel versions of the card (as they tend to be cheaper), or one of the newer LSI OEM cards (IBM's version again tends to be cheaper on eBay). Wouldn't really recommend Highpoint generally, but some of their cards are decent and they offer a few features that some of the more 'enterprise' systems don't (such as array spin down in idle) so they aren't a bad option overall.
I've checked the following cards. They are on the ESXi HCL and should support pass-through:
Dell SAS 6/iR (LSI 1068E)
IBM ServeRAID BR10i (LSI 1068E)
IBM ServeRAID M1015 (LSI SAS2008)

Do you have any other cards to recommend, or should I try to get my hands on one of the Dell SAS 6/iR or the IBM BR10i (they are the cheapest)?

Thanks for helping me out. I'm new to this server storage thing, but I'm eager to learn! :)
post #5307 of 7150
Quote:
Originally Posted by legen;13674323 
snip

Any of those cards will be ideal.

Regarding the CPU - a Sempron wouldn't bottleneck ZFS, let alone a quad...
post #5308 of 7150
Quote:
Originally Posted by the_beast;13674457 
Any of those cards will be ideal.

Regarding the CPU - a Sempron wouldn't bottleneck ZFS, let alone a quad...


Found an M1015 auction on eBay and bought it. I'll let you know when it arrives, and I'll do some tests to see whether I'll go with the PERC 5 or the M1015. ;)
post #5309 of 7150
Quote:
Originally Posted by legen;13674791 
Found an M1015 auction on eBay and bought it. I'll let you know when it arrives, and I'll do some tests to see whether I'll go with the PERC 5 or the M1015. ;)

This one: 180669203370? ;)
post #5310 of 7150
Quote:
Originally Posted by RobiNet;13679799 
This one: 180669203370? ;)
stalker!
    