PERC 5/i RAID Card: Tips and Benchmarks

post #1 of 7288 (permalink) Old 07-16-2008, 07:30 AM - Thread Starter
Retired Staff
 
Join Date: Nov 2006
Location: NJ
Posts: 65,143
Rep: 4426 (Unique: 2045)

PowerEdge Expandable RAID Controller (PERC) 5/i

 

Manuals: http://support.dell.com/support/edocs/storage/RAID/PERC5/en/UG/HTML/index.htm


Specs:

  • Intel IOP333 processor (11W TDP, 110°C Tj max)
  • 256MB 400MHz ECC registered DDR2 memory (upgradable*)
  • RAID levels 0, 1, 5, 10, and 50
  • PCIe x8
  • 2 (SFF-8484) SAS internal connectors (support for 8 drives)
  • LSI manufactured (and flashable)
  • XP, Vista 32/64 supported
  • Does not support Native Command Queuing
  • Does not support 3TB+ drives

 

* Upgrades require 400MHz ECC registered DIMMs with x8 or x16 DRAM components; installing unsupported memory causes the system to hang at POST. You can identify x8 and x16 modules by chip count:
x8 = 9 chips (1 ECC)
x16 = 5 chips (1 ECC)

 

For the PERC 5/i, you can find the latest drivers/firmware on Dell's website. Look under downloads for the "PowerEdge 2900".

 

Latest firmware version: A09 5.2.2-0072 (as of September 5, 2012)




The awesome thing is that this $400-700 card can regularly be found on eBay for around $100. Since they are so common, there are usually 2-10 for sale at any given time. If you are looking to run RAID5 or to improve your RAID performance, this is the card to get. The next closest card is going to cost $300+ unless you get lucky. Since SAS is backward compatible with SATA, you just need the correct cable: a SAS SFF-8484 to 4x SATA cable runs about $10. Make sure to get a pass-through cable. Do not get crossover or backplane cables: http://www.nowdirect.com/exec/partInfo/part_detail.tsb?prcpart=ADP2247600-R&categoryid=
 


PowerEdge Expandable RAID Controller (PERC) 6/i

 

Manuals: http://support.dell.com/support/edocs/storage/RAID/PERC6/en/UG/HTML/index.htm

PERC 5/i vs 6/i Whitepaper: www.dell.com/Downloads/Global/Power/ps2q08-20080255-Dixit.pdf

Benchmarks: http://en.community.dell.com/techcenter/storage/w/wiki/perc6-with-md1000-and-md1120-performance-analysis-report.aspx

 

Specs:

  • LSI SAS1078 RAID on Chip (ROC) @ 500MHz
  • Operational temperature: 50°C
  • Onboard 256MB of ECC registered 667MHz DDR2 3-5-5-5
  • RAID levels 0, 1, 5, 6, 10, 50, and 60
  • PCIe x8
  • 2 (SFF-8484) SAS internal connectors (support for 8 drives) *Does not support SATA 1.5Gb/s
  • LSI manufactured
  • XP, Vista 32/64 supported
  • Supports Native Command Queuing
  • Does not support 3TB+ drives

 

For the PERC 6/i, you can find the latest drivers/firmware on Dell's website. Look under downloads for the "PowerEdge R900".

 

Latest firmware version: A14 6.3.1-0003 (as of September 5, 2012)



 


SMBus Issue with Intel Chipsets

 

These cards are known to have some compatibility issues with Intel chipsets; however, they are known to work fine with NVIDIA motherboards. The issue stems from the card's System Management Bus (SMBus) conflicting with the motherboard's memory detection. SMBus is a simple two-wire bus that provides the motherboard with basic device information and control. Symptoms of the conflict include improperly reported RAM sizes and POST errors.


The trick is simply to physically disable the SMBus signal. It uses just two pins: B5 (SMCLK, SMBus clock) and B6 (SMDAT, SMBus data). Cover these two pins with tape or nail polish. On the top side of the card, they are the 5th and 6th PCIe pins from the left. You can see the pins covered below:


 


Forced Airflow is Required

 

Intel thermal specifications: http://download.intel.com/design/iio/applnots/30663002.pdf

The maximum Tj is 110°C. However, do NOT run the IOP333 passively; the heatsink needs forced airflow. Intel's thermal analysis used a heatsink of the same size but with more fins (hence more effective), and even then it required a minimum of 200LFM across that heatsink. Over the surface of the PERC's stock heatsink, that works out to at least 4CFM. Realize that if you use an 80mm fan, you will need a higher rating of around 16CFM, and this assumes the 80mm fan sits right next to the HS.
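
For those who want to check my numbers, the conversion is CFM = LFM x flow cross-section (in square feet). This is back-of-the-envelope math assuming the stock heatsink presents roughly a 2.9in² cross-section to the airflow (my estimate; Intel's appnote does not give the PERC heatsink's dimensions):

2.9 in² / 144 ≈ 0.02 ft²
200 LFM x 0.02 ft² ≈ 4 CFM

An 80mm fan spreads its rated flow over its entire ~7.8in² opening and only the slice that actually crosses the heatsink counts, so its rating has to be several times the 4CFM minimum; the ~16CFM figure also allows for the hub dead spot and spillage.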

Bottom Line: Make sure to provide forced-air cooling over the PERC 5/i CPU. The card is designed for Dell servers with forced airflow.
 


How to Flash the Dell PERC 5/i with LSI MegaRAID SAS 8480E Firmware

 

Note: Since the card uses a Dell bootloader, flashing LSI firmware can corrupt the card, as described in Post #1343 (link). It is recommended that you use the Dell firmware unless it does not work for you.

The LSI version of this card is kept more up to date with bug fixes. Also, the LSI version has more software features, so you might as well get the better model.

 

The Windows version is the easiest to use so I'll provide those steps:
1) Download and extract the LSI flashing utility "MegaCLI - Windows"
2) Download and extract the 7.0.1-0083 firmware
3) Place the .ROM file in the MegaCLI folder
4) Open a command prompt window
5) Navigate to the MegaCLI directory
6) Run the command: MegaCli -adpfwflash -f [firmware name].rom -a0
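
After flashing and rebooting, it's worth sanity-checking that the new image took. One example (assuming the PERC is the first/only adapter, hence -a0) is to dump the adapter info and check the reported firmware version:

MegaCli -adpallinfo -a0

Look for the "FW Package Build" line near the top of the output.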


Latest LSI firmware: 7.0.1-0083 (as of September 5, 2012):
http://www.lsi.com/downloads/Public/Obsolete/Obsolete%20Common%20Files/7.0.1-0083_SAS_FW_Image_APP-1.12.330-1300.zip

MegaCLI v8.04.07 (as of September 5, 2012):

http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/8.04.07_MegaCLI.zip

 

Latest Windows driver: 5.2.124 (as of September 5, 2012):
http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/5.2.124_Signed_Windows7_Driver.zip


MegaRAID Storage Manager for Windows v12.05.03.00 (as of October 23, 2012):
www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/12.08.03.03_Windows_MSM.zip
 

*The LSI firmware 7.0.1-0068 may prevent the card from being recognized on some systems. Reverting to 7.0.1-0051 or (supposedly) 7.0.1-0056 should resolve the issue.

 

Stuff for Linux: https://www.overclock.net/t/359025/perc-5-i-raid-card-tips-and-benchmarks/6000#post_18128110
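
If you run the card under Linux, LSI's MegaCli64 binary takes the same switches as the Windows version. A couple of examples (assuming the RPM installed to /opt/MegaRAID/MegaCli, its usual default):

/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -LAll -a0 (list the logical drives)
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -a0 (list the attached physical drives)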


Adapter to Convert Loop-Mounts into Screw-Mounts

 

The IOP heatsink is mounted via a spring-loop mount. This type of mount was used on older chipsets and is not common today. In addition, those older chipsets used 3" loop spacing while this card uses 2.5" spacing. Therefore, it is almost impossible to find a low-profile heatsink that will fit on this card.

However, there is this $5 kit that will convert the loops into screws to increase chipset HSF compatibility: http://www.epowerhousepc.com/microcool-hook-adapter-northpole-p-101.html


 


How to Disable Battery Backup Unit (BBU) Warning

 

Note: This option is no longer available in the latest firmware. To get maximum performance, you will need a Battery Backup Unit.

The card will display the following message if a battery unit is missing or not working. The battery keeps the card's cache RAM alive through a system power loss so the cached data can still be written out, preventing data loss/corruption.

"The battery hardware is missing or malfunctioning, or the battery is unplugged. If you continue to boot the system, the battery-backed cache will not function. Please contact technical support for assistance. Press 'D' to disable this warning (if your controller does not have a battery)."

Just press D and the message should not appear again. If it does, update your firmware.
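
If you do have a battery installed and still get the warning, you can ask the card what it thinks of the battery with the same MegaCLI utility from the flashing section (again, -a0 assumes the first adapter):

MegaCli -adpbbucmd -getbbustatus -a0

The output includes the charge state and flags such as whether a learn cycle is active or a replacement is needed.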
 


More Benchmarks of PERC 5/i


Courtesy of Clubhouse: https://www.overclock.net/6091504-post2266.html


 


Benchmarks of PERC 5/i vs ICH9R

 

Test System:

  • [email protected]
  • 4GB DDR2-1095
  • Vista 64 (Defrag and SuperFetch Disabled)
  • OS HDD: Seagate 7200.10
  • Test HDD: 3x WD Raptor 8MB WD740GD-00FLC0
  • HDTune using 64KB Sectors


Terminology:

Read Ahead: the controller tries to predict which data will be needed next and preloads it into cache memory. This boosts sequential performance but hurts random performance.


Adaptive Read Ahead: the controller only reads ahead when recent accesses look sequential and memory/I/O are available. This balances sequential and random read performance but takes processing power.


Write Through: writes are acknowledged only after the data has actually been written to the HDs. Safer, but slower than caching the write.


Write Back: writes are acknowledged as soon as they land in the controller's cache and can be served from the faster RAM; data is written to the HDs when optimal. Performance increases, but the data is at risk while it sits in volatile memory: if power is lost, anything still in RAM is lost and never written to disk.


Degraded: RAID5 can operate with one HD missing. Performance is heavily impacted because the missing data must be reconstructed from parity.
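
As an aside, all of these policies can be flipped from within the OS using MegaCLI instead of rebooting into the card BIOS. A quick sketch (assuming logical drive 0 on the first adapter):

MegaCli -LDSetProp WB -L0 -a0 (Write Back)
MegaCli -LDSetProp WT -L0 -a0 (Write Through)
MegaCli -LDSetProp ADRA -L0 -a0 (Adaptive Read Ahead)
MegaCli -LDSetProp RA -L0 -a0 (Read Ahead)
MegaCli -LDSetProp NORA -L0 -a0 (No Read Ahead)
MegaCli -LDGetProp -Cache -L0 -a0 (show the current cache policy)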

 

Generalized Results:
RAID5 write speeds went from 55MB/s to 121MB/s.
RAID5 read speeds remain about the same but much more consistent.

RAID0 write speeds went from 165MB/s to 180MB/s but are less consistent.
RAID0 read speeds went from 162MB/s to 182MB/s and are MUCH more consistent.

CPU usage was greatly reduced and access time generally only slightly improved.

Please remember these are artificial tests that focus more on sequential performance.

 

Single Raptor
 

 

RAID5
 
ICH9R RAID5 w/ Caching [64KB]



ICH9R RAID5 w/o Caching [64KB]


ICH9R RAID5 Degraded w/ Caching [64KB] *Occasional BSoD*


PERC RAID5 (Write-Back, Adaptive-Read-Ahead) [64KB]


PERC RAID5 (Write-Back, Read-Ahead) [64KB]


PERC RAID5 (Adaptive-Read-Ahead) [64KB]


PERC RAID5 (Write-Back) [64KB]


PERC RAID5 Degraded (Write-Back, Adaptive-Read-Ahead) [64KB]

 

RAID0
 

ICH9R RAID0 w/ Caching [128KB]


ICH9R RAID0 w/o Caching [128KB]


PERC RAID0 (Write-Back, Adaptive-Read-Ahead) [128KB]


PERC RAID0 (Write-Back, Adaptive-Read-Ahead) [1MB]


PERC RAID0 (Write-Back, Adaptive-Read-Ahead) [8KB]
The poor performance is probably due to the 64KB block size used in the test: with an 8KB stripe, the controller had to access 8 stripes to retrieve each block.


PERC RAID0 [128KB]

 


JustusIV Benchmarks of PERC 5/i

 

4x 1TB Samsung Spinpoint F1

RAID0


RAID5 With No Read Ahead


RAID5 With Adaptive Read Ahead


RAID5 With Read Ahead


RAID5 With Write Through


RAID5 With Write Back

 


Extra Cooling on My Card

 

CPU Heatsink removed:


I used a copper RAMsink on the RAID processor and a Thermaltake Spirit HS for the ECC RAM (courtesy of s1rrah). The card does get quite hot and I am thinking about replacing the CPU's HS.


Stock heatsink, routed, and cut-up pieces of copper from a soldering tip.


Prepping an old laptop heatpipe/heatsink (notice the copper bits underneath as gap fillers)


Arctic Silver Adhesive to hold it together.


Ghettofied.


To answer most of your questions: (1) a fridge cannot cool a PC (2) 64-bit OS for over 3.4GB (3) If a PCIe card fits, it should work (4) Resolution, not screen size (5) Report, not respond to Spam (6) Single-Rail/Non-Modular PSUs are not always better than Multi-Rail/Modular (7) Sequential does not matter as much as random for OS drives (8) Requirements come before hardware for servers

DuckieHo is offline  
post #2 of 7288 (permalink) Old 07-16-2008, 07:38 AM
New to Overclock.net
 
airbozo's Avatar
 
Join Date: Dec 2007
Location: Santa Cruz Mountains
Posts: 2,314
Rep: 150 (Unique: 121)
That is a great deal!

I have used that card in a couple of customer builds and it is the same logic built into the high-end Dell workstations.

FYI: On Supermicro mobos (and possibly others), when doing a large data transfer (500GB+) across multiple drives, that card will get errors unless it is plugged into an x16 slot (same with the LSI cards). Still trying to find out why from Dell and LSI...

"Remember, there is a big difference between kneeling down and bending over..."...Frank Zappa...

airbozo is offline  
post #3 of 7288 (permalink) Old 07-16-2008, 07:43 AM - Thread Starter
Retired Staff
 
Join Date: Nov 2006
Location: NJ
Posts: 65,143
Rep: 4426 (Unique: 2045)
Quote:
Originally Posted by airbozo View Post
That is a great deal!

I have used that card in a couple of customer builds and it is the same logic built into the high-end Dell workstations.

FYI: On Supermicro mobos (and possibly others), when doing a large data transfer (500GB+) across multiple drives, that card will get errors unless it is plugged into an x16 slot (same with the LSI cards). Still trying to find out why from Dell and LSI...
Do these run hot? I know Intel's new dual-core IOPs (found on Adaptec's cards) run VERY hot at 80°C+. Aftermarket cooling is recommended on those cards since they are known to thermally shut down. I've seen someone watercool his RAID controller b/c of this... seriously.


DuckieHo is offline  
post #4 of 7288 (permalink) Old 07-19-2008, 12:45 AM - Thread Starter
Retired Staff
 
Join Date: Nov 2006
Location: NJ
Posts: 65,143
Rep: 4426 (Unique: 2045)
Got my card! Updated pics.

Benchies coming after I get my cables.


DuckieHo is offline  
post #5 of 7288 (permalink) Old 07-19-2008, 12:52 AM
New to Overclock.net
 
Join Date: Dec 2006
Location: Schaumburg, Illinois
Posts: 2,463
Rep: 185 (Unique: 134)
This looks pretty sweet. I think I'm going to get one for my home server. The guy on eBay is about 2 miles from where I live. Thanks Duckie!

Edit: Do you think you could take some shots of the ports and such?


*Completed Projects: ChemX480 , Redux*
kennymester is offline  
post #6 of 7288 (permalink) Old 07-19-2008, 01:21 AM - Thread Starter
Retired Staff
 
Join Date: Nov 2006
Location: NJ
Posts: 65,143
Rep: 4426 (Unique: 2045)
Quote:
Originally Posted by kennymester View Post
This looks pretty sweet. I think I'm going to get one for my home server. The guy on eBay is about 2 miles from where I live. Thanks Duckie!

Edit: Do you think you could take some shots of the ports and such?

In most cases, you don't need one of these for a home server; it would be overkill.


The card has two SFF-8484 ports (aka 4x SAS). You can use these cables to support 4 SATA devices per port: http://www.cross-mark.com/50cm-seria...ble-p-842.html



DuckieHo is offline  
post #7 of 7288 (permalink) Old 07-20-2008, 01:12 AM - Thread Starter
Retired Staff
 
Join Date: Nov 2006
Location: NJ
Posts: 65,143
Rep: 4426 (Unique: 2045)
Updated directions to flash it to an LSI MegaRAID SAS 8480E.


DuckieHo is offline  
post #8 of 7288 (permalink) Old 07-20-2008, 01:16 AM
Retired Staff
 
Mootsfox's Avatar
 
Join Date: Aug 2006
Location: Columbus, Ohio
Posts: 19,075
Rep: 693 (Unique: 434)
Any idea of improvement over onboard RAID setups?


Certs:
A+
ACMT
Dell ASP; Desktop, Laptop, PowerConnect, PowerEdge, PowerVault


Mootsfox is offline  
post #9 of 7288 (permalink) Old 07-20-2008, 07:25 AM - Thread Starter
Retired Staff
 
Join Date: Nov 2006
Location: NJ
Posts: 65,143
Rep: 4426 (Unique: 2045)
Quote:
Originally Posted by Mootsfox View Post
Any idea of improvement over onboard RAID setups?
With RAID5 or large arrays, the performance gain is massive... I am waiting for my cables before I can start benching RAID0.


DuckieHo is offline  
post #10 of 7288 (permalink) Old 07-20-2008, 01:17 PM
Retired Staff
 
Join Date: Mar 2008
Posts: 11,949
Rep: 726 (Unique: 481)
Dude this is awesome, other cards with an IOP333 are extremely expensive, ESPECIALLY the SAS ones!!!!

Any idea on the max amount of memory it supports? It would easily be worth spending the money on a 2GB stick and enabling write-back cache.

Manyak is offline  