1 - 20 of 7293 Posts

Premium Member · 65,162 Posts · Discussion Starter · #1
PowerEdge Expandable RAID Controller (PERC) 5/i

Manuals: http://support.dell.com/support/edocs/storage/RAID/PERC5/en/UG/HTML/index.htm

Specs:

  • Intel IOP333 processor (11W TDP, 110°C Tj max)
  • 256MB 400MHz ECC registered DDR2 memory (upgradable*)
  • RAID levels 0, 1, 5, 10, and 50
  • PCIe x8
  • 2 internal SAS (SFF-8484) connectors (support for 8 drives)
  • LSI manufactured (and flashable)
  • XP, Vista 32/64 supported
  • Does not support Native Command Queuing
  • Does not support 3TB+ drives

* 400MHz ECC registered DIMMs with x8 or x16 DRAM components. Installing unsupported memory causes the system to hang at POST. Make sure you buy x8 or x16 memory modules:
x8 = 9 chips (1 ECC)
x16 = 5 chips (1 ECC)
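Those chip counts follow from the DIMM's 64-bit data bus plus 8 ECC bits. A quick arithmetic sketch (plain math, not anything from the Dell docs):

```python
# Chip count for a single-rank 64-bit ECC DIMM: 64 data bits + 8 ECC bits.
# Assumes the ECC bits live on one extra DRAM chip, whatever its width.
def chips_per_rank(dram_width):
    data_chips = 64 // dram_width  # chips needed to cover the 64-bit data bus
    return data_chips + 1          # plus one chip for the 8 ECC bits

print(chips_per_rank(8))   # x8 modules:  9 chips
print(chips_per_rank(16))  # x16 modules: 5 chips
```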

For the PERC 5/i, you can find the latest drivers/firmware on Dell's website. Look under downloads for the "PowerEdge 2900".

Latest firmware version: A09 5.2.2-0072 (as of September 5, 2012)



The awesome thing is that this $400-700 card can regularly be found on eBay for $100. Since the cards are so common, there are usually 2-10 for sale at any given time. If you are looking to run RAID5 or to improve your RAID performance, this is the card to get. The next closest card is going to cost $300+ unless you get lucky. Since SAS is backward compatible with SATA, you just need the correct cable: a SAS SFF-8484 to 4x SATA cable runs about $10. Make sure to get the pass-through kind. Do not get crossover or backplane cables: http://www.nowdirect.com/exec/partInfo/part_detail.tsb?prcpart=ADP2247600-R&categoryid=

PowerEdge Expandable RAID Controller (PERC) 6/i

Manuals: http://support.dell.com/support/edocs/storage/RAID/PERC6/en/UG/HTML/index.htm

PERC 5/i vs 6/i Whitepaper: www.dell.com/Downloads/Global/Power/ps2q08-20080255-Dixit.pdf

Benchmarks: http://en.community.dell.com/techcenter/storage/w/wiki/perc6-with-md1000-and-md1120-performance-analysis-report.aspx

Specs:

  • LSI SAS1078 RAID-on-Chip (ROC) at 500MHz
  • Operating temperature: up to 50°C
  • Onboard 256MB ECC registered 667MHz DDR2 3-5-5-5
  • RAID levels 0, 1, 5, 6, 10, 50, and 60
  • PCIe x8
  • 2 internal SAS (SFF-8484) connectors (support for 8 drives) *does not support SATA 1.5Gb/s
  • LSI manufactured
  • XP, Vista 32/64 supported
  • Supports Native Command Queuing
  • Does not support 3TB+ drives

For the PERC 6/i, you can find the latest drivers/firmware on Dell's website. Look under downloads for the "PowerEdge R900".

Latest firmware version: A14 6.3.1-0003 (as of September 5, 2012)



SMBus Issue with Intel Chipsets

These cards are known to have compatibility issues with some Intel chipsets, while NVIDIA motherboards generally work fine. The issue stems from the card's System Management Bus (SMBus) conflicting with the motherboard's memory detection. SMBus is a simple two-wire interface that provides the motherboard with basic device information and control. Symptoms of the conflict include improperly reported RAM sizes and POST errors.

The trick is to physically disable the card's SMBus connection. It uses just two pins: B5 (SMCLK, SMBus clock) and B6 (SMDAT, SMBus data). These two pins need to be covered with tape or nail polish. On the top side of the card, they are the 5th and 6th PCIe pins from the left. You can see the pins covered below:



Forced Airflow is Required

Intel thermal specifications: http://download.intel.com/design/iio/applnots/30663002.pdf

The maximum junction temperature (Tj) is 110°C. However, do NOT run the IOP333 passively; the heatsink needs forced airflow. Intel's thermal analysis used a heatsink of the same size but with more fins (hence more effective), and even that heatsink requires a minimum of 200LFM. Over the surface of the PERC's stock heatsink, that works out to at least 4CFM. Note that if you use an 80mm fan, you would need a higher rating of around 16CFM, and that assumes the 80mm fan sits right next to the HS.
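As a rough sanity check of that LFM-to-CFM conversion (the heatsink frontal area here is an assumed ~2.9 in², chosen to reproduce the quoted figure, not a number from Intel's note):

```python
# CFM = LFM x cross-sectional area in square feet.
LFM_REQUIRED = 200                  # Intel's minimum linear airflow, ft/min
HEATSINK_AREA_IN2 = 2.9             # assumed frontal area of the stock heatsink
area_ft2 = HEATSINK_AREA_IN2 / 144  # 144 in^2 per ft^2

cfm = LFM_REQUIRED * area_ft2
print(round(cfm, 1))                # roughly the 4 CFM quoted above
```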

Bottom Line: Make sure to force air cool the PERC 5/i CPU. The card is designed for Dell servers with forced air.

How to Flash the Dell PERC 5/i with LSI MegaRAID SAS 8480E Firmware

Note: Since the card uses a Dell bootloader, flashing can corrupt the card, as described in Post #1343 (link). It is recommended that you stay on the Dell firmware unless it does not work for you.

The LSI firmware for this card is kept more up to date with bug fixes and has more software features, so you might as well flash to the better version.

The Windows procedure is the easiest, so I'll provide those steps:
1) Download and extract the LSI flashing utility "MegaCLI - Windows"
2) Download and extract the 7.0.1-0083 firmware
3) Place the .ROM file in the MegaCLI folder
4) Open a command prompt window
5) Navigate to the MegaCLI directory
6) Run the command: MegaCli -adpfwflash -f [firmware name].rom -a0

Latest LSI firmware: 7.0.1-0083 (as of September 5, 2012):
http://www.lsi.com/downloads/Public/Obsolete/Obsolete%20Common%20Files/7.0.1-0083_SAS_FW_Image_APP-1.12.330-1300.zip

MegaCLI v8.04.07 (as of September 5, 2012):
http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/8.04.07_MegaCLI.zip

Latest Windows driver: 5.2.124 (as of September 5, 2012):
http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/5.2.124_Signed_Windows7_Driver.zip

MegaRAID Storage Manager v12.05.03.00 (as of October 23, 2012):
www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/12.08.03.03_Windows_MSM.zip

*The LSI firmware (7.0.1-0068) may cause the card to not be recognized on some systems. Reverting to 7.0.1-0051 or (supposedly) 7.0.1-0056 should resolve the issue.

Stuff for Linux: http://www.overclock.net/t/359025/perc-5-i-raid-card-tips-and-benchmarks/6000#post_18128110

Adapter to Convert Loop-Mounts into Screw-Mounts

The IOP heatsink is mounted via a spring-loop mount. This type of mount was used on older chipsets and is not common today. In addition, those older chipsets used 3" loop spacing while this card uses 2.5" spacing, so it is almost impossible to find a low-profile heatsink that will fit this card.

However, there is a $5 kit that converts the loops into screw mounts to increase chipset HSF compatibility: http://www.epowerhousepc.com/microcool-hook-adapter-northpole-p-101.html



How to Disable Battery Backup Unit (BBU) Warning

Note: This option is no longer available in the latest Firmware. To get maximum performance, you will need a Battery Backup Unit.

The card will display the following message if a battery unit is not present or not working. The battery keeps the card's cache RAM powered during a system power loss so that cached writes are preserved and can be committed to disk, preventing data loss/corruption.

"The battery hardware is missing or malfunctioning, or the battery is unplugged. If you continue to boot the system, the battery-backed cache will not function. Please contact technical support for assistance. Press 'D' to disable this warning (if your controller does not have a battery)."

Just press D and the message should not appear again. If it does, update your firmware.

More Benchmarks of PERC 5/i

Courtesy of Clubhouse: http://www.overclock.net/6091504-post2266.html




Benchmarks of PERC 5/i vs ICH9R

Test System:

  • [email protected]
  • 4GB DDR2-1095
  • Vista 64 (Defrag and SuperFetch Disabled)
  • OS HDD: Seagate 7200.10
  • Test HDD: 3x WD Raptor 8MB WD740GD-00FLC0
  • HDTune using 64KB Sectors

Terminology:

Read Ahead: the controller tries to predict which parts of files will be needed and preloads them into memory. This boosts sequential performance but hurts random performance.

Adaptive Read Ahead: read-ahead performed only when memory and I/O are available, with prioritization. Balances sequential and random read performance but takes processing power.

Write Through: writes are committed to disk before being acknowledged as complete. Data is always safe on disk, but writes run at disk speed.

Write Back: writes are stored in cache memory, acknowledged immediately, and flushed to disk when optimal. Performance increases, but the data sits in volatile memory: if power is lost, anything still in RAM is lost and never written to disk.
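A toy sketch of the write-through vs. write-back trade-off (illustrative only; the PERC's firmware is of course far more involved):

```python
# Minimal model: write-back buffers writes in volatile RAM, write-through
# commits straight to "disk" (a dict standing in for the array).
class ToyCache:
    def __init__(self, write_back):
        self.write_back = write_back
        self.disk = {}    # survives power loss
        self.dirty = {}   # volatile cache RAM; lost on power failure

    def write(self, key, value):
        if self.write_back:
            self.dirty[key] = value   # fast: completes at RAM speed
        else:
            self.disk[key] = value    # safe: completes at disk speed

    def flush(self):
        # What a BBU buys you time to do after a power event.
        self.disk.update(self.dirty)
        self.dirty.clear()

cache = ToyCache(write_back=True)
cache.write("block0", b"data")
print("block0" in cache.disk)  # False: still only in volatile RAM
cache.flush()
print("block0" in cache.disk)  # True: now safely on disk
```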

Degraded: RAID5 can operate with one drive missing. Performance is heavily impacted because the missing drive's data must be reconstructed from parity on every access.
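The degraded-mode cost comes from parity reconstruction: RAID5 parity is the XOR of the data blocks in a stripe, so every read of the missing disk means re-reading the whole stripe. A toy 3-disk example (illustrative, not the controller's actual code):

```python
# RAID5 parity is the bytewise XOR of the data blocks in a stripe.
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

d0 = b"\x11\x22\x33\x44"        # block on disk 0
d1 = b"\xaa\xbb\xcc\xdd"        # block on disk 1
parity = xor_blocks(d0, d1)     # block on disk 2

# Disk 1 dies: its block is rebuilt from every surviving block in the stripe.
recovered = xor_blocks(d0, parity)
print(recovered == d1)  # True
```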

Generalized Results:
RAID5 write speeds went from 55MB/s to 121MB/s.
RAID5 read speeds remain about the same but much more consistent.

RAID0 write speeds went from 165MB/s to 180MB/s but less consistent.
RAID 0 read speed went from 162MB/s to 182MB/s and MUCH more consistent.

CPU usage was greatly reduced and access time generally only slightly improved.

Please remember these are artificial tests that focus mostly on sequential performance.

Single Raptor



RAID5

ICH9R RAID5 w/ Caching [64KB]



ICH9R RAID5 w/o Caching [64KB]


ICH9R RAID5 Degraded w/ Caching [64KB] *Occasional BSoD*


PERC RAID5 (Write-Back, Adaptive-Read-Ahead) [64KB]


PERC RAID5 (Write-Back, Read-Ahead) [64KB]


PERC RAID5 (Adaptive-Read-Ahead) [64KB]


PERC RAID5 (Write-Back) [64KB]


PERC RAID5 Degraded (Write-Back, Adaptive-Read-Ahead) [64KB]


RAID0

ICH9R RAID0 w/ Caching [128KB]


ICH9R RAID0 w/o Caching [128KB]


PERC RAID0 (Write-Back, Adaptive-Read-Ahead) [128KB]


PERC RAID0 (Write-Back, Adaptive-Read-Ahead) [1MB]


PERC RAID0 (Write-Back, Adaptive-Read-Ahead) [8KB]
The poor performance is probably due to the 64KB block size used in the test: with an 8KB stripe size, the controller had to access 8 stripes to retrieve each block.


PERC RAID0 [128KB]


JustusIV Benchmarks of PERC 5/i

4x 1TB Samsung Spinpoint F1

RAID0


RAID5 With No Read Ahead


RAID5 With Adaptive Read Ahead


RAID5 With Read Ahead


RAID5 With Write Through


RAID5 With Write Back


Extra Cooling on My Card

CPU Heatsink removed:


I used a copper RAM-sink on the RAID processor and a Thermaltake Spirit HS for the ECC RAM (courtesy of s1rrah). The card does get quite hot, and I am thinking about replacing the CPU's HS.


Stock heatsink routed, and cut-up pieces of copper from a soldering tip.


Prepping an old laptop heatpipe/heatsink (notice the copper bits underneath as gap fillers)


Arctic Silver Adhesive to hold it together.


Ghettofied.
 

Registered · 2,314 Posts
That is a great deal!

I have used that card in a couple of customer builds; it is the same logic built into the high-end Dell workstations.

FYI: On Supermicro mobos (and possibly others), when doing a large data transfer (500GB+) across multiple drives, the card will get errors unless it is plugged into an x16 slot (same with the LSI cards). Still trying to find out why from Dell and LSI...
 
Reactions: DuckieHo

Premium Member · 65,162 Posts · Discussion Starter · #3
Quote:

Originally Posted by airbozo View Post
That is a great deal!

I have used that card in a couple of customer builds and it is the same logic built into the high end Dell Workstations.

FYI: In the supermicro mobo's (and possibly others), when doing a large data transfer (500gb+) across multiple drives, that card will get errors unless it is plugged into an x16 slot. (same with the LSI cards) Still trying to find out why from Dell and LSI...
Do these run hot? I know Intel's new dual-core IOPs (found on Adaptec's cards) run VERY hot at 80C+. Aftermarket cooling is recommended on those cards since they are known to thermal-shutdown. I've seen someone watercool his RAID controller because of this... seriously.
 
Reactions: mortimersnerd

Premium Member · 65,162 Posts · Discussion Starter · #4
Got my card! Updated pics.

Benchies coming after I get my cables.
 
Reactions: kennymester

Registered · 2,463 Posts
This looks pretty sweet. I think I'm going to get one for my Home Server. The guy on eBay is about 2 miles from where I live. Thanks Duckie!

Edit: Do you think you could take some shots of the ports and such?
 

Premium Member · 65,162 Posts · Discussion Starter · #6
Quote:

Originally Posted by kennymester View Post
This looks pretty sweet. I think I'm going to get one for my Home Server. The guy on Ebay is about 2 miles from where I live. Thanks Duckie!


Edit: Do you think you take some shots of the ports and such?

In most cases, you don't need one of these for a home server; it would be overkill.

The card has two SFF-8484 ports (each carries 4 SAS/SATA lanes). You can use these cables to support 4 SATA devices per port: http://www.cross-mark.com/50cm-seria...ble-p-842.html

 

Premium Member · 65,162 Posts · Discussion Starter · #7
Updated the directions for flashing it to an LSI MegaRAID SAS 8480E.
 

Premium Member · 65,162 Posts · Discussion Starter · #9
Quote:

Originally Posted by Mootsfox View Post
Any idea of improvement over onboard RAID setups?
With RAID5 or large arrays, the performance gain is massive... I am waiting for my cables before I can start benching RAID0.
 

Premium Member · 11,951 Posts
Dude this is awesome, other cards with an IOP333 are extremely expensive, ESPECIALLY the SAS ones!!!!

Any idea on the max amount of memory it supports? It would easily be worth spending the money for a 2GB stick and enabling write-back cache
 

Registered · 2,463 Posts
@Duckie

The reason I would want it for my home server is that I'm running three 1TB drives in RAID 5 off the Intel chipset. As you can probably guess, it's slow as hell.
 

Premium Member · 65,162 Posts · Discussion Starter · #12
Quote:

Originally Posted by Manyak View Post
Dude this is awesome, other cards with an IOP333 are extremely expensive, ESPECIALLY the SAS ones!!!!

Any idea on the max amount of memory it supports? It would easily be worth spending the money for a 2gb stick and enabling write-back cache

The maximum supported memory is 512MB.

For benching, I will be using 3x 74GB Raptors. I will compare the PERC 5/i against an SB700. I would do it against an ICH9R, but I don't have a spare drive to install an OS on. I plan to use the same OS and bench the arrays in RAID0 and RAID5.

BTW, the CPU is ridiculously hot. I need to figure out how to get a better HS on it.
 

Premium Member · 6,588 Posts
Quote:

Originally Posted by DuckieHo View Post
The maximum support memory is 512MB.

For benching, I will be using 3 74GB Raptors. I will compare the PERC 5/i against a SB700. I would do it against a ICH9R but I don't have a spare drive to install an OS. I plan to use the same OS and bench the arrays in RAID0 and RAID5.

BTW, the CPU is ridiculously hot. I need to figure out how to get a better HS on it.
Awesome Duckie, can't wait for results. Great guide too!

Have you tried active cooling such as a small 80mm fan on the heatsink? I'm always amazed at how well tiny 80mm fans work.
 

Premium Member · 65,162 Posts · Discussion Starter · #14
Quote:

Originally Posted by TheLegend View Post
Awesome Duckie, can't wait for results. Great guide too!

Have you tried active cooling such as a small 80mm fan on the heatsink? I'm always amazed at how well tiny 80mm fans work.


The card actually sits 2in below a 120mm fan, so it is getting plenty of airflow. Even with that, I can't touch it for more than 2s, and this is at idle! I don't understand why they use such a crappy HS on these. Plus, it uses the clip mounting method, so it's going to be hard to find an aftermarket HS that fits. Gotta look for old Intel chipset heatsinks.

If you look at the 8480E, you see a much larger HS:
 

Premium Member · 4,687 Posts
I want to get one of these, but since I'm only RAID0ing two $50 HDDs I don't think it would be a smart move (especially since I need a new GPU).

A RDD would probably be a better option for me though.
 

Premium Member · 11,951 Posts
Quote:

Originally Posted by DuckieHo View Post
The maximum support memory is 512MB.

For benching, I will be using 3 74GB Raptors. I will compare the PERC 5/i against a SB700. I would do it against a ICH9R but I don't have a spare drive to install an OS. I plan to use the same OS and bench the arrays in RAID0 and RAID5.

BTW, the CPU is ridiculously hot. I need to figure out how to get a better HS on it.

Damn. Well I ordered one anyway
I'll post the difference with the ICH9R whenever I get it. It won't be much though, my HDs don't really tax it.

Can't wait for the cash to get some 15k SAS drives
 

Premium Member · 65,162 Posts · Discussion Starter · #17
Quote:

Originally Posted by Manyak View Post
Damn. Well I ordered one anyway
I'll post the difference with the ICH9R whenever I get it. It won't be much though, my HDs don't really tax it.

Can't wait for the cash to get some 15k SAS drives

I am pretty sure the card will not POST on your motherboard natively due to the SMBus conflict. Make sure to cover pins B5 and B6 on the card's PCIe connector to bypass the issue.

Also, if you bought it from eBay.... make sure to use the 20% cashback through Microsoft Live Search.
 

Premium Member · 11,951 Posts
Quote:

Originally Posted by DuckieHo View Post
I am pretty sure the card will not POST on your motherboard naively due to a SMBus conflict. Make sure to cover Pin 5+6 on the PCIe slot to bypass this issue.

Also, if you bought it from eBay.... make sure to use the 20% cashback through Microsoft Live Search.
Yup, figured I'd have to do that. Not like it's a big deal though.

And I almost forgot about the cash back
 

Registered · 443 Posts