Overclock.net › Forums › Components › Hard Drives & Storage › RAID Controllers and Software › PERC 5/i RAID Card: Tips and Benchmarks

PERC 5/i RAID Card: Tips and Benchmarks - Page 432

post #4311 of 7188
Quote:
Originally Posted by DJZeratul View Post
Are you sure that it's still a RAID5? It sounds to me like it just converted to a RAID0. 3.79 TB sounds correct for a 4x1TB disk RAID0. The benchmarks also seem to be on par for 4x1TB Cav. Blacks in RAID0
You're right, it is a R0 now! How did that happen? It was a R5 before. I thought if you changed its RAID type, it would destroy all the data on the VD.
Edited by ShadowFox19 - 5/3/10 at 4:30pm
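Usable capacity is a quick way to sanity-check which level an array ended up as, since each level sacrifices a different amount of raw space. A minimal sketch of the arithmetic (decimal TB throughout; an OS reporting in TiB will show smaller numbers, e.g. 4 TB as roughly 3.64 TiB):

```python
# Usable capacity by RAID level (illustrative helper, not controller output).
def usable_tb(level, n_drives, drive_tb):
    if level == 0:
        return n_drives * drive_tb          # striping only: all capacity usable
    if level == 5:
        return (n_drives - 1) * drive_tb    # one drive's worth consumed by parity
    if level == 10:
        return n_drives * drive_tb / 2      # half the drives hold mirror copies
    raise ValueError("unsupported RAID level")

print(usable_tb(0, 4, 1.0))   # 4x1TB RAID0 -> 4.0 TB raw
print(usable_tb(5, 4, 1.0))   # 4x1TB RAID5 -> 3.0 TB raw
```

For four 1TB drives, RAID0 exposes all 4 TB while RAID5 exposes only 3 TB, so a jump in reported capacity alone is a strong hint the level changed.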
    
post #4312 of 7188
Quote:
Originally Posted by ShadowFox19 View Post
You're right, it is a R0 now! How did that happen? It was a R5 before. I thought if you changed its RAID type, it would destroy all the data on the VD.
Easiest way to check is either in the PERC BIOS or using the MegaRaid GUI tool.

And yeah, the "migration wizard" can convert a RAID5 to a RAID0 without losing data. I am not sure if you can go the other way, though, without losing data.

It sounds like when you did the migration wizard, it might have pre-selected RAID0 as the conversion, and you would have had to explicitly set RAID5 as the destination model.

After further research: the only way for you to go back to RAID5 now, without backing up the data and recreating the virtual disk, is to add another disk to the RAID0 you just created and make sure that this time the destination virtual disk is configured as RAID5.
Edited by DJZeratul - 5/3/10 at 4:36pm
    
post #4313 of 7188
Quote:
Originally Posted by DJZeratul View Post
Easiest way to check is either in the PERC BIOS or using the MegaRaid GUI tool.

And yeah, the "migration wizard" can convert a RAID5 to a RAID0 without losing data. I am not sure if you can go the other way, though, without losing data.

It sounds like when you did the migration wizard, it might have pre-selected RAID0 as the conversion, and you would have had to explicitly set RAID5 as the destination model.
I'll just keep it the way it is for now. I've got all the data on the array mirrored onto my 2TB Green drive anyway. I'll save the conversion project for another day. Plus, I've got Carbonite and if I had to redo everything, I'd have to re-upload about 190GB of data and that would take forever...it's fine, LOL!
    
post #4314 of 7188
Hi Everyone

Thought I'd make my first post to ask for some help and to say a big thank you to you all for this thread; it's helped me enormously in my research and implementation of a new RAID5 set on my rig.

The Rig is as follows:

Dell Perc 5/i Card
- 256MB Cache RAM
- BBU and cable installed
- Installed on Maximus Formula (X38) in second 16x PCI-Express slot
- Pin mod done; the system wouldn't even POST without it
- Firmware flashed to LSI 0056 firmware revision.
- Drivers installed are Dell drivers from Microsoft under Windows 7.
- MegaRaid software used to monitor/configure the RAID setup.

4x Samsung F3 500GB Hard Drives, current config:
- Full volume RAID 5 with single partition
- 128K Stripe
- Adaptive Read Ahead enabled (the Always Read Ahead option was also tested)
- Write Back enabled (Write through is a big no-no! )
- Direct IO enabled (Cached IO also tested)
- Windows Write Buffer flushing disabled

Benchmark tool: HD Tune Pro Trial
Benchmark testing with 256KB Block selected in options

Everything is working: Windows 7 is installed successfully on the RAID5 array (it felt like a marathon to get here!) and it's working fine.

However, I'm just not seeing the RAID5 read speeds I'd expect from the above setup with the benchmark options selected.

At first, I tested a few different disks in different RAID0 configurations - with 4 disks, I was seeing staggeringly high benchmark results (almost 500MB/s average read speed!).

So I thought that was all good, and went ahead with a RAID5 setup. However, at first I was only getting 120-140MB/s average read speed. I narrowed this down to the Fast Initialisation option: it wasn't much use, as the card then had to perform background initialisation after the fact.

So I dropped into the card BIOS, recreated the RAID5 array and kicked off the full initialisation cycle. I left it about 90 minutes; it completed, and re-testing under Windows showed an improvement to around 210-220MB/s read speed. Burst speeds are phenomenal, however, hitting 450MB/s!

However, I was expecting a read speed of around 300MB/s, given that one disk (effectively) is used for parity in RAID5.

Write speeds are absolutely fine. In fact, better than fine - using the same benchmark process, I was seeing around 340MB/s average on writes!

So, can anyone think of any other improvements or suggestions for fixing the read speed in this benchmark? If the numbers had been the other way around I would have been happy, but I want as much read speed as this setup is capable of giving!
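For a rough expectation, sequential RAID5 reads scale with the data-disk count (n-1 of n drives), since one drive's worth of each stripe is parity. A back-of-the-envelope sketch, where the ~110 MB/s per-drive figure is an assumption for illustration, not a measured Samsung F3 number:

```python
# Ideal sequential-read estimates by RAID level (per-disk rate is an assumed figure).
PER_DISK_MBS = 110  # assumed single-drive sequential read rate, MB/s

def raid0_read_est(n, per_disk=PER_DISK_MBS):
    # All n spindles stream data in parallel.
    return n * per_disk

def raid5_read_est(n, per_disk=PER_DISK_MBS):
    # Each stripe holds n-1 data chunks plus one parity chunk,
    # so large reads see roughly (n-1) disks' worth of data rate.
    return (n - 1) * per_disk

print(raid0_read_est(4))  # ideal ceiling for a 4-disk RAID0
print(raid5_read_est(4))  # ballpark for the same disks in RAID5
```

Under these assumptions a 4-disk RAID5 lands near 330 MB/s ideal, so a measured 210-220 MB/s points at the benchmark settings (or block size) rather than the array itself being the limit.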
post #4315 of 7188
Quote:
Originally Posted by Arthalen View Post
So I dropped into the card BIOS, recreated the RAID5 array and kicked off the full initialisation cycle. I left it about 90 minutes; it completed, and re-testing under Windows showed an improvement to around 210-220MB/s read speed. Burst speeds are phenomenal, however, hitting 450MB/s!

However, I was expecting a read speed of around 300MB/s, given that one disk (effectively) is used for parity in RAID5.

Write speeds are absolutely fine. In fact, better than fine - using the same benchmark process, I was seeing around 340MB/s average on writes!

So, can anyone think of any other improvements or suggestions for fixing the read speed in this benchmark? If the numbers had been the other way around I would have been happy, but I want as much read speed as this setup is capable of giving!
Post pics of your benchmarks - the numbers (especially just the averages) can often mask the issues. Your settings sound fine, though.

Don't take much notice of the burst speeds - they are skewed by the onboard cache, so in practice don't mean much.

Run your bench with a larger block size - with a 128k stripe, a 256k request can only read from 2 disks. Use 1MB to properly bench your disks.

One thing to check though - is your battery showing as connected and charged? (I believe your issues are related to your test settings, but it never hurts to ask... ).
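The block-size advice above can be made concrete: the number of spindles a single request can engage is roughly the request size divided by the stripe size, capped at the number of data disks. A minimal sketch:

```python
import math

def disks_touched(io_kb, stripe_kb, data_disks):
    # Best case: a request of io_kb spans ceil(io_kb / stripe_kb) stripe
    # units, capped at the number of data disks in the stripe.
    return min(data_disks, math.ceil(io_kb / stripe_kb))

for io in (64, 256, 1024):
    # 3 data disks, as in a 4-drive RAID5 with a 128 KB stripe
    print(io, "KB ->", disks_touched(io, 128, 3), "disks")
```

So with a 128 KB stripe and 3 data disks, a 256 KB request touches only 2 disks, while a 1 MB request engages all 3 - which is why the larger benchmark block size matters.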
post #4316 of 7188
Thanks for the suggestions. I've destroyed the RAID5 for now, and gone back to my old 320GB WD Cav Blue drive with Vista on it. This way I can experiment with the RAID setup and bench it 'cleanly' without the OS getting in the way.

Also - and your suggestion seems to echo this - I think ATTO will give a much better indication of the speeds I can get at different block sizes; the initial 256KB block size was chosen so I could compare results against benchmarks I've seen in this thread.

I'm going to give RAID 10 a whirl as well, although I don't expect to see better than 2-disk RAID0 speeds, as it will be striping over two mirrored pairs, whereas RAID5 gives me striping over 3 disks (effectively, ignoring parity).

Will post back with results and screenies.

Edit: Yep, the battery is showing as connected and charged!
post #4317 of 7188
Probably not a good idea to install your OS on a RAID 5.
post #4318 of 7188
Here are the results for RAID 0, 5 and 10...

RAID 0: [benchmark screenshot]

RAID 5: [benchmark screenshot]

RAID 10: [benchmark screenshot]
Test Settings used across all tests -

MegaRAID Virtual Disk Options:
Stripe Size: 128KB
Read: Adaptive Read Ahead
Write: Write Back
IO Policy: Direct IO
Access Policy: Read Write
Disk Cache Policy: Enabled
Init State: Fast Initialization

Windows Disk Management Options:
Disk Initialised - MBR Partitions
Set as Basic Disk
Simple volume spanning whole disk
Quick Format: NTFS, Default Allocation Unit size

For RAID 5 and 10, a full disk initialisation was performed prior to testing. All tests were run at least 5 times to ensure the results were consistent.
post #4319 of 7188
Quote:
Originally Posted by pookguy88 View Post
I'm having problems flashing my Perc 5/i, I've been trying to flash it to the latest version (5.2.2-0072) but nothing seems to be working.

First I tried the Windows exe, but it wouldn't work because it didn't like my operating system (Windows Server 2008 x64). Then I tried booting with the floppy - still no go; it says 'STOP FLASH failed.' Finally I tried formatting my USB thumb drive as a boot drive and putting the update on there (one of the methods on the Dell page), and it gave the same message as the floppy method... Does anyone have any other tips? I'm all out of ideas. Thanks!
I will hazard a guess that you got this card with the PCI bracket already installed, as I did. I had the same problem: I could flash any LSI 8480E firmware, but none of the Dell ones.

It turns out Dell workstations can also come with PERC 5/i cards. So I went to the Dell site, selected the Precision Workstation 690, and found firmware 5.2.2-0072, A06. The 5.2.2-0072, A09 listed on page 1 of this thread is for the PowerEdge server version. I used the MegaRAID manager to update the firmware using the FW4615IA.rom file.

I don't know whether the workstation version of the PERC 5/i differs from the server version, as I don't have one to compare.

Someone else with a PERC 6/i also has this problem. I can't confirm whether this solution applies there too, but it's worth a try.
post #4320 of 7188
Hey guys,

First up, thanks to the OP for creating this thread - all the information within has been a great help.

However, I'm after advice on an issue I'm having with my Dell PERC 5/i.

The problem is that my RAID 10 array, consisting of 4x 320GB SATA disks, is constantly failing. Disk 00 keeps coming up as either foreign or requiring a rebuild. This has been happening basically non-stop since I created the array, and I'm at a loss diagnosing it.

I'm wondering if anyone has any advice as to why this might be happening.

My specs are as follows:

Intel Q6600
Asus P5K-E Wifi - latest bios
8GB of memory (4x 2GB DIMMs)
2 x 320GB Seagate model ST3320613AS with latest firmware
2 x 320GB Seagate ST3320418AS with latest firmware
Nvidia 8400GS PCIe in second PCI-E port
Dell Perc 5/i running LSI firmware 7.0.1.74
A Pioneer DVD-ROM drive connected to the onboard Intel ICH9 controller

Problem:

I created a RAID 10 array comprising the above disks, then initialised it and ran consistency checks prior to installing ESX on the box. The array passed fine, and from there I proceeded to install ESX and a bunch of Windows Server and Solaris VMs. Performance when benchmarking was brilliant - up to 300MB/s reads and well over 200MB/s writes, from 8KB files all the way up to 2GB - so all seemed good. It ran for a good 24 hours without fault, but the second I restarted: bam! Disk 00 was marked as foreign and I needed to re-import it and rebuild.

I thought that was a one-off; however, I rebooted again and - bam! This time the array was simply marked as degraded, and the PERC 5 at least recognised it and automatically rebuilt it.

Then the third time, ESX came up with a warning that disk 00 was again degraded and required a rebuild.

So I guess this is where I'm confused. It worked for some hours without fault and benchmarked and performed brilliantly for 24 hours, but disk 00 seems to be constantly dying.

What's also got me is that when I run the built-in PERC consistency check, the array passes; when I create the array, it works - it just seems to fail over time. If I leave the server up for longer than 4 hours, disk 00 will fail, and if I reboot, 9 times out of 10 the PERC will list the array as degraded and disk 00 as either foreign or degraded.

My first guess was that it was failing because I'm running two different types of disk with different firmware revisions. So I created just a RAID 1 array with two of the same matched drives - bang, same problem, disk 00 failed again. That ruled out mismatched drives as the cause.

I have taken the disk out and hooked it up to my desktop: there are no SMART errors on it and its performance was fine. I ran a 24-hour burn-in test with no faults, did a full zero, and ran a block check - all passed. The disk appears to be in perfect working order, not faulty. I have also put the second disk onto port 00, and yep, same problem.

So all I can think is that there's some compatibility issue with my array using two different disk models, and that's bringing down the array. Either that, or there's something funky going on with the motherboard, as I know these controllers don't get on well with Intel-based chipsets. But what gets me is that performance is perfect when it is working, and it's whatever disk is in port 00 that fails.

Perhaps a different firmware would be advisable? Is anyone running a similar setup and able to recommend a good firmware?

Edit:

On further googling, it seems ESX 4.0's initial drivers for LSI/Dell PERC controllers are a bit dodgy, and they're known to break RAID arrays if you're using mismatched disks or firmwares like I am. I've now patched my ESX host's LSI RAID drivers with the latest version and hope this fixes it. I'll let you know what happens, in case anyone else out there is running ESX and has similar issues.

This makes sense, as I just did some more testing: I could not break my RAID 10 array when I installed Server 2008 straight onto it - it has run fine for the past 4 hours and 20 reboots without a single degraded or foreign-disk failure. It seems ESX might just be the cause.

Thanks
Edited by matthew87 - 5/9/10 at 7:21am