Overclock.net › Forums › Components › Hard Drives & Storage › RAID Controllers and Software › PERC 5/i RAID Card: Tips and Benchmarks

PERC 5/i RAID Card: Tips and Benchmarks - Page 422

post #4211 of 7188
Quote:
Originally Posted by BooDaddy View Post
BLinux,
thanks for the link! That would have taken me a while to get to those posts.
I installed iozone from the repos and have read its man page, but I cannot find how to run the tests against a particular disk. Currently I have my RAID array on /dev/sdb1 and my OS disk on /dev/sdb6. I would like to run the tests against /dev/sdb1, but I do not see a switch for that.

Also, have you had any luck with the MegaRAID tools? Ideally, I would like to have a service or daemon running on my Ubuntu Server install and connect to it from the MegaRAID client installed on my laptop. I'm figuring the MegaRAID CLI will do this, but I can't get the package to work. It's an .rpm and I need a .deb. I have tried using alien, but am not having any luck.
iozone does its benchmarking in the current working directory. i usually create a subdir, cd into it, and run iozone there.
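To make that concrete, here is a minimal sketch. The mount point is hypothetical (substitute wherever the filesystem on /dev/sdb1 is actually mounted), and the iozone flags shown are just one reasonable combination, not the only way to run it:

```shell
# iozone benchmarks whichever filesystem holds its current working directory,
# so the whole trick is to create a directory on the array and cd into it first.
# RAID_MOUNT is a stand-in for the array's real mount point (e.g. /mnt/raid);
# it defaults to a temp dir here only so the sketch is self-contained.
RAID_MOUNT="${RAID_MOUNT:-$(mktemp -d)}"
TESTDIR="$RAID_MOUNT/iozone-test"
mkdir -p "$TESTDIR"
cd "$TESTDIR"
pwd    # confirm we're on the filesystem we mean to benchmark
# iozone -a -i 0 -i 1 -g 1G
#   -a         automatic mode: sweep record and file sizes
#   -i 0 -i 1  run only the write/rewrite and read/reread tests
#   -g 1G      grow the test file past the controller's cache size,
#              so the numbers aren't just the cache speaking
```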

yes, i use both the MegaRAID CLI and Dell "om" tools... usually the Dell tools since i have the Dell firmware. i have had no problems running either, but i'm on RHEL/CentOS so RPMs are easy. But, for the MegaRAID CLI, i've just used the tarball in the past and that worked fine too. i'm not much of a debian user so I can't help you there specifically. what type of problems are you running into?
post #4212 of 7188
Well, when I use alien to convert the rpm and then untar it, I get the dirs it needs.
I then moved MegaCLI64 over to the /usr/bin/MegaCLI directory and ran MegaCLI --AdpCount, and it reports 0 controllers found.

I get a plethora of errors when I try to convert the Storage Manager over to a deb and install it. Is there a command or switch in MegaCLI that will advertise the RAID card over a port (much like the Storage Manager does) so that I can connect to it from another PC?
post #4213 of 7188
Quote:
Originally Posted by BooDaddy View Post
Well, when I use alien to convert the rpm and then untar it, I get the dirs it needs.
I then moved MegaCLI64 over to the /usr/bin/MegaCLI directory and ran MegaCLI --AdpCount, and it reports 0 controllers found.

I get a plethora of errors when I try to convert the Storage Manager over to a deb and install it. Is there a command or switch in MegaCLI that will advertise the RAID card over a port (much like the Storage Manager does) so that I can connect to it from another PC?
i can't answer your question about the remote interface, i just don't use it.

i don't know about converting RPMs to Debs. have you tried just using the tarball directly? i've never had issues using the MegaCLI utility.
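Before blaming the .deb conversion, it may be worth checking that the OS can see the card at all. This is a hedged diagnostic sketch, not Debian-specific advice: on Linux, MegaCLI talks to the controller through the megaraid_sas kernel driver, and "0 controllers found" is a common symptom of that driver not being loaded. The binary name varies by package (MegaCli, MegaCli64, megacli):

```shell
# Is the card visible on the PCI bus, and is the driver loaded?
lspci 2>/dev/null | grep -i raid || echo "no RAID device on the PCI bus (or lspci unavailable)"
lsmod 2>/dev/null | grep -q megaraid && echo "megaraid driver loaded" || echo "megaraid driver NOT loaded"
# If the driver is missing, load it and retry the adapter count (needs root):
#   sudo modprobe megaraid_sas
#   sudo ./MegaCli64 -AdpCount
```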

or, you could just switch over to an RPM based system... ;-P
post #4214 of 7188
BLinux,
Thanks for trying, at least.
I have tried using the tarball directly, but I cannot get it to recognize my card; it always returns 0 controllers found.
I'm actually a big fan of CentOS, but I'm just a slightly bigger fan of Ubuntu Server (and Debian in general).
post #4215 of 7188
Hi all, I've got a bit of a problem. I just installed the card (5/i), but my OS is Windows Server 2008 x64. I'm trying to upgrade the firmware, but the Dell package doesn't seem to like my OS. Is there any way to do this? Also, what program/utility should I be using to manage the card with my particular OS?
post #4216 of 7188
I'm just going to summarize some info I've gathered while trying to figure out whether CCTL (aka TLER/ERC) can be changed on desktop drives, in particular the Samsung F3 2TB disk.

First the good news: the Samsung F3 2TB (HD203WI) does support changing the CCTL settings, and the change survives a warm reboot. This means rebooting is OK, but as soon as you shut down or cut the power, the settings revert to the default, which is disabled. A workaround is to set CCTL at every cold boot (e.g. via a Linux Live CD).

Source: http://forums.storagereview.com/inde...333-tler-cctl/

Now the bad news: the PERC doesn't seem to support the smartmontools ERC commands that have been made available here.

Even though smartmontools supports the PERC (or LSI MegaRAID) card under Linux, the ERC command doesn't seem to be supported. I have been in contact with the author of the ERC-enabled version of smartmontools, who has tested this himself and explained that each RAID card is accessed differently, so the ERC command has to be customized for each card/driver. The author mentioned there had been a chance that other RAID cards (including the PERC) would be supported once the command was implemented in official smartmontools. That doesn't seem to have happened, as the newest version, 5.40, which contains the ERC command, doesn't seem to work with the PERC.
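For anyone picking this up later, here is a sketch of the SCT ERC interface as smartmontools exposes it. The device names are hypothetical, and — per the above — the PERC passthrough did not accept these commands at the time of writing, so treat this as the syntax to test, not a working recipe:

```shell
# DISK is a hypothetical directly-attached drive; adjust to your system.
DISK="${DISK:-/dev/sda}"
if command -v smartctl >/dev/null; then
    # read the drive's current SCT ERC (TLER/CCTL) timers; needs root on real hardware
    smartctl -l scterc "$DISK" || true
else
    echo "smartmontools not installed"
fi
# set read/write recovery limits to 7.0 s (values are in 100 ms units):
#   smartctl -l scterc,70,70 /dev/sda
# address a disk sitting behind an LSI/PERC controller (N = device number):
#   smartctl -d megaraid,N -l scterc /dev/sdb
```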

There is also another tool called HDAT2, which runs under MS-DOS and can change ERC settings. I have had no luck with this tool either.

I'm going away for a few weeks and won't be able to investigate further or contact devs etc., but I thought I'd share my experiences with you guys here, hoping that someone with the need/urge to get CCTL/TLER/ERC support for desktop drives on the PERC might pick up where I've left off. Wouldn't it be great not to have to worry about drives dropping out of an array?
post #4217 of 7188
Not sure if anyone has run into this issue with Dell PERC 6/i SAS RAID controllers purchased after ~Jan 2010, but here goes just in case:

YES!!! Masking pins B5 and B6 on the newest versions (after ~Jan 2010) of the Dell PERC 6/i SAS disk controller will mask off the card's SMBus and prevent it from conflicting with the onboard SMBus functionality, effectively allowing the system to see all the RAM installed on the system board. (Symptom: 8 GB installed, but only 4 GB seen when the PERC 6/i is in place. Masking pins B5 & B6 with fingernail polish {thank you Babe!} on the PERC 6/i {and always on the PERC 5/i} lets the motherboard see the full amount of RAM you have installed.) Hope that's clearer than mud...

Now, just so everyone knows: I had two older-model PERC 6/i SAS RAID controllers (one used, one new), and both of those always worked without masking any pins. But the new PERC 6/i controllers that I've purchased after Jan 2010 require masking the B5 & B6 pins (just like we've always had to do with PERC 5/i controllers). So Dell has obviously started connecting these pinouts to the RoC controller, where it hadn't before...

I thrashed with this for a while, till I got to the point of, "what the heck, it ain't working anyway, and if I'm careful I think I can remove the nail polish from the pinouts if I have to..." Before this, I tried updating my system board BIOS (both newer and older versions). I tried updating the firmware on the PERC controllers (overwriting the latest version, and also trying LSI's firmware, which works on the PERC 6/i too, by the way). I tried disabling every extra motherboard component (USB controller, FireWire, extra disk controllers, sound, etc.) and removed every other component from the system (only the PERC 6/i and video installed). I tried every combination of slots for video and PERC, and I even tried a PCI video card I had laying around. All to no avail... So the only thing that finally did work was masking pins B5 & B6.


I have:

ASUS P7P55D EVO motherboards
Intel i5 750 quad-core CPUs
MSI (sadly enough, but works for home lab servers) 8800 GT video cards
Dell PERC 6/i SAS RAID controllers
6x WD 1 TB FALS SATA HDDs (two mirrored for OS, four in RAID 5 for data)
blah, blah, blah...

Anyway... All, please enjoy your new PERCs!!! =)
post #4218 of 7188
Quote:
Originally Posted by mmouse69 View Post
Not sure if anyone has run into this issue with Dell PERC 6/i SAS RAID controllers purchased after ~Jan 2010, but here goes just in case:
...
What motherboard were you using, and did the same board not require any pin modding on your earlier PERC?

I have a couple of PERC 6/i cards, all older than Jan of this year (not sure of the dates, but some I have owned for ~12 months). All require the SMBus pins to be masked when used in an Intel S3000AH server board.
post #4219 of 7188
Hey all! This is my first post on the board.

I'm pretty much a RAID newb and I'm a little embarrassed about it. For whatever reason, I've just never had a need for drive redundancy on the servers I've managed and usually haven't had the budget to upgrade.

Regardless, when I started at my current company, I inherited a Dell PowerEdge 840 running a PERC 5/i card with 4x 73GB Seagate 15,000 RPM SAS drives in RAID 5. I was able to get into the RAID BIOS to see how it's set up, as the previous tech didn't install any software array managers. It looks like there is just one VD, which contains everything - system and file storage.

We've reached a point where it's time to expand the storage. It seems like the simplest thing to do is just pick up 4 more identical Seagate drives (these drives have been discontinued, so I should stock up now while I can) and then just expand the array.

My questions are as follows:

1. What do best practices dictate? Should I recreate the array with one VD for system files and one VD for storage? We don't suffer from any noticeable bottlenecks or general sluggishness.
2. How much brain damage will I endure if I upgrade to newer, larger-capacity drives?
3. I'm out of physical drive room inside the server box... can anyone recommend an external enclosure for the extra drives I'll be adding? I think I would have to run a new SAS cable from the external enclosure into the second port on my PERC card.

Thoughts?

Thanks in advance!
post #4220 of 7188
Quote:
Originally Posted by Frrrrrrunkis View Post
Regardless, when I started at my current company, I inherited a Dell PowerEdge 840 running a PERC 5/i card with 4x 73GB Seagate 15,000 RPM SAS drives in RAID 5. I was able to get into the RAID BIOS to see how it's set up, as the previous tech didn't install any software array managers. It looks like there is just one VD, which contains everything - system and file storage.

We've reached a point where it's time to expand the storage. It seems like the simplest thing to do is just pick up 4 more identical Seagate drives (these drives have been discontinued, so I should stock up now while I can) and then just expand the array.
Because you have 4 drives and are adding 4 more, I think you can attach the new 4 drives to the "second" channel, create a new VD, and migrate the data across... Acronis True Image should handle this perfectly.