Overclock.net › Forums › Components › Hard Drives & Storage › RAID Controllers and Software › PERC 5/i RAID Card: Tips and Benchmarks

PERC 5/i RAID Card: Tips and Benchmarks - Page 515

post #5141 of 7150
Quote:
Originally Posted by Tanglefoot69 View Post
Can anyone tell me which P67-based motherboards are compatible with the PERC 5/i? I have searched all through this thread and can't find the answer. The boards I am considering are the ASRock P67 Extreme6 and the ASUS P8P67 DELUXE.

Also, can the PERC 5/i be used in the PCIe 2.0 x16 slots of the P67 boards?

Thanks
Just finished setting up my PERC 6/i on an ASUS P67 board without the pin mod. Works just fine.

Sent from my Droid using Tapatalk
post #5142 of 7150
Guys, I'm now ****ting bricks.

I figured out the problematic drive last night by shutting down my server and pulling the drives one by one to another server to check for errors. I found the bad disk, and that was that.

I put it back into place in the server, added the new drive to my 8th port, booted into the PERC's BIOS, set up the new drive as a global hot spare, then set the bad drive offline. I shut down, pulled the bad drive, booted up, then left it in the BIOS to rebuild overnight.

Come this morning, it's rebuilt, I boot into Windows, and it doesn't detect the array. To Windows, it's just one massive empty filesystem.

What do I do, what have I ****ed up?

I've been googling for an hour now, and nothing so far has been helpful.
post #5143 of 7150
Quote:
Originally Posted by FatBoyNotSoSlim View Post
Guys, I'm now ****ting bricks.

I figured out the problematic drive last night by shutting down my server and pulling the drives one by one to another server to check for errors. I found the bad disk, and that was that.

I put it back into place in the server, added the new drive to my 8th port, booted into the PERC's BIOS, set up the new drive as a global hot spare, then set the bad drive offline. I shut down, pulled the bad drive, booted up, then left it in the BIOS to rebuild overnight.

Come this morning, it's rebuilt, I boot into Windows, and it doesn't detect the array. To Windows, it's just one massive empty filesystem.

What do I do, what have I ****ed up?

I've been googling for an hour now, and nothing so far has been helpful.
Sorry to hear about your troubles. I hope you have a backup of the data? It is situations like this where I hope others will learn that RAID redundancy is not a substitute for data backups.

I don't know much about Windows; I'm mostly a Linux/Unix guy... but if there is a Windows tool that can detect file systems and/or repair/restore the file system structure, I would start there to see if the file system is still intact apart from a corrupted header or something like that. Windows people always say to reboot as if it cures everything... so I guess you might try rebooting a few times to see if it finally detects the NTFS file system?

Either way, good luck... I've had to do data recovery on broken RAID systems many times, just not in the Windows world... it's never fun.
post #5144 of 7150
Quote:
Originally Posted by FatBoyNotSoSlim View Post
Guys, I'm now ****ting bricks.

I figured out the problematic drive last night by shutting down my server and pulling the drives one by one to another server to check for errors. I found the bad disk, and that was that.

I put it back into place in the server, added the new drive to my 8th port, booted into the PERC's BIOS, set up the new drive as a global hot spare, then set the bad drive offline. I shut down, pulled the bad drive, booted up, then left it in the BIOS to rebuild overnight.

Come this morning, it's rebuilt, I boot into Windows, and it doesn't detect the array. To Windows, it's just one massive empty filesystem.

What do I do, what have I ****ed up?

I've been googling for an hour now, and nothing so far has been helpful.
Are you sure you actually pulled the drive that the PERC thought was faulty? All you should have needed to do was replace the drive the PERC kicked from the array, and the rebuild should have triggered automatically; there should have been no need to touch the BIOS.

You may have (effectively) rebuilt a blank array over your old data because the card thought that two drives had failed...
post #5145 of 7150
Is anyone here using a RAID card with ESXi? I am wondering how you are getting around the 2TB volume limit.

Also, how do virtual drives work? If I have a RAID 5 array, can I rearrange the virtual volumes without losing data?
    
post #5146 of 7150
Hey guys, got a question. I got four new 2TB Hitachi drives and set them up in RAID 5. Currently the array properties show "Current Write Policy: Write Through" and "Default Write Policy: Write Back".
I'm getting 68MB/s when transferring large files from array0 to array1. Is this normal? Or do I need to do something to change the Current Write Policy to Write Back?
TIA

post #5147 of 7150
Quote:
Originally Posted by sidewinder69 View Post
Hey guys, got a question. I got four new 2TB Hitachi drives and set them up in RAID 5. Currently the array properties show "Current Write Policy: Write Through" and "Default Write Policy: Write Back".
I'm getting 68MB/s when transferring large files from array0 to array1. Is this normal? Or do I need to do something to change the Current Write Policy to Write Back?
TIA

If your current write policy is write-through while the default is write-back, it means the controller has detected a condition where it thinks it is not safe to use write-back and has switched the write cache policy. Often this means an undercharged or faulty battery.

The performance level you mention seems a bit slow, but I don't know exactly what you should be expecting. Typically, you take the performance number of a single drive, call it P, and multiply it by N-1 drives; in your case, (4-1) x P = 3 x P. That number is a rough estimate of your expected performance for RAID-5 with 3~8 drives. Beyond 8 drives, the performance curve starts looking a bit logarithmic. This estimate is only meaningful for large sequential writes where the overhead factor is close to 1.
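The rule of thumb above can be sketched as a quick calculation (a hedged sketch; the single-drive throughput and drive counts are example values, not measurements from this thread):

```python
def raid5_write_estimate(single_drive_mbps: float, n_drives: int) -> float:
    """Rough large-sequential-write estimate for RAID-5: (N-1) x single-drive speed.

    Only a ballpark for 3-8 drives with write-back caching and an
    overhead factor close to 1, as described above.
    """
    if n_drives < 3:
        raise ValueError("RAID-5 needs at least 3 drives")
    return (n_drives - 1) * single_drive_mbps

# Example: four drives that each sustain ~100 MB/s of sequential writes
print(raid5_write_estimate(100, 4))  # -> 300
```

By that yardstick, 68MB/s out of a four-drive array would indeed be well below the ballpark, consistent with the cache running in write-through mode.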
post #5148 of 7150
Quote:
Originally Posted by FatBoyNotSoSlim View Post
I shut down, pulled the bad drive, booted up, then left it in the BIOS to rebuild overnight.

Come this morning, it's rebuilt, I boot into Windows, and it doesn't detect the array. To Windows, it's just one massive empty filesystem.
So, you made a hot spare, removed the problematic drive, and the array rebuilt itself via the BIOS. From what you are saying, nothing should have happened to your data.

I am betting something else happened here, though. Is there anything else you can remember happening? Did you rebuild the array, or reconfigure it in any way? Did it rebuild automatically, or did you have to input any settings in the BIOS?

If you had an NTFS filesystem before all of this happened, and now you have a RAW disk... it sounds like you might have initialized the array by mistake.

You could run chkdsk /r on whatever drive letter shows up to see if the filesystem just got corrupted... but other than that, if the disk shows as not initialized in Windows, it looks like you are out of luck. RAID recovery is not impossible, but it is less likely to work, especially if you previously had a degraded array.
    
post #5149 of 7150
I am new to RAID and was wondering if someone could recommend settings for my PERC 6i controller for a RAID 6 array initially consisting of 4x 2TB WD20EADS drives? The array will be used for storage of 90% HD videos and 10% SD video and music. I do plan to expand the array in the near future to the full 8 drives as supported by the 6i.

One other question.
If I have a drive drop out of the array, and while rebuilding the array I encounter a read error (URE, I think, is the term), do I have the option to "ignore or skip" the read error (losing the file or data) and continue with the rebuild, or will the rebuild fail entirely?
Edited by Tanglefoot69 - 4/16/11 at 5:24am
post #5150 of 7150
Quote:
Originally Posted by Tanglefoot69 View Post
I am new to RAID and was wondering if someone could recommend settings for my PERC 6i controller for a RAID 6 array initially consisting of 4x 2TB WD20EADS drives? The array will be used for storage of 90% HD videos and 10% SD video and music. I do plan to expand the array in the near future to the full 8 drives as supported by the 6i.

One other question.
If I have a drive drop out of the array, and while rebuilding the array I encounter a read error (URE, I think, is the term), do I have the option to "ignore or skip" the read error (losing the file or data) and continue with the rebuild, or will the rebuild fail entirely?
Your RAID-6 with 4x 2TB drives will work fine, but don't expect stellar performance. You will only have two effective data drives plus two parity drives, so your performance will be *at most* the equivalent of two drives, and in reality probably a little less due to overhead.
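To make the capacity side of that arithmetic concrete, here is a quick sketch (example numbers only, assuming RAID-6's two-drive parity overhead):

```python
def raid6_usable_tb(n_drives: int, drive_tb: float) -> float:
    """RAID-6 dedicates two drives' worth of capacity to parity."""
    if n_drives < 4:
        raise ValueError("RAID-6 needs at least 4 drives")
    return (n_drives - 2) * drive_tb

# 4x 2TB: only two drives' worth of usable space
print(raid6_usable_tb(4, 2.0))  # -> 4.0
# At the full 8 drives the PERC 6/i supports, usable space triples
print(raid6_usable_tb(8, 2.0))  # -> 12.0
```

This is also why a small RAID-6 is a poor value: at four drives, half your raw capacity goes to parity.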

If you are planning to expand in the near future, I would honestly recommend that you simply set up the RAID-6 with the largest set of disks possible up front. Although it is possible to expand a RAID-6, it isn't always a reliable process, and I recommend that you back up all your data before performing such an operation. Whatever you do, understand that RAID redundancy is not a replacement for backups; consider it just an "early warning" system before you lose your data.

As for the URE: no, you do NOT have an option to ignore the error. However, that doesn't mean the drive will be marked as failed immediately upon encountering a URE. Remember, you have parity, so if the controller can reconstruct the missing data, it will do so. If you perform a consistency check, the URE sector will typically get reallocated when possible: the controller tries to reconstruct the data, then rewrites it back to the drive, at which point, if the sector is bad, the drive remaps it to a reserve sector. However, not all bad sectors are that obvious; some sectors are writable but not always readable, and those are the most problematic because they don't get reallocated but still cause problems in RAID-5/6 arrays.

Typically, what causes a drive to fall out of the array is becoming unresponsive for a long period of time. This can happen if your software (the file system driver, or an application accessing a file) keeps trying to re-read a URE, which fills up the drive's queue and keeps it busy; sometimes this reaches the point where the drive doesn't respond for a minute or longer. If the drive reaches such a condition, the controller will assume the drive is dead, mark it failed, and take it out of the array.

When I'm performing recovery of a failed RAID-5/6 array, I usually use a tool that avoids re-reading URE areas and simply moves on to the next file upon a read failure; this lets me keep the RAID-5/6 array up and running while I recover as much data as possible. Then you can work on recovering the files that sit on the UREs.
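The "skip UREs and move on" approach described above can be sketched roughly like this. This is a hypothetical illustration, not any particular recovery tool; `read_block` is an injected reader so a read failure can be simulated, and real tools work at the device level with retry passes:

```python
def salvage(read_block, n_blocks, block_size=4096):
    """Copy what is readable; substitute zeros for blocks that raise IOError.

    Returns (data, bad_blocks) so the files sitting on unreadable
    regions can be dealt with separately, as described above.
    Crucially, a failed block is NOT retried: repeated re-reads of a
    URE are exactly what can stall a drive until the controller
    kicks it from the array.
    """
    data = bytearray()
    bad = []
    for i in range(n_blocks):
        try:
            data += read_block(i, block_size)
        except IOError:
            data += b"\x00" * block_size  # placeholder for the unreadable block
            bad.append(i)
    return bytes(data), bad

# Simulated device where block 1 is unreadable
def fake_reader(i, size):
    if i == 1:
        raise IOError("URE")
    return bytes([i + 1]) * size

data, bad = salvage(fake_reader, 3, block_size=4)
print(bad)  # -> [1]
```

The list of bad blocks is the useful by-product: it tells you which files overlap unreadable regions and need a second recovery pass.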