My setup (relevant info only):
ICH10R RAID controller (onboard)
1x 320GB HD as boot (Windows 7 x64)
4x 2TB HD Caviar Green on RAID5
-> with 4 primary partitions on it, one of them 4TB in size
Windows started to give me BSODs. After a couple of attempts to fix it, I decided to restore the default BIOS configuration (my mistake).
One of the BIOS settings is the SATA mode for your HDs (IDE, AHCI, or RAID, I think). The default value for this setting on my motherboard is not RAID, so when I loaded the default configuration and booted (Windows is on a spare HD), two out of four RAID HDs showed as "Non-member". After the initial panic passed, I started to Google and found this thread.
I read all the pages 2 or 3 times to figure out how to proceed correctly, as there were both successful and, unfortunately, unsuccessful cases.
So I did as mentioned by Zero4549 in post #65, except that I did not uninstall the Intel Rapid Storage app; I just disabled its service under the Windows Services configuration (press the Start menu, type "services" and you will find it).
After marking the two remaining RAID members as "Non-member" from the Ctrl+I boot option (making all 4 drives Non-member), I created another RAID 5 with exactly the same configuration as before (same name (easy to remember), same stripe size (the default: 64KB), and the same drive order (top-down, as far as I remember)) and booted.
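Why the re-created array must match the original exactly: in RAID 5 the stripe (chunk) size and the member order fully determine which physical sector each logical sector lands on. Here is a minimal sketch of that mapping; the parity rotation shown is the common "left-asymmetric" layout, used purely for illustration — the real layout is whatever Intel's metadata records.

```python
CHUNK = 64 * 1024  # 64 KiB, the ICH10R default mentioned above

def locate(logical_byte, n_disks, chunk=CHUNK):
    """Map a logical byte offset to (member_disk, byte_offset_on_disk)."""
    stripe_unit = logical_byte // chunk              # which chunk of user data
    row = stripe_unit // (n_disks - 1)               # stripe row across the set
    parity_disk = (n_disks - 1) - (row % n_disks)    # parity rotates per row
    slot = stripe_unit % (n_disks - 1)               # data slot within the row
    disk = slot if slot < parity_disk else slot + 1  # skip the parity member
    return disk, row * chunk + logical_byte % chunk

# Same byte, different chunk size -> a different disk and offset entirely:
locate(64 * 1024, 4)                    # (1, 0) with 64 KiB chunks
locate(64 * 1024, 4, chunk=128 * 1024)  # (0, 65536) with 128 KiB chunks
```

This is why a re-creation with the wrong stripe size or drive order shows only garbage: every logical block is read from the wrong physical location until the parameters match the originals again.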
I ran TestDisk following their guide here: http://www.cgsecurity.org/wiki/TestDisk_Step_By_Step
Everything was fine even with Quick Search, which showed all my files (with the P option).
5-10 minutes later, after writing the partition table in TestDisk and rebooting, my whole system was back to normal with no data loss.
The only abnormal thing was that the drive letters had changed, but you can reassign them as you like.
Many thanks to Ictinike for the original post, Zero4549 for his contribution, and to everyone who enriched this thread!
My setup is a bit more convoluted than others posted here, and I don't seem to be having any luck with the recovery...
Primary OS drive is a 128 GB SSD.
Storage array is IRST RAID 5 with 3x 2 TB disks.
My OS is on the C: drive, but my Program Data and User Data are stored on the RAID array, so I cannot successfully boot into Windows to run TestDisk. So I downloaded the BootMed CD and have been trying to recover the RAID from there.
I have run TestDisk in BootMed and it is able to see the partitions and files, and everything seems to be OK. But after selecting Write and rebooting, I am unable to get Windows to recognize what should be the restored RAID array. In addition, when I reboot into BootMed after the write, the drives show as 3 unallocated units.
Here is my drive selection for partition recovery:
Disk /dev/sda - 2000 GB / 1863 GiB - ATA ST2000DM001-9YN1
Disk /dev/sdb - 2000 GB / 1863 GiB - ATA ST2000DM001-9YN1
Disk /dev/sdc - 2000 GB / 1863 GiB - ATA ST2000DM001-9YN1
Disk /dev/sdd - 128 GB / 119 GiB - ATA M4-CT128M4SSD2
Disk /dev/mapper/isw_fiefdgfhf_data - 4000 GB / 3726 GiB
Disk /dev/dm-0 - 4000 GB / 3726 GiB
I am able to see the RAID structure using both:
Disk /dev/mapper/isw_fiefdgfhf_data - 4000 GB / 3726 GiB
Disk /dev/dm-0 - 4000 GB / 3726 GiB
So my question is, where should I be scanning/writing partition info?
Registered just to thank you! It took 5 hours out of my life, very significant ones, but this saved me.
Fortunately I had 4 spare drives, so I was able to do a test run before going through this process on the real array. The stupid IRST tells you all data will be destroyed, when in fact it doesn't touch the data at all. That scared me, hence the test. The test recreated the error; I reassembled the array, ran the program, and rebooted. It worked. Then I did it on the real data, and it worked too.
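For anyone wondering why the "all data will be destroyed" warning is misleading here: Intel's ISW/IMSM metadata sits in a small reserved area near the end of each member disk, and re-creating a volume with identical parameters rewrites only that area, leaving the user sectors alone. A conceptual toy model of this (not the real on-disk format):

```python
# Toy model: a disk is just a list of sectors, with RAID metadata at the end.
DISK_SECTORS = 1000
META_SECTORS = 2  # reserved area, tiny compared to the disk

def create_volume(disk, signature):
    """Re-create the array: overwrite only the trailing metadata area."""
    disk[-META_SECTORS:] = [signature] * META_SECTORS

disk = ["user-data"] * DISK_SECTORS
create_volume(disk, "new-array-metadata")

# Every user sector is intact; only the metadata area changed.
assert all(s == "user-data" for s in disk[:-META_SECTORS])
```

The catch, as the thread shows, is that the freshly written metadata describes an array with no recognized partition table, which is what TestDisk then rebuilds.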
Thank you! I'm now going to move off the RAID0+1 that failed, and onto non-RAID disks using FlexRAID.
My problem: I updated my BIOS, and as a result my RAID 5 of 5x 2 TB disks was destroyed (3 non-member disks).
OS is on a different disk.
I followed the instructions here, i.e., reset all drives and did a new Intel RAID setup with my previous settings.
When I start TestDisk I see my old array, and I write the volume data.
Now here comes the problem: when I reboot, my system fails to load the BIOS!
I have a UEFI BIOS on my Asus P8H67-I.
Prior to loading the BIOS, my RAID is detected: status Normal.
(If I do not write any data for the RAID recovery, the system boots properly (status Initializing; no Intel RST installed at that time).)
So what am I doing wrong?
I tried hot-plugging: I loaded Win7 first, then connected the SATA cables, but couldn't access my data (RAID failure in Intel RST).
Without hot-plugging, I was not able to start Windows with the partitions "recovered" using TestDisk.
Any idea what I'm doing wrong!?
Mainboard: P8H67-I (UEFI BIOS)
OS on a 128 GB SSD
Intel RAID 5: 5x 2 TB
More than a week later, I'm almost done. I was able to recover all my files using a recovery tool, but I had to extract all the data and save it to different drives.
Afterwards I had to initialize my RAID again and copy all my data (~3 TB) back from a lot of old 250 GB drives, to which I had saved my stuff before the re-initialization of my RAID.
So to those who also fail to recover the RAID: your data is still there, but you need to save it to different drives before you build a new RAID.
A failed Intel RAID is very easy to sort out. Stick in a spare hard drive, install Windows on it, boot into it, and install the Intel software. Use the Intel software on the spare hard drive to reset the status of the Intel RAID array to Normal, reboot, and it should work perfectly fine again.
I've had my Intel RAID arrays do this to me multiple times; I always do this, and it always fixes it (for me at least). I usually keep a spare hard drive around with Windows installed on it, with drivers loaded for my i7 and the Intel software, in case it happens again. I've recently moved away from the horrible Intel RAID solution to a hardware RAID card (finally), but still, this is how I fixed it when I had Intel RAID in the past. Edited by kithylin - 12/22/12 at 3:34pm
[*] Intel ICH10R controller.
[*] 2x 1 TB HDDs, set up as one 2 TB RAID0 array. (BIOS name and OS label: "SYSTEM")
[*] 2x 2 TB HDDs, set up as one 4 TB RAID0 array. (BIOS name and OS label: "STORAGE")
"SYSTEM" is bootable and contains Windows, now Win8 Pro x64, but when the problem originally occurred, it contained Win7 Ultimate x64. The whole array is partitioned as a single "basic" MBR partition, "Drive C:".
"STORAGE" is not bootable, and, again, is partitioned as a single "basic" partition (except that it is GPT), "Drive D:".
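As an aside, striping is exactly why one disk dropping to "non-member" takes a whole RAID0 array with it: consecutive chunks alternate across the members, so anything larger than one chunk has pieces on every disk. A toy sketch (the chunk size is assumed for the example, not read from real metadata):

```python
CHUNK = 128 * 1024  # example stripe size, an assumption for this sketch

def raid0_member(logical_byte, n_disks, chunk=CHUNK):
    """Which member disk holds this logical byte in a RAID0 stripe set."""
    return (logical_byte // chunk) % n_disks

# A 1 MiB file starting at offset 0 on a 2-disk array touches both members:
members = {raid0_member(off, 2) for off in range(0, 1024 * 1024, CHUNK)}
# members == {0, 1} -- lose either disk and the file is gone.
```

That trade-off (full capacity and speed, zero redundancy) is why keeping everything important on a separate volume, as described above, limits the damage.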
When I first had this problem about 6 months ago, I had no idea what to do, so I just re-created the array and re-installed Windows, etc, etc... Data loss was negligible, as I store everything of any importance on "Drive D:".
The RAID error was then, and still is, with the "SYSTEM" array; the "STORAGE" array has never given me any problems.
Some weeks later, the same thing happened again, so I decided to try something else.
This is what I did:
NOTE:"Power down" means shut machine down, and then switch off at the power supply, or the mains outlet.
Power machine down.
Disconnect "non-member" disks.
Power machine back up.
Check that BIOS sees new config.
Power machine down again.
Re-connect "non-member" disks.
Power machine back up again.
Check that BIOS sees the disks.
Watch RAID info and check whether or not error has gone away, (i.e., the RAID BIOS again sees the array as it was originally set up).
In my case, this has fixed it every time, and no data has been lost.
In my case, the two offending disks are on SATA #0 and #1, and the cables are marked accordingly, so that I won't connect them in the wrong order.
I have no way of determining whether the problem is with the disks themselves, the ICH10R controller, or one (or both) of the SATA channels on the mainboard. It's too hard to use a process of elimination, (especially when, as in my case, the fault is intermittent), because I don't have a second MB with the same controller available.
One thing I have noted, though, (please bear with me, because this is a little complicated):
I have encountered this problem (I am almost certain of this) only when rebooting the machine after having booted from a Linux live CD.
When I shut down from the Linux CD, I mostly get the error if I let the machine restart without powering down completely first (i.e., if I tell Linux to "Restart" rather than "Shutdown").
There are, unfortunately, quite a few Windows programs, mainly AV or backup-type programs, that use a Linux live CD as rescue disks.
This is actually quite a problem for me, because, for some unknown reason, Linux cannot see my RAID0 arrays at all, only the individual disks! I have searched all over the net for help on this, but to no avail!
The only rescue disks I can use are Windows-based, and a warm reboot from a Windows-based disk never gives me any problem at all.
If anyone here could give me any ideas as to what I can do to fix this, I would be very grateful!
Anyway, I hope that the foregoing procedure may be of help to anyone else experiencing this problem.
Best regards to you all!
(Sydney, Australia) Edited by Chris4877 - 1/6/13 at 11:08pm
Hey all, this guide has been great. Thanks so much! I do have one problem with TestDisk: after a reboot, the partition table doesn't seem to be fixed. It finds my partition right away and doesn't give me any errors when writing the table... Anyone else run into this problem?