OK, I seem to be back in business.
Here's what happened...
The md superblock didn't actually disappear... because it was never created in the first place. When I created the array, I used the --build option rather than --create. Unlike --create, --build does not write a RAID superblock to the drives, so of course once the machine is restarted the array cannot be auto-detected and therefore will not start on boot.
Additionally, I had placed two LVM volumes on this array, and as (what seems to be) a side effect of the missing RAID superblock, the LVM metadata is visible on both /dev/md0 (the array) and /dev/sda1 (the first drive in the array). This in turn creates duplicate UUIDs when scanning for PVs and VGs, which means the LVM volumes cannot be brought up.
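In case it's useful to anyone hitting the same thing, this is roughly how the duplication shows up (assuming the same device names as above, /dev/md0 and /dev/sda1):
# Both the array and its first member carry the same PV signature,
# so LVM reports duplicate PV UUIDs.
sudo blkid /dev/md0 /dev/sda1
sudo pvs -o pv_name,vg_name,pv_uuid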
Deactivate all LVs and VGs:
lvchange -an <lv name>
vgchange -an <vg name>
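If the LV and VG names aren't known offhand, lvs and vgs will list them; deactivating a VG also takes down every LV inside it:
sudo lvs   # list logical volumes and the VGs they belong to
sudo vgs   # list volume groups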
Bring up the array, but DO NOT destroy it or write a superblock to it (note the use of --build and --assume-clean):
sudo mdadm --build /dev/md0 --assume-clean --level=0 --raid-devices=2 --chunk=256 /dev/sda1 /dev/sdb1
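A quick sanity check after the build doesn't hurt:
cat /proc/mdstat               # should show md0 as an active raid0
sudo mdadm --detail /dev/md0   # confirm level, chunk size and member devices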
Add a filter to /etc/lvm/lvm.conf to reject all /dev/sd* devices but accept /dev/md* devices:
filter = [ "a|/dev/md*|", "r|/dev/sd*|" ]
Bring the VGs and LVs back up:
lvchange -ay <lv name>
vgchange -ay <vg name>
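With everything active again, the filesystems mount as usual (same placeholders as above):
sudo lvs -o lv_name,vg_name,lv_attr       # an 'a' in the attr column means active
sudo mount /dev/<vg name>/<lv name> /mnt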
I'm now copying the data to an external drive. I may re-create the array properly this time around (--create, not --build), or I may wait, pick up a couple more units, and go RAID 5. Either way, I've learned something.
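For reference, the create version of the same array (the one that actually writes superblocks) would look something like this; it's destructive, so only after the data is safely copied off:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256 /dev/sda1 /dev/sdb1
sudo mdadm --detail --scan   # append the output to mdadm.conf so the array assembles on boot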
I would like to thank the bash gods for having such a long command line history. It's what made me realise what I'd done wrong when creating the array.