
> On Fri, Apr 12, 2013 at 01:28:01AM +0000, James Harper wrote:
> > And just to recap: rebooting a RAID composed of /dev/sd[abcd]3 while /dev/sda3 is being rebuilt results in a boot that drops into the initramfs shell, because it appears that mdadm tries to add /dev/sda3 first and then rejects the other three disks because they say /dev/sda3 is inconsistent (which it is).
> Maybe try swapping sda and sdb so that sda3 is good and sdb3 is inconsistent.
> This may allow the system to boot properly and the RAID resync to proceed.
I can get into the initramfs shell remotely (now that I have IPMI working properly, even if it doesn't work in grub) and sort it out so it boots; I'm just bothered that any intervention is required at all.
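For the record, what gets it booting again from the initramfs shell is roughly the following (assuming the array is md0 and sda3 is the stale member; the names are just what this box happens to use):

    # stop the partially-assembled array, then assemble and start it from the good members
    mdadm --stop /dev/md0
    mdadm --assemble --run /dev/md0 /dev/sdb3 /dev/sdc3 /dev/sdd3
    # re-add the stale member so the rebuild carries on once the system is up
    mdadm --add /dev/md0 /dev/sda3
    exit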
> BTW don't forget to make sure your system can boot with the drives swapped around. This may mean installing grub into the MBR on all drives, or just changing the BIOS setting (or using the BIOS boot menu) to boot off the drive that used to be sda but is now sdb.
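Putting grub on every drive is easy enough; from the running system, something like this (device names assumed for this box) writes the boot code into each MBR:

    # install the grub boot code into the MBR of every member disk
    for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do grub-install "$d"; done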
Yep, I have tested booting off sd[a-d] and they all work (current RAID situation excepted). What I can't tell, though, is whether the BIOS is reading the boot sector from (say) sdd while grub is then just reading from sda. I could pull sda, but then the disks all shuffle down a letter and I still can't tell which one is being used for boot. Also, when I put in a brand new sda disk with no boot sector, the BIOS just says it can't boot and doesn't proceed to sdb, which is next in the boot sequence; that's a bit frustrating. This is where fakeraid wins over Linux md.

James