
On Fri, Apr 12, 2013 at 03:31:20PM +1000, Kevin wrote:
On Fri, Apr 12, 2013 at 3:17 PM, James Harper <james.harper@bendigoit.com.au> wrote:
This is where a lot of people get this wrong. Once the BIOS has succeeded in reading the boot sector from a boot disk, it's committed. If the boot sector reads okay (even after a long time on a failing disk) but anything between the boot sector and the OS fails, your boot has failed. This 'anything between' includes the GRUB bootstrap, Xen hypervisor, Linux kernel, and initramfs, so it's a substantial amount of data to read from a disk that may be on its last legs. A good hardware RAID will have long since failed the disk by this point and booting will succeed.
I think we're talking about different things here. If you can tell the BIOS "don't boot from sda, boot from sdb instead", then it really doesn't matter how messed up sda is; the system's not going to use it, it's going to boot from sdb like you told it to.
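[For that fallback to work at all, sdb has to carry its own copy of the bootloader. A minimal sketch of how one might set that up, assuming a two-disk md RAID1 holding /boot on /dev/sda and /dev/sdb and a GRUB-based install (device names and setup are assumptions for illustration, not anyone's actual config):

    #!/usr/bin/env python3
    # Hypothetical helper: install GRUB's boot code on every member of the
    # RAID1 so the BIOS can fall back to the second disk if the first dies.
    import subprocess

    RAID1_MEMBERS = ["/dev/sda", "/dev/sdb"]  # assumed member disks

    for disk in RAID1_MEMBERS:
        # grub-install writes the MBR boot code and core image to the disk
        subprocess.run(["grub-install", disk], check=True)

Without something like this run on both disks, pointing the BIOS at sdb just gets you a disk with no bootloader.]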
My original argument in favour of hardware RAID was good BIOS boot support (implying that it still works seamlessly even when /dev/sda is partly dead). You then contested that you could change the BIOS boot order manually, and that the BIOS could try sda, then sdb, etc. Changing the BIOS boot order manually is a kludge that you don't have to perform with hardware RAID, and my rant above was addressing the reasons why having the BIOS try sda then sdb etc. isn't really solving the problem in some cases. If I'm using hardware RAID it's one less thing I have to worry about when doing a remote reboot. A good fakeraid implementation would also address this (and coreboot with Linux md support would too!)

James
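[One way to claw back some of that peace of mind with Linux md before a remote reboot is to check array health first. A rough sketch, assuming the usual /proc/mdstat format where "_" in the [UU]-style status marks a missing member (the parsing and exit behaviour here are illustrative assumptions, not a polished tool):

    #!/usr/bin/env python3
    # Hypothetical pre-reboot check: refuse to proceed if any md array
    # is running degraded.
    import re
    import sys

    def degraded_arrays(path="/proc/mdstat"):
        text = open(path).read()
        bad = []
        # each stanza starts with e.g. "md0 : active raid1 sdb1[1] sda1[0]"
        for name, body in re.findall(r"^(md\d+) : (.*?)(?=^md\d+ :|\Z)",
                                     text, re.M | re.S):
            status = re.search(r"\[([U_]+)\]", body)
            if status and "_" in status.group(1):
                bad.append(name)
        return bad

    if __name__ == "__main__":
        bad = degraded_arrays()
        if bad:
            sys.exit("degraded md arrays: %s - not rebooting" % ", ".join(bad))
        print("all md arrays healthy, safe to reboot")

That doesn't help with the half-dead-disk-during-boot case James describes, but it does catch the "about to reboot onto a degraded array" case before you're locked out.]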