
On Sat, Feb 11, 2012 at 12:34:30PM +1100, Chris Samuel wrote:
On Saturday 11 February 2012 09:29:38 Craig Sanders wrote:
another annoying cause of disks being kicked from mdadm (and zfs and presumably btrfs arrays too) is disk read timeouts due to the drive sleeping.
Not just sleeping: "consumer" drives apparently try much harder to recover from dodgy sectors, up to 2 minutes for some drives so I'm told. "Enterprise" drives give up much quicker (the feature is called TLER, or ERC), on the assumption that they're in a RAID array and the controller can rebuild the data from the other members.
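As an aside, on drives that expose it you can check and adjust that recovery timeout yourself via SCT Error Recovery Control with smartmontools. A sketch (assuming a recent smartctl and a drive that supports SCT ERC; /dev/sdX is a placeholder):

```shell
# Show the drive's current SCT ERC (TLER) settings, if supported at all
smartctl -l scterc /dev/sdX

# Cap read and write error recovery at 7 seconds each
# (values are in deciseconds, so 70 = 7.0s), a common choice for RAID members
smartctl -l scterc,70,70 /dev/sdX
```

Note that on most drives this setting doesn't survive a power cycle, so it has to be re-applied at every boot (e.g. from a udev rule or an init script).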
yep, that too. sleeping just makes it really obvious and frequent :(
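For the sleeping case specifically, the usual workarounds are to stop the drive spinning down at all, or to give the kernel enough patience to ride out a slow wake-up. A sketch with hdparm and sysfs (/dev/sdX and sdX are placeholders; the 180s value is just an example comfortably above a worst-case recovery):

```shell
# Disable the drive's standby (spindown) timer so it never sleeps mid-array
hdparm -S 0 /dev/sdX

# Alternatively, raise the kernel's SCSI command timeout for that disk
# (default is 30 seconds) above the drive's worst-case response time
echo 180 > /sys/block/sdX/device/timeout
```

Like the SCT ERC setting, the sysfs timeout is not persistent and needs re-applying at boot.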
Not surprisingly, RAID code tends not to be very tolerant of disks that take that long to respond.
hence the reason for re-flashing the LSI 9211-8i and similar cards to "IT" mode: it turns the card into a plain dumb HBA, without the enterprise-level TLER assumptions that are usually still active in a RAID card's "JBOD" mode. HBA mode is perfect for mdadm, btrfs, zfs, and other software raid/raid-like setups when using consumer-grade drives.

craig

--
craig sanders <cas@taz.net.au>
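[For anyone wanting to try the re-flash: a rough sketch using LSI's sas2flash utility. The firmware image names below are examples from LSI's 9211-8i "IT" firmware package, and the exact sequence varies between firmware releases, so check LSI/Broadcom's own instructions before running any of this. Erasing the flash is destructive if interrupted.]

```shell
# List attached LSI SAS2 controllers and which firmware (IR/RAID vs IT/HBA)
# each one currently runs
sas2flash -listall

# Typical re-flash sequence -- DANGEROUS, wipes the controller's flash:
sas2flash -o -e 6                            # erase the existing (IR) firmware
sas2flash -o -f 2118it.bin -b mptsas2.rom    # flash IT firmware plus boot BIOS
```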