
On Tue, 21 May 2013, "Trent W. Buck" <trentbuck@gmail.com> wrote:
Am I right in thinking they become slow/erratic/unusable because of the extra time spent seeking back and forth between the original track and the spare track -- or just repeatedly trying to read a not-quite-dead sector?
If you look at the contiguous IO performance of a brand new disk (which presumably has few remapped sectors) you will see a lot of variance in read times. The variance is so great that the occasional extra seek for a remapped sector is probably lost in the noise. Also I'd hope that the manufacturers do smart things about remapping, for example reserving spare tracks at various parts of the disk instead of a single area, which would mean long seeks to reach remapped sectors.
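
If you want to see that variance for yourself, here is a rough sketch (assuming Python 3 on Linux, root access, and a block device at /dev/sdb -- the device path is an assumption; O_DIRECT is used so the page cache doesn't hide the per-region differences):

#!/usr/bin/env python3
# Rough sketch: time a 64 MiB read at evenly spaced offsets across a disk
# to see how much sequential throughput varies from region to region.
import os, mmap, time

DEV = "/dev/sdb"            # assumed device under test
CHUNK = 64 * 1024 * 1024    # bytes per timed read
SAMPLES = 32                # number of regions to sample

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)   # bypass the page cache
size = os.lseek(fd, 0, os.SEEK_END)
buf = mmap.mmap(-1, CHUNK)                     # page-aligned buffer for O_DIRECT

for i in range(SAMPLES):
    offset = (size // SAMPLES) * i
    offset -= offset % 4096                    # keep offsets block-aligned
    os.lseek(fd, offset, os.SEEK_SET)
    t0 = time.monotonic()
    n = os.readv(fd, [buf])
    dt = time.monotonic() - t0
    print(f"offset {offset // 2**30:5d} GiB: {n / dt / 1e6:8.1f} MB/s")

os.close(fd)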
AIUI the justification for "enterprise" drives is that they're basically the same as normal drives, except their firmware gives up on an unreadable sector much faster (the feature usually called TLER or SCT ERC). If they're in an array, that means mdadm can just get on with reading the sector from one of the other disks, reducing the overall latency.
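
As a back-of-the-envelope illustration of that latency difference (the 7 second and 2 minute timeouts below are assumptions -- roughly a typical SCT ERC setting versus desktop firmware that keeps retrying -- not measurements of any particular drive):

# Worst-case stall for a read that hits an unreadable sector on one member
# of an array.  All timings are assumed/illustrative.
ENTERPRISE_GIVEUP_S = 7.0    # drive returns a read error after ~7 s (typical ERC setting)
DESKTOP_GIVEUP_S = 120.0     # drive keeps retrying internally for minutes
RECONSTRUCT_S = 0.02         # assumed time to read the data from the other members

for name, giveup in (("enterprise", ENTERPRISE_GIVEUP_S), ("desktop", DESKTOP_GIVEUP_S)):
    # The array can only fall back to the other disks once the slow drive
    # reports the error (or the kernel times the command out and resets it).
    print(f"{name:10s}: request can stall for ~{giveup + RECONSTRUCT_S:.1f} s")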
One issue is the level of service that the users expect. If users are happy to accept some loss of performance when a disk is dying then there's less of a downside to "desktop" drives. "Desktop" drives, as well as being cheaper, also tend to be a lot bigger. Both the capacity and the price make it feasible to use greater levels of redundancy. For example a RAID-Z3 array of "desktop" disks is likely to give greater capacity and lower price than a RAID-5 array of "enterprise" disks.
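
To put rough numbers on that last point, a sketch of the capacity arithmetic (the disk counts and sizes are illustrative assumptions, not a price survey):

# Usable capacity = (number of disks - parity disks) * per-disk size.
def usable_tb(disks: int, size_tb: float, parity: int) -> float:
    return (disks - parity) * size_tb

# e.g. eight big "desktop" disks with triple parity ...
print(f"RAID-Z3, 8 x 4 TB desktop:    {usable_tb(8, 4, 3):.0f} TB usable, survives 3 failures")
# ... versus eight smaller "enterprise" disks with single parity.
print(f"RAID-5,  8 x 2 TB enterprise: {usable_tb(8, 2, 1):.0f} TB usable, survives 1 failure")

--
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/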