
Robin Humble writes:
it's best to think of disks as analogue devices pretending to be digital. often they can't read a marginal sector one day and then it's fine again the next day. some sectors come and go like this indefinitely, while others are bad enough that they're remapped and you never have an issue with them again. if the disk as a whole is bad enough then you run out of spare sectors to do remapping with, and the disk is dead. in my experience disks usually become unusable (slow, erratic, hangs drivers etc.) before they run out of spare sectors.
with today's disk capacities this is just what you have to expect, and software needs to be able to deal with it.
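
FWIW, the remapping is visible in the SMART counters, so you can watch a marginal disk deteriorate. A rough sketch of the sort of thing I mean, assuming smartmontools is installed and noting that the attribute names vary a bit between vendors (the device paths are just examples):

#!/usr/bin/env python3
# Rough sketch: print the SMART attributes that track sector remapping.
# Assumes smartmontools is installed; attribute names vary by vendor.
import subprocess
import sys

ATTRS = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

def remap_counters(dev):
    """Return the raw values of the remapping-related attributes for dev."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    counters = {}
    for line in out.splitlines():
        fields = line.split()
        # attribute lines look like:
        # ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[1] in ATTRS:
            counters[fields[1]] = fields[9]
    return counters

if __name__ == "__main__":
    for dev in sys.argv[1:] or ["/dev/sda"]:
        print(dev, remap_counters(dev))

A pending-sector count that comes and goes matches the "marginal one day, fine the next" behaviour above; a steadily climbing reallocated count is the disk eating into its spares.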
Am I right in thinking they become slow/erratic/unusable because of the extra time spent seeking back and forth between the original track and the spare track -- or just from repeatedly trying to read a not-quite-dead sector?
AIUI the justification for "enterprise" drives is that they're basically the same as normal drives, except that their firmware gives up on an unreadable sector much faster. If they're in an array, that means mdadm can just get on with reading the sector from one of the other disks, reducing the overall latency.
Not that I've ever seen that myself -- I can't justify paying an order of magnitude more for what ought to be a simple sdparm tweak :-/
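
In case it's useful, the usual knob for this on SATA drives seems to be SCT ERC (via smartctl) rather than anything sdparm exposes; something like the sketch below caps error recovery at 7 seconds on each member, assuming the firmware supports it at all -- plenty of desktop drives just report it as unsupported. The device list is only an example.

#!/usr/bin/env python3
# Rough sketch: put a 7 second cap on error recovery (SCT ERC) on each md
# array member, so a marginal sector fails fast and md can reconstruct the
# data from the other disks instead of waiting out the drive's retries.
# Assumes smartmontools is installed, you're root, and the drives actually
# support SCT ERC.
import subprocess

MEMBERS = ["/dev/sda", "/dev/sdb", "/dev/sdc"]   # example only -- adjust to suit
TIMEOUT = "70"   # smartctl takes units of 100 ms, so 70 == 7.0 seconds

for dev in MEMBERS:
    # set read and write recovery timeouts, then show what the firmware accepted
    subprocess.run(["smartctl", "-l", "scterc,%s,%s" % (TIMEOUT, TIMEOUT), dev])
    subprocess.run(["smartctl", "-l", "scterc", dev])

(On a lot of drives the setting doesn't survive a power cycle, so it wants to go in a boot script.)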
It's more complicated than that. Enterprise drives are less likely to move the heads out of the way to reduce drag and shave a tiny bit off power consumption, and the "green" drives are also more inclined to automatically spin down when idle. Both of those behaviours typically increase wear and tear when the drives are used in an enterprise environment. But you're right in that it's probably mostly just a firmware difference... I wonder if anyone has ever attempted to force an enterprise firmware onto a "green" drive...

James
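
PS: if you want to poke at that power-saving behaviour yourself, hdparm exposes the APM level and the standby timer. A rough sketch, assuming hdparm is installed and the drive honours those settings (some "green" drives have their own head-parking idle timer that needs vendor tools like idle3-tools instead); the device list is just an example:

#!/usr/bin/env python3
# Rough sketch: turn off the aggressive power management behind the head
# parking / spin-down behaviour discussed above.  Assumes hdparm is
# installed, you're root, and the drives honour APM (-B) and the standby
# timer (-S).
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb"]   # example only -- adjust to suit

for dev in DRIVES:
    subprocess.run(["hdparm", "-B", "255", dev])   # 255 disables APM entirely (254 = max performance with APM on)
    subprocess.run(["hdparm", "-S", "0", dev])     # 0 disables the standby (spin-down) timer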