I have just had two drives fail in a server today. One is mostly part of a RAID0 set
(which is in turn part of a DRBD device, so we're still good), plus a small partition that is
part of a RAID1, which hasn't failed (the errors are about 1.3TB into a 2TB disk).
The other is one I was testing; it wasn't particularly new and doesn't really matter.
Both drives have logged read errors in the Linux kernel log, both report a healthy
status (SMART overall-health self-assessment test result: PASSED), and both report
"Completed: read failure" almost immediately when I run a SMART self-test (short
or long).
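For reference, this is roughly what I'm running with smartmontools (`/dev/sdX` is a placeholder for the affected drive):

```shell
# Kick off a short SMART self-test (use -t long for the extended test)
smartctl -t short /dev/sdX

# View the self-test log; on these drives it shows "Completed: read failure"
# almost immediately, along with the LBA of the first error
smartctl -l selftest /dev/sdX

# The overall health assessment that still says PASSED
smartctl -H /dev/sdX

# The raw attribute table is often more telling than the overall verdict:
# look at Reallocated_Sector_Ct, Current_Pending_Sector and Offline_Uncorrectable
smartctl -A /dev/sdX
```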
I don't really have any trouble with the fact that two drives have failed, but I'm
really surprised that SMART still reports the drives as healthy when they clearly are not.
What's with that?