
On 4/04/2015 10:55 PM, Daniel Jitnah wrote:
> I have a 1 TB USB external drive, used mostly for backup and occasionally to run virtual machines stored on it. It is a LaCie Porsche model, roughly 3-4 years old, with a Hitachi disk drive inside. It has 5 partitions of ~200 GB each.
I've heard that LaCie drives are overpriced and often give more trouble than the price tag would suggest. Years ago, I was told that Sun purchased the /best/ Seagate drives and branded them as their own; if a drive didn't pass muster, it was not accepted by Sun ... of course, that helped Sun keep their pricing high too. Chances are that just about any bulk supplier of drives, including LaCie, is getting lesser-quality drives than Seagate and WD use in their own products.
> HDSentinel says it has been powered on for 212 days, which could be right, as it is not on every day, and says it is at 100% health status.
A drive can fail on day 1, day 90 or day 10,000 -- but in the end, they all fail eventually. You need multiple copies of data on multiple drives, or perhaps you'll get by with Russell's idea of using RAID1 on BTRFS, but I wouldn't trust that solution and it is limited to Linux ... good or bad. You can use ext4 anywhere, even on Windows with the right drivers.
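If you do want to try the BTRFS RAID1 route, here is a minimal sketch; the device names /dev/sdb and /dev/sdc are placeholders for two spare drives, and mkfs will destroy whatever is on them:

  # WARNING: wipes both devices; /dev/sdb and /dev/sdc are placeholders
  mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
  mount /dev/sdb /mnt/backup
  btrfs filesystem df /mnt/backup    # confirm data and metadata are RAID1
  btrfs scrub start /mnt/backup      # verify checksums across both copies

The scrub is what actually buys you something over plain mirroring: it reads every block and repairs from the good copy when a checksum fails.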
> For the past few days it has consistently been remounting read-only when a large file (1 GB+) is copied onto it, meaning the copy fails. Small files do not seem to cause any problem. A CD ISO image copies OK.
It could be temperature related: sustained work drives the heat up, and with it the likelihood of errors.
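Worth checking before blaming the disk itself: when a filesystem remounts read-only, the kernel logs the I/O error that triggered it, and smartmontools can usually read the drive's own error and temperature counters through the USB bridge. A sketch, assuming the enclosure shows up as /dev/sdb and its bridge supports SAT passthrough:

  dmesg | grep -i -e 'i/o error' -e remount    # what forced read-only mode?
  smartctl -d sat -a /dev/sdb                  # full SMART report via the bridge
  smartctl -d sat -A /dev/sdb | grep -i -e temperature -e reallocated

If -d sat is rejected, the bridge chip may need one of smartctl's other USB device types, or passthrough may not be possible at all.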
> Running fsck shows no errors, but badblocks shows plenty of bad blocks. So far, all of the bad blocks seem to be on a single partition with reiserfs. (Admittedly I have not checked the disk completely: only the first 30% of the other (ext4) partitions, which show no bad blocks, whereas the reiserfs one shows bad blocks very early in the check. Badblocks checks take a long time. By the way, I don't remember why this one partition is reiserfs and the others ext4.)
I've seen interfaces that are more troublesome than others, and the same goes for USB cards/ports. USB should be simple enough that it shouldn't cause any problems, but there is always someone selling something as cheaply as they can, not caring about the end user, who would be expected to just replace the part.
> IMHO 212 days is not a lot! Could there be reasons other than genuinely bad blocks causing this problem? I would think that a badblocks check is filesystem-independent. Or is there any reason why a partition with reiserfs would be particularly prone to developing bad blocks?
You can use various settings with badblocks to get a result more quickly. I generally use 4096 and 8192 for the -b and -c parameters, as follows:

  time badblocks -nvs -b 4096 -c 8192 \
      -o ${ofile1} /dev/sda \
      2>&1 | tee ${ofile2} &

Oh, and always run badblocks on a drive that has no mounted partitions whatsoever; but I'm sure you already know that.
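If badblocks does confirm bad sectors on one of the ext4 partitions, you can hand the list to e2fsck so the filesystem stops allocating those blocks. A sketch with a hypothetical partition /dev/sdb5: run badblocks against the partition rather than the whole disk, so the block numbers are relative to that filesystem, and keep -b equal to the filesystem's block size:

  badblocks -nvs -b 4096 -c 8192 -o bb.list /dev/sdb5
  e2fsck -f -l bb.list /dev/sdb5    # add listed blocks to the bad-block inode

Simpler still, e2fsck -c runs badblocks itself with the right block size. None of that helps the reiserfs partition, which has its own checker (reiserfsck).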
> Also, any recommendation for a good 1 TB USB external drive? (Powered is fine; it does not have to be portable.)
1 TB drives are generally not good value these days. At the end of the day, most drives sold today come from Seagate or WDC; they own most brands. So it's almost a duopoly, which isn't likely to produce the best outcomes ... unless they really compete strongly to keep each other as honest as possible.

Cheers,
A.