
Hello all,

I have a 1TB USB external drive used mostly for backup, and occasionally to run virtual machines stored on it. It is a LaCie Porsche model, roughly 3-4 yrs old, and it has a Hitachi disk drive in it. It has 5 partitions of ~200 GB each. HDSentinel says it has been powered on for 212 days... which could be right, as it is not on every day, and says it is at 100% health status.

For the past few days it has consistently been remounting in read-only mode when a large file is copied onto it (1 GB+ files), meaning the copy fails. Small files do not seem to cause any problem. A CD image ISO copies OK.

Running fsck shows no errors, but badblocks shows plenty of bad blocks. So far all of the bad blocks seem to be on a single partition with ReiserFS. (Admittedly I have not checked the disk completely - just that the first 30% of the other partitions (ext4) show no bad blocks, whereas the ReiserFS one shows bad blocks very early in the check. Badblocks checks take a long time. Btw, I don't remember why this one partition is ReiserFS and the others ext4.)

IMHO 212 days is not a lot! Could there be reasons other than "really bad blocks" causing this problem? I would think that a badblocks check is filesystem independent. Or is there any reason why a partition with ReiserFS would be particularly prone to developing bad blocks?

Also, any recommendation for a good 1 TB USB external drive? (Powered is fine, it does not have to be portable.)

Cheers
Daniel.

PS: Lev, if you read this - as I recall you bought a similar external drive shortly after I bought mine. Is yours still running well?

On Sat, 4 Apr 2015 11:55:00 AM Daniel Jitnah wrote:
I have a 1TB USB external drive used mostly for backup, and occasionally to run virtual machines stored on it. It is a LaCie Porsche model, roughly 3-4 yrs old, and it has a Hitachi disk drive in it. It has 5 partitions of ~200 GB each.
The USB interface doesn't seem to support as much error reporting as SATA. I'm not sure how much of that is due to the specifications of the USB mass storage interface and how much is due to implementations. But in any case, directly connecting the drive by SATA may give you more information; even for a sealed unit you can probably crack it open and find a SATA interface inside.
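That said, many USB-SATA bridges will pass SMART commands through if smartctl is told to use the SAT pass-through, so it may be worth trying something like the following before opening the case (/dev/sdX is a placeholder for the USB disk, and not every bridge supports this):

    smartctl -d sat -a /dev/sdX
    # some bridges need a vendor-specific type instead, e.g. -d usbjmicron or -d usbcypress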
IMHO 212 days is not a lot! Could there be reasons other than "really bad blocks" causing this problem? I would think that a badblocks check is filesystem independent. Or is there any reason why a partition with ReiserFS would be particularly prone to developing bad blocks?
The NetApp research into drive failures (which covers hundreds of thousands of drives) indicates that there is a strong location correlation between errors. So it's not a partition issue, it's just a disk location issue, and partitions are based on disk location. There's no reason why ReiserFS should be more susceptible to disk errors. As an aside, does the entire drive go read-only or just the filesystem on that partition?
Also, any recommendation for a good 1 TB USB external drive? (Powered is fine, it does not have to be portable.)
I recommend getting a USB-SATA caddy so you can use any SATA disk. That is REALLY handy for so many backup and recovery tasks. If you don't need portability then a USB-SATA caddy on your desk doesn't take much more space than a drive enclosed in a case. Also, I recommend getting something larger. Even if you don't need more than 1TB now, if the drive lasts for a few years you will probably find a need for more. 4TB SATA disks are getting cheap nowadays. -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/

On 04/04/15 23:45, Russell Coker wrote:
Also I recommend getting something larger. Even if you don't need more than 1TB now if the drive lasts for a few years you will probably find a need for more. 4TB SATA disks are getting cheap nowadays.
I recommend getting something smaller and having multiple drives in case of drive failure, but that's just my opinion. Damien

On Sat, 4 Apr 2015 12:48:06 PM Damien Zammit wrote:
On 04/04/15 23:45, Russell Coker wrote:
Also I recommend getting something larger. Even if you don't need more than 1TB now if the drive lasts for a few years you will probably find a need for more. 4TB SATA disks are getting cheap nowadays.
I recommend getting something smaller and having multiple drives in case of drive failure, but that's just my opinion.
http://cdn.msy.com.au/Parts/PARTS.pdf

MSY has 1TB disks for $69 and 2TB for $99. 2TB is significantly better value for money and it's cheap enough that multiple drives aren't expensive.

Also, due to the incidence of small numbers of bad sectors that are correlated with disk location, a RAID-1 array on a single disk will save you more often than you would expect. A 2TB disk with RAID-1 across 2 partitions will be significantly less likely to lose data and doesn't cost much more. I have some backup disks with bad sectors that I use as BTRFS RAID-1; they work well for me.

A total failure of a disk is very rare, as is a failure where you get tens of thousands of errors. Most drive problems involve dozens of errors. As an aside, I had a BTRFS drive give 12,000+ read errors. Due to metadata duplication (basically RAID-1 for metadata) I could read most of the data off it. Even 12,000 read errors isn't that much from a 2TB disk; as long as the metadata duplication saves the root directory etc you don't lose much.

-- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/
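For reference, a single-disk BTRFS RAID-1 of the kind described above can be set up roughly like this (partition names are placeholders, and this is only a sketch):

    # two partitions on the same disk, mirrored for both data and metadata
    mkfs.btrfs -m raid1 -d raid1 /dev/sdX1 /dev/sdX2
    mount /dev/sdX1 /mnt/backup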

Daniel Jitnah wrote:
Hello all,
I have a 1TB USB external drive used mostly for backup, and occasionally to run virtual machines stored on it. It is a LaCie Porsche model, roughly 3-4 yrs old, and it has a Hitachi disk drive in it. It has 5 partitions of ~200 GB each.
I had a 500GB Hitachi in an external USB enclosure; it started giving problems when connected to a new PCIe USB 3.0 adaptor, but the problems seemed to go away when I supplied power to the hub (this involved resoldering a 'dry' joint!). regards, Rohan McLeod

Daniel Jitnah <djitnah@greenwareit.com.au> wrote:
HDSentinel says it has been powered on for 212 days... which could be right, as it is not on every day, and says it is at 100% health status.
Have you run a self-test with smartctl and checked the state of the drive with smartctl -a? I've never done this over a USB connection; others on the list should know whether it works in that case. The test is actually carried out by the drive's firmware. This would be my recommendation. Of course, you should make a backup elsewhere of all important data and act under the assumption that this drive may be dead soon.
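For what it's worth, the usual sequence is roughly the following (the device name is a placeholder; over USB you may need to add -d sat if the bridge supports SMART pass-through):

    smartctl -a /dev/sdX           # health, attributes and the drive's error log
    smartctl -t long /dev/sdX      # start a long self-test, run by the drive's firmware
    smartctl -l selftest /dev/sdX  # check the result once the test has finished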

On 4/04/2015 10:55 PM, Daniel Jitnah wrote:
I have a 1TB USB external drive used mostly for backup, and occasionally to run virtual machines stored on it. It is a LaCie Porsche model, roughly 3-4 yrs old, and it has a Hitachi disk drive in it. It has 5 partitions of ~200 GB each.
I've heard that LaCie drives are overpriced and often give more trouble than the price tag would suggest. Years ago, I was told that Sun purchased the /best/ Seagate drives and branded them as their own. If a drive didn't pass muster, it was not accepted by Sun ... of course that helped Sun keep their pricing high too. Chances are that just about any bulk supplier of drives, including LaCie, is going to be getting lower quality drives than Seagate and WD use in their own products.
HDSentinel says it has been powered on for 212 days... which could be right, as it is not on every day, and says it is at 100% health status.
A drive can fail day 1, day 90 or day 10,000 -- but in the end, they all fail eventually. You need multiple copies of data on multiple drives, or perhaps you'll get by with Russell's idea of using RAID1 on BTRFS, but I wouldn't trust that solution and it is limited to being used with Linux ... good or bad. You can use ext4 anywhere, even on Windows with the right drivers.
For the past few days it has consistently been remounting in read-only mode when a large file is copied onto it (1 GB+ files), meaning the copy fails. Small files do not seem to cause any problem. A CD image ISO copies OK.
It could be temperature related; sustained work over a long copy drives up the heat, and with it the likelihood of errors.
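If smartctl can talk to the drive through the bridge, the temperature attributes should show whether heat climbs during a long copy (the device name is a placeholder):

    smartctl -A /dev/sdX | grep -i -e temperature -e airflow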
Running fsck shows no errors, but badblocks shows plenty of bad blocks. So far all of the bad blocks seem to be on a single partition with ReiserFS. (Admittedly I have not checked the disk completely - just that the first 30% of the other partitions (ext4) show no bad blocks, whereas the ReiserFS one shows bad blocks very early in the check. Badblocks checks take a long time. Btw, I don't remember why this one partition is ReiserFS and the others ext4.)
I've seen some interfaces that are more troublesome than others, and the same goes for USB cards/ports. USB should be so simple that it shouldn't cause any problems, but there is always someone selling something as cheaply as they can and not caring about the end user, who would be expected to just replace the part.
IMHO 212 days is not a lot! Could there be reasons other than "really bad blocks" causing this problem? I would think that a badblocks check is filesystem independent. Or is there any reason why a partition with ReiserFS would be particularly prone to developing bad blocks?
You can use various settings with badblocks to get a result more quickly. I generally use 4096 and 8192 for the -b and -c parameters, as follows:

    time badblocks -nvs -b 4096 -c 8192 \
        -o ${ofile1} /dev/sda \
        2>&1 | tee ${ofile2} &

Oh, and always run badblocks on a drive that has no mounted partitions whatsoever; but I'm sure you already know that.
Also, any recommendation for a good 1 TB USB external drive? (Powered is fine, it does not have to be portable.)
1TB drives are not good value these days, generally. At the end of the day, most drives sold today come from Seagate or WDC; they own most brands. So it's almost a duopoly, which isn't likely to produce the best outcomes ... unless they really compete strongly to keep each other as honest as possible. Cheers A.

On Sun, 5 Apr 2015, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
A drive can fail day 1, day 90 or day 10,000 -- but in the end, they all fail eventually. You need multiple copies of data on multiple drives, or perhaps you'll get by with Russell's idea of using RAID1 on BTRFS, but I wouldn't trust that solution and it is limited to being used with Linux ... good or bad. You can use ext4 anywhere, even on Windows with the right drivers.
I wouldn't trust anything that only involves a single disk when given a choice. But when you do have to rely on a single disk, using RAID-1 on that disk will significantly decrease the probability of data loss. This is why ZFS has the copies= option. As we are talking about replacing a ReiserFS filesystem, support for Windows isn't a concern. A more significant issue is that a BTRFS filesystem created on Debian/Jessie can't be read with a kernel from Debian/Wheezy. But if you are concerned with hard read errors rather than data corruption then you could use Linux Software RAID-1 on a single disk. -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/
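A minimal sketch of that last option, assuming two equal-sized partitions on the one disk (the device names are placeholders):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdX2
    mkfs.ext4 /dev/md0             # or whatever filesystem you prefer on top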

On Sun, Apr 05, 2015 at 01:19:46AM +1100, Russell Coker wrote:
On Sun, 5 Apr 2015, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
A drive can fail day 1, day 90 or day 10,000 -- but in the end, they all fail eventually. You need multiple copies of data on multiple drives, or perhaps you'll get by with Russell's idea of using RAID1 on BTRFS, but I wouldn't trust that solution and it is limited to being used with Linux ... good or bad. You can use ext4 anywhere, even on Windows with the right drivers.
I wouldn't trust anything that only involves a single disk when given a choice. But when you do have to rely on a single disk using RAID-1 on that disk will significantly decrease the probability of data loss. This is why ZFS has the copies= option.
me either, to the point where i can't see the point of even bothering to back up to a single drive - it's just not reliable enough to deserve the label of "backup". IMO if you're going to do backup to disk, do it properly. it costs significantly more (i.e. at least two drives plus other hardware) but it's more than worth the cost and effort.

That ASRock QC-5000 motherboard that Rick mentioned a week or so ago would make an ideal little ZFS NAS/backup-server. and at 15W, it uses little enough power that it would cost bugger all to leave switched on and available on the network 24/7.

it only has 4 SATA ports, but that's more than enough for a mirrored pair (i.e. like RAID-1) of drives. If more drives are required, you can pick up an 8-port SAS card on ebay for around $100 (not only superior to but much cheaper than multi-port SATA cards, which often cost $300+ for only 4 ports - and you can plug SATA drives into a SAS port).

pccasegear.com.au has the QC-5000-ITX for $145 and i've seen several other shops with the WIFI version of the same board for the same price - the linux driver for the WIFI card allows it to be used as an access point with hostap or just as a wifi client.

http://www.pccasegear.com/index.php?main_page=product_info&cPath=138_1019&pr...

pccasegear also has a quite nice looking mini-itx case with room for 6 x 3.5" drives (or 11 x 2.5" drives) for $99. it doesn't include a power supply, but appears to take a standard ATX PSU.

http://www.pccasegear.com/index.php?main_page=product_info&cPath=25_1119&pro...

using a little 60GB SSD for linux + zfsonlinux (or maybe freebsd), and 8GB of RAM, that means you can have an excellent home NAS for around $350 (not including drives) that wipes the floor with crappy commercial NAS boxes that cost $800 or $1200 plus drives (built with the cheapest barely-adequate parts available to maximise profits and different product lines rather than to provide maximum utility & quality for least price).

btw, ZFS works very nicely with samba and NFS, and they make for a convenient storage location for clonezilla backups of Windows, Mac, and Linux systems - and the ZFS server can also run a dhcp server as well as the TFTP and PXE-boot setup to have a network-based debian (or whatever) installer, clonezilla, gparted, system-rescue-cd and other utilities. in fact, since it's a standard amd64 server running standard linux you can install whatever kind of server software you like on it - it's even powerful enough to run a few VMs.

i have my zfs box set up to do this(*) and, while i don't need to use them frequently, having a network-bootable clonezilla or gparted has been a gnu-send :) on numerous occasions. and having a PXE-bootable debian-installer means i don't have to stuff around with CDs, DVDs, or USB sticks.

(*) I pretty much duplicated the system i built at $previous_job to have a pxe-boot setup for gparted, d-i, sysrescd, and especially clonezilla for mass-producing windows & mac desktop machines for users. d-i was for the handful of postdocs etc who wanted linux desktops.

although "duplicated the work setup" isn't entirely accurate - it's hard to say which came first, because I duplicated and improved my home PXE setup at work when i needed it, and then re-implemented some of the same ideas I came up with at work on my home setup - e.g. a set of scripts to download the latest clonezilla, extract it from the zip file, and generate the ipxe menu entries. and similar for gparted.

i didn't bother with system-rescue-cd at home because i found i never used it much at work, as both clonezilla and gparted make excellent rescue CDs in themselves. i use cz for almost everything, but gparted's graphical partition editor is useful when i need to do that kind of thing.

craig

-- craig sanders <cas@taz.net.au>
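to give an idea of how little is involved on the ZFS side, creating a mirrored pool and an NFS-shared backup dataset is roughly this (the disk ids and dataset names are made up; a sketch only):

    zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
    zfs create -o compression=lz4 tank/backups
    zfs set sharenfs=on tank/backups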

On Sun, Apr 05, 2015 at 03:28:07PM +1000, Craig Sanders wrote:
$350 (not including drives) that wipes the floor with crappy commercial NAS boxes that cost $800 or $1200 plus drives (built with the cheapest barely-adequate parts available to maximise profts and different product lines rather than to provide maximum utility & quality for least price).
s/different/differentiate/
btw, ZFS works very nicely with samba and NFS, and they make for a convenient storage location for clonezilla backups of Windows, Mac, and Linux systems - and the ZFS server can also run a dhcp server as well as
clonezilla can be used to make linux backups, but IMO rsync is a better tool for the job. or 'zfs snapshot' and 'zfs send' if the source system is also running zfs. (when using clonezilla as a rescue CD, cz has rsync installed of course, so it can be used to restore rsync backups... and once you've restored the backup you can chroot into it and run grub-install to make it bootable again) craig -- craig sanders <cas@taz.net.au>
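for reference, an incremental 'zfs send' from one box to another looks roughly like this (the host, pool and snapshot names are made up; a sketch only):

    zfs snapshot tank/data@2015-04-05
    zfs send -i tank/data@2015-04-04 tank/data@2015-04-05 | ssh backupbox zfs receive -F backup/data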

Just an update on this problem:

It does NOT appear to be the USB external drive after all! I just plugged it into my laptop and everything worked perfectly - no bad blocks, and the large file copy worked perfectly.

So this points to something on the original host. No time to investigate now whether it's a software or hardware error on the first host (a Debian Wheezy host). Will do that later tonight! (It's unlikely to be an electrical fault on the particular USB socket, as I have tried swapping sockets before.)

Cheers Daniel.

On 05/04/15 15:28, Craig Sanders wrote:
................snip

On 05/04/15 15:44, Daniel Jitnah wrote:
Just an update on this problem:
It does NOT appear to be the USB external drive after all! I just plugged it into my laptop and everything worked perfectly - no bad blocks, and the large file copy worked perfectly.
Confirming that the problem is NOT a USB external hard drive problem. I have successfully read/written large files onto the USB drive several times, using 4 different distro installations, with no errors and no bad blocks shown in any of those distros. I have used Ubuntu 13.10, Linux Mint Debian and a fresh install of Debian 7 (latest) on the same original host, and also attached the USB disk to a laptop with Ubuntu 14.04. All worked fine.

Good news is the USB drive looks OK and I have not lost any data. Bad news is I have no idea where to look for the original problem or why it came up in the first place! The actual error is "Error splicing file ..." and there are references to this error on the web, but no solution.

Of possible interest: if I connect the USB drive directly to a KVM guest VM with Debian 7 running on the original host, the drive works fine (except for a serious slowdown, as one would expect).

So any ideas on where to look for a fix for the original problem would be nice, but it's not critical anymore.

Cheers, Daniel.
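In case it helps narrow it down: as far as I know, "Error splicing file" is the wording GIO/gvfs uses when its splice()-based copy path fails, so one way to take the file manager out of the picture is to force a large write from the shell on the problem host and then look for USB or I/O errors in the kernel log. The path below is just a placeholder:

    dd if=/dev/zero of=/media/usbdisk/bigtest.img bs=1M count=2048 conv=fsync
    dmesg | tail -50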
................snip

On Mon, 6 Apr 2015, Daniel Jitnah <djitnah@greenwareit.com.au> wrote:
So any ideas on where to look for a fix for the original problem would be nice, but it's not critical anymore.
Buy a second USB external storage device. It's handy to have anyway, and when you have 2 of each item testing becomes a lot easier. -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/

On 6/04/15 8:04 PM, Daniel Jitnah wrote:
On 05/04/15 15:44, Daniel Jitnah wrote:
Just an update on this problem:
It does NOT appear to be the USB external drive after all! I just plugged it into my laptop and everything worked perfectly - no bad blocks, and the large file copy worked perfectly.
Confirming that the problem is NOT a USB external hard drive problem.
I have no clue where an "Error splicing file..." error would come from. But I had something that matches some of your other symptoms - large file accesses failing under one distro, OK under another - though in my case it was real file corruption, and on an internal disk. Years ago I had an internal disk giving corrupted large files, which turned out to be a RAM problem which *only* showed up on large file accesses. The intermittent RAM fault seemed to have fallen on statically allocated kernel buffers which only got accessed on large file writes (and maybe reads, can't remember). Douglas
................snip

On Mon, 6 Apr 2015 01:42:33 PM Douglas Ray wrote:
Years ago I had an internal disk giving corrupted large files, which turned out to be a RAM problem which only showed up on large file accesses. The intermittent RAM fault seemed to have fallen on statically allocated kernel buffers which only got accessed on large file writes (and maybe reads, can't remember).
Filesystems like BTRFS and ZFS make it easier to detect these problems. I had one system that gave unusual BTRFS consistency errors on 2 occasions, and the BTRFS developers suggested testing memory. Memtest86+ reported errors, and one of the DIMMs still showed Memtest86+ errors when moved to another system. I replaced that DIMM and things worked a lot better afterwards. Filesystems that don't have checksums on all data and metadata (i.e. everything other than BTRFS and ZFS) will just get corrupted files when such things happen. As an aside, the ZFS "resilver" operation can really mess things up if you run it when you have memory errors. ECC RAM is a really good thing. -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/
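For completeness, the on-demand version of that checksum verification on BTRFS is a scrub, roughly (the mount point is a placeholder):

    btrfs scrub start -B /mnt/point   # -B stays in the foreground and prints a summary
    btrfs scrub status /mnt/point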

Daniel Jitnah wrote:
On 05/04/15 15:44, Daniel Jitnah wrote:
Just an update on this problem:
It does NOT appear to be the USB external drive after all! I just plugged it into my laptop and everything worked perfectly - no bad blocks, and the large file copy worked perfectly. Confirming that the problem is NOT a USB external hard drive problem.
................snip
So any ideas on where to look for a fix for the original problem would be nice, but it's not critical anymore.

Daniel, just to repeat a query implied in a previous email of mine: how have you eliminated a fault in the integrated / add-on USB hardware?
regards Rohan McLeod

Oh, and one other thing. Drives have inbuilt redundancy, up to a point: errors that are encountered when reading or writing data often get remapped without the reader/writer being any the wiser. The drive's firmware itself generally hides this /feature/; it may or may not be reflected in the drive stats from smartctl requests. It's been said before that with drives of today's capacity, and all the automatic behind-the-scenes error correction that takes place so frequently, it's a miracle that drives work at all. This is exactly why you can't rely on ONE drive; it would be foolish to do so with data that cannot be replaced. A.
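If the USB bridge passes SMART through, the attributes that usually give the remapping away are the reallocated and pending sector counts, e.g. (the device name is a placeholder):

    smartctl -A /dev/sdX | grep -i -e reallocated -e pending -e uncorrect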
Participants (8): Andrew McGlashan, Craig Sanders, Damien Zammit, Daniel Jitnah, Douglas Ray, Jason White, Rohan McLeod, Russell Coker