
Hi,

I'm running fsck on my root partition and getting lots of I/O errors, as shown in the link. What should I do? Is my disk dead, or can it still be used?

https://www.dropbox.com/s/d3m019akv70xihs/2015-12-17%2000.50.59.jpg?dl=0

Thanks,
David

Quoting David Zuccaro via luv-main (luv-main@luv.asn.au):
Hi, I'm running fsck on my root partition and getting lots of io errors as shown in the link.
What should I do? Is my disk dead or can it still be used?
https://www.dropbox.com/s/d3m019akv70xihs/2015-12-17%2000.50.59.jpg?dl=0
I would say your disk is in the process of going bad. If you have good, current backups, replace the disk and rebuild. If not, you can try using Recovery is Possible or System Rescue CD. The bundled tools will tell you more about the state of your hardware and perhaps permit making a good copy.

-- 
Cheers,
Rick Moen
rick@linuxmafia.com

"If you see a snake, just kill it. Don't appoint a committee on snakes."
    -- H. Ross Perot

On 17/12/15 04:42, Rick Moen via luv-main wrote:
I would say your disk is in the process of going bad. If you have good, current backups, replace the disk and rebuild. If not, you can try using Recovery is Possible or System Rescue CD. The bundled tools will tell you more about the state of your hardware and perhaps permit making a good copy.

Thanks Rick, I actually rsync everything to a local external drive daily 8-)

On Thu, Dec 17, 2015 at 1:01 AM, David Zuccaro via luv-main < luv-main@luv.asn.au> wrote:
On 17/12/15 04:42, Rick Moen via luv-main wrote:
I would say your disk is in the process of going bad. If you have good, current backups, replace the disk and rebuild. If not, you can try using Recovery is Possible or System Rescue CD. The bundled tools will tell you more about the state of your hardware and perhaps permit making a good copy.
Thanks Rick, I actually rsync everything to a local external drive daily 8-)

Well, I hope you are not doing it with the --delete option in place, because if you are it will faithfully remove from your backup set everything that has gone missing from your source drive.
Good luck.
-- The Bundys, Cliven, Ted and Al. Great guys to look up to.

On 17/12/15 05:09, Robert Parker wrote:
Well, I hope you are not doing it with the --delete option in place, because if you are it will faithfully remove from your backup set everything that has gone missing from your source drive.
Good luck.
Thanks Robert, I actually do use that option to prevent the backup disk from filling up. fsck has been running all night, so it looks like the disk is bad.

David

Quoting David Zuccaro (david.zuccaro@optusnet.com.au):
I actually do use that option to prevent the backup disk from filling up. fsck has been running all night so looks like the disk is bad.
You might want to give Recovery is Possible (RIPLinux) a try: http://sourceforge.net/projects/riplinuxmeta4s/ (I notice that developer Kent Robotti's site is gone, and the code has not been maintained since 2011.)

You could also try using the ddrescue utility on the other recovery disk I mentioned, SystemRescueCd: http://www.sysresccd.org/SystemRescueCd_Homepage

Both disks copy data from failing drives while causing them as little additional damage as possible.

Rick Moen via luv-main <luv-main@luv.asn.au> writes:
You could also try using the ddrescue utility on the other recovery disk [...]
Note there are two ddrescues -- the original and the GNU one. I switched to GNU ddrescue because the original isn't in Debian 7+. I don't have an opinion about which is better.

On Thu, Dec 17, 2015 at 01:09:46AM +0700, Robert Parker via luv-main wrote:
Thanks Rick, I actually rsync everything to a local external drive daily

Well, I hope you are not doing it with the --delete option in place, because if you are it will faithfully remove from your backup set everything that has gone missing from your source drive.
more importantly, using rsync's --delete option won't leave cruft from uninstalled packages and other deleted files strewn all over your filesystem.

i made the mistake of forgetting to use --delete on an rsync transfer of one of my systems' OS disk to a new disk once, and didn't discover it until after i'd made the final swap to the new disk. it left me with an enormous mess that took over a year of gradual cleanups, plus a final concerted effort involving find and several custom scripts to process /var/lib/dpkg/info/* files, to tidy up. even now i'm not 100% sure i've got it all.

and yes, all that cruft did cause numerous problems. inevitable, really, with extra crap like partial packages and obsolete libs and binaries.

don't try this at home, it'll suck.

craig

-- 
craig sanders <cas@taz.net.au>

On Thu, Dec 17, 2015 at 11:42:58AM +1100, Craig Sanders via luv-main wrote:
more importantly, using rsync's --delete option won't leave cruft from uninstalled packages and other deleted files strewn all over your filesystem.
this applies to upgraded packages too. without --delete, the sequence rsync, upgrade, rsync will leave parts of the OLD versions of the packages on the rsync target.

craig

-- 
craig sanders <cas@taz.net.au>

I'd like to add that a mirror isn't a reliable backup, regardless of whether that mirror is RAID or a scheduled synchronisation. It may mitigate data loss in certain catastrophic hardware failure scenarios, but it won't protect data from accidental (or intentional) deletion, or from file corruption.

I'm not sure how rsync handles reads from bad blocks, but with certain flags (see --partial) it may also corrupt the target file?

Quoting Joel W. Shea via luv-main (luv-main@luv.asn.au):
I'd like to add that a mirror isn't a reliable backup; regardless of that mirror being RAID, or scheduled synchronisation.
I'll quote my own bit from 2002:

  The topic of data backup herewith returns, like a troublesome data set -- occasioned by my addressing the matter on a mailing list, and again referring people to your [my friend Karsten Self's] Linux Backups mini-FAQ. Comments will concern that FAQ and surrounding cosmic truths.

  Cosmic truth #1: Part of the reason it's a FAQ topic is that people are confused about what a backup is, and what it is not.

  o redundant storage: E.g., RAID1, RAID5.
  o archival storage: E.g., migrating a billed-out project's files from the company file server to CDRs.
  o backup: Technical means to make your data survive Thor hitting your server with Mjolnir. Or to get back the directory Moriarty deleted from it last Thursday.

  These are _very_ distinct concepts, yet many people have them hopelessly confused, and call all of them "backup". A lot of the people with dumb opinions on the subject have no friggin' clue what it takes to foil Thor and Moriarty: They think quantity one duplicate copy, stored within Mjolnir distance of the server, and overwritten every Saturday night with a fresh data set, is "backup".

'Backup Fallacies / Pitfalls' on http://linuxmafia.com/faq/Admin/

-- 
Cheers,
Rick Moen
rick@linuxmafia.com

(morganj): 0 is false and 1 is true, correct?
(alec_eso): 1, morganj
(morganj): bastard.
    -- seen on IRC

On Thu, 17 Dec 2015 02:23:12 PM Joel W. Shea via luv-main wrote:
I'd like to add that a mirror isn't a reliable backup; regardless of that mirror being RAID, or scheduled synchronisation.
There are a variety of backup systems that start with rsync and manage trees of links. It's not difficult to write your own: rsync the files, run "cp -al" to make a copy with hard links using today's date in the directory name, and then delete backup directories that are too old.

What I'm doing for backups at the moment is to rsync to files on a BTRFS filesystem and then create a snapshot. If I want to retrieve a file that was deleted, I just copy it from a suitably old snapshot.

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
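A minimal sketch of the rsync-plus-"cp -al" rotation described above; the source, destination and retention count are assumptions for illustration only, and "head -n -N" assumes GNU coreutils:

    #!/bin/sh
    # Rotating backups: mirror into "current", then keep dated hard-linked copies.
    SRC=/home
    DEST=/mnt/backup
    KEEP=30                         # number of dated trees to keep

    # 1. Bring the "current" tree up to date (--delete keeps it an exact mirror).
    rsync -aH --delete "$SRC/" "$DEST/current/"

    # 2. Preserve today's state; unchanged files are hard links, so they cost no extra space.
    cp -al "$DEST/current" "$DEST/$(date +%Y-%m-%d)"

    # 3. Delete dated directories beyond the retention window.
    ls -1d "$DEST"/[0-9]*-[0-9]*-[0-9]* 2>/dev/null | sort | head -n -"$KEEP" | xargs -r rm -rf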

On 17.12.15 21:33, Russell Coker via luv-main wrote:
There are a variety of backup systems that start with rsync and manage trees of links. It's not difficult to write your own, rsync the files, run "cp -al" to make a copy with hard links and use today's date in the directory name, and then delete backup directories that are too old.
I'm not grokking the benefit of doing the rsync _and_ a "cp -al". I just include -aH in my rsync options, the -H to preserve hard links. Seems to work. It's almost enough to make one wonder why it's a little bit fiddly to make rsync "just shuddup & copy the listed bits of the filesystem _as_is_, so they can be restored unaltered."
What I'm doing for backups at the moment is to rsync to files on a BTRFS filesystem and then create a snapshot. If I want to retrieve a file that was deleted then I just copy it from a suitably old snapshot.
A bit more basic here: just an rsync -Hauv to one of several flash sticks, then a diff -qr to confirm the copy is OK and to show any deletions which should be done -- usually 0-3 per backup. But then my precious data is diminutive by most standards. Most important is that the sticks come with me off-site.

Erik

On Thu, 17 Dec 2015 11:06:10 PM Erik Christiansen via luv-main wrote:
On 17.12.15 21:33, Russell Coker via luv-main wrote:
There are a variety of backup systems that start with rsync and manage trees of links. It's not difficult to write your own, rsync the files, run "cp -al" to make a copy with hard links and use today's date in the directory name, and then delete backup directories that are too old.
I'm not grokking the benefit of doing the rsync _and_ a "cp -al". I just include -aH in my rsync options, the -H to preserve hard links. Seems to work.
rsync copies all the files. Then you do something like the following to preserve a version of that tree before you do the next rsync:

  cp -al current 2015-12-18

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

On Thu, 17 Dec 2015 11:06:10 PM Erik Christiansen via luv-main wrote:
I'm not grokking the benefit of doing the rsync _and_ a "cp -al". I just include -aH in my rsync options, the -H to preserve hard links. Seems to work.
The idea is to have snapshots over time, rather than a single snapshot. For instance, as implemented in: http://rsnapshot.org/

All the best,
Chris

-- 
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

On 18.12.15 22:45, Chris Samuel via luv-main wrote:
On Thu, 17 Dec 2015 11:06:10 PM Erik Christiansen via luv-main wrote:
I'm not grokking the benefit of doing the rsync _and_ a "cp -al". I just include -aH in my rsync options, the -H to preserve hard links. Seems to work.
The idea is to have snapshots over time, rather than a single snapshot. For instance as implemented in:
Thanks Chris, that was not in doubt. What I didn't realise was that the links appear to be between unchanged files across backup versions, to save space. (Whereas my local concern has only been to ensure that the backup is not distorted by failure to preserve the fs structure.)

It's around 8 years since I used an HP tape carousel for daily + weekly backups at work. Not doing that any more, so have devolved to just a couple of rotating snapshots, with no need to link between them to save space.

Erik

Erik Christiansen via luv-main <luv-main@luv.asn.au> writes:
On 18.12.15 22:45, Chris Samuel via luv-main wrote:
On Thu, 17 Dec 2015 11:06:10 PM Erik Christiansen via luv-main wrote:
I'm not grokking the benefit of doing the rsync _and_ a "cp -al".
The idea is to have snapshots over time, rather than a single snapshot. For instance as implemented in: http://rsnapshot.org/
[...] I didn't realise was that the links appear to be between unchanged files across backup versions, to save space.
It's around 8 years since I used an HP tape carousel for daily + weekly backups at work. Not doing that any more, so have devolved to just a couple of rotating snapshots, with no need to link between them to save space.
The active ingredient of rsnapshot is "rsync --link-dest", if you want to roll your own rsnapshot replacement. rsnapshot config mandates annoying literal tabs, has a directory structure that's not directly compatible with samba's Shadow Copy modules, and doesn't correctly handle a file being deleted, then readded (i.e. pass --link-dest to ALL snapshots, not just the last one).

Or see btrfs/ZFS snapshots. These operate per block rather than per inode, so should be more effective and less failure-prone in edge cases (hint, there's an upper limit on hard link count).

One thing rsnapshot does reasonably well is faking multiple tape rotations within the snapshot set. e.g. you say "1 yearly, 2 monthlies, and 7 dailies", and it works out which snapshots to expire. I don't know how to do that in btrfs/zfs land.
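A minimal sketch of that --link-dest approach, for anyone wanting to see it spelled out; the paths and snapshot naming scheme are assumptions for illustration:

    #!/bin/sh
    # Each run creates a new dated snapshot directory; files unchanged since the
    # previous snapshot are hard-linked rather than copied. Paths are illustrative.
    SRC=/home
    DEST=/mnt/backup
    PREV=$(ls -1d "$DEST"/20* 2>/dev/null | sort | tail -n 1)   # newest existing snapshot, if any
    NEW="$DEST/$(date +%Y-%m-%d)"

    if [ -n "$PREV" ]; then
        rsync -aH --link-dest="$PREV" "$SRC/" "$NEW/"
    else
        rsync -aH "$SRC/" "$NEW/"
    fi

rsync accepts multiple --link-dest options, which is how the "deleted, then readded" case above is handled: point it at the older snapshots as well, not just the newest one.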

On Mon, 21 Dec 2015 11:25:29 AM Trent W. Buck via luv-main wrote:
One thing rsnapshot does reasonably well is faking multiple tape rotations within the snapshot set. e.g. you say "1 yearly, 2 monthlies, and 7 dailies", and it works out which snapshots to expire. I don't know how to do that in btrfs/zfs land.
Just write a shell script to delete the ones you don't want. I have written scripts for ZFS and BTRFS that keep a specified number of snapshots stored on a less-than-daily basis (hourly or every 15 minutes) and a specified number of daily snapshots. You could write scripts to keep yearly, monthly, etc. snapshots; it's just a bit of extra shell scripting.

However, the benefits are smaller: in most cases the vast majority of data that's been around for a couple of months will be around for several years. Disk space is all that limits you.

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
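A rough sketch of that kind of expiry script for btrfs; the snapshot directory, naming convention and retention count are assumptions for illustration (for ZFS the deletion step would be "zfs destroy pool/fs@snapname" instead):

    #!/bin/sh
    # Keep only the newest $KEEP date-named btrfs snapshots under $SNAPDIR.
    SNAPDIR=/mnt/backup/snapshots
    KEEP=14

    ls -1d "$SNAPDIR"/20* 2>/dev/null | sort | head -n -"$KEEP" |
    while read -r snap; do
        btrfs subvolume delete "$snap"
    done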

On Mon, Dec 21, 2015 at 11:25:29AM +1100, Trent W. Buck via luv-main wrote:
The active ingredient of rsnapshot is "rsync --link-dest", if you want to roll your own rsnapshot replacement.
rsnapshot config mandates annoying literal tabs, has a directory structure that's not directly compatible with samba's Shadow Copy modules, and doesn't correctly handle a file being deleted, then readded (i.e. pass --link-dest to ALL snapshots, not just the last one).
That's the way my scripts work, except I was not aware you could now have multiple --link-dest arguments. Thanks for pointing that out.

The only downside to using --link-dest is that changes to a backed-up file's permissions and ownership can get lost, and should be tracked separately if needed. At least, I vaguely recall noticing that being a problem when I wrote the scripts years ago.

-Adam

On Mon, 21 Dec 2015 11:25:29 AM Trent W. Buck via luv-main wrote:
Or see btrfs/ZFS snapshots. These operate per block rather than per inode, so should be more effective and less failure-prone in edge cases (hint, there's an upper limit on hard link count).
Yeah, I use both rsnapshot backups on ext4 and btrfs snapshots (both on external drives), and I rsync with --inplace --no-whole-file for the btrfs side.

cheers,
Chris

-- 
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
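A rough sketch of that rsync-then-snapshot pattern on btrfs; the mount point and subvolume names are assumptions for illustration, with /mnt/backup/current assumed to be a btrfs subvolume:

    #!/bin/sh
    # Mirror into a btrfs subvolume, then freeze today's state as a read-only snapshot.
    rsync -aH --delete --inplace --no-whole-file /home/ /mnt/backup/current/
    btrfs subvolume snapshot -r /mnt/backup/current "/mnt/backup/snap-$(date +%Y-%m-%d)"

The --inplace/--no-whole-file pair matters here because changed files are rewritten in place, so btrfs's copy-on-write can share the unchanged blocks between "current" and its snapshots.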

On Mon, Dec 21, 2015 at 11:25:29AM +1100, Trent W. Buck via luv-main wrote:
One thing rsnapshot does reasonably well is faking multiple tape rotations within the snapshot set. e.g. you say "1 yearly, 2 monthlies, and 7 dailies", and it works out which snapshots to expire. I don't know how to do that in btrfs/zfs land.
personally, i rolled my own simple snapshot rotation script for zfs, but One Of These Days i'll probably switch to either zfsnap:

  Package: zfsnap
  Version: 1.11.1-3
  Installed-Size: 48
  Maintainer: John Goerzen <jgoerzen@complete.org>
  Architecture: all
  Depends: zfs-fuse | zfsutils | zfs, bc
  Description-en: Automatic snapshot creation and removal for ZFS
   zfSnap is a simple sh script to make rolling zfs snapshots with cron. The
   main advantage of zfSnap is it's written in 100% pure /bin/sh so it doesn't
   require any additional software to run.
   .
   zfSnap keeps all information about a snapshot in the snapshot name.
   .
   zfs snapshot names are in the format of Timestamp--TimeToLive.
   .
   Timestamp includes the date and time when the snapshot was created, and
   TimeToLive (TTL) is the amount of time for the snapshot to stay alive
   before it's ready for deletion.
  Homepage: https://github.com/graudeejs/zfSnap

there are several similar snapshotting tools available, easily found by googling for 'zfs snapshot'.

BTW, see also simplesnap:

  Package: simplesnap
  Version: 1.0.3
  Maintainer: John Goerzen <jgoerzen@complete.org>
  Architecture: all
  Depends: zfs-fuse | zfsutils | zfs, liblockfile-bin
  Suggests: zfsnap
  Description-en: Simple and powerful network transmission of ZFS snapshots
   simplesnap is a simple way to send ZFS snapshots across a network. Although
   it can serve many purposes, its primary goal is to manage backups from one
   ZFS filesystem to a backup filesystem also running ZFS, using incremental
   backups to minimize network traffic and disk usage.
   .
   simplesnap is designed to perfectly complement snapshotting tools,
   permitting rotating backups with arbitrary retention periods. It lets
   multiple machines back up a single target, lets one machine back up
   multiple targets, and keeps it all straight.
   .
   simplesnap is easy; there is no configuration file needed. One ZFS property
   is available to exclude datasets/filesystems. ZFS datasets are
   automatically discovered on machines being backed up.
   .
   simplesnap is robust in the face of interrupted transfers, and needs little
   help to keep running.
   .
   Unlike many similar tools, simplesnap does not require full root access to
   the machines being backed up. It runs only a small wrapper as root, and the
   wrapper has only three commands it implements.
  Homepage: https://github.com/jgoerzen/simplesnap

craig

-- 
craig sanders <cas@taz.net.au>

Craig Sanders via luv-main <luv-main@luv.asn.au> writes:
On Mon, Dec 21, 2015 at 11:25:29AM +1100, Trent W. Buck via luv-main wrote:
One thing rsnapshot does reasonably well is faking multiple tape rotations within the snapshot set. e.g. you say "1 yearly, 2 monthlies, and 7 dailies", and it works out which snapshots to expire. I don't know how to do that in btrfs/zfs land.
personally, i rolled my own simple snapshot rotation script for zfs,
Yeah, my implied caveat above was "without each sysadmin having to write the wheel from scratch", since it's a flipping obvious task. I've sat down to write a generic function for this[0] in the past, but gotten distracted before getting something coherent.

[0] as in, you feed it a list of dates and a rotation policy, and it spits out the list of dates to delete.
but One Of These Days i'll probably switch to either zfsnap [...] BTW, see also simplesnap: [...]
Cool, I see Goerzen is still trucking in that space :-)

On Thu, Dec 17, 2015 at 7:42 AM, Craig Sanders via luv-main < luv-main@luv.asn.au> wrote:
On Thu, Dec 17, 2015 at 01:09:46AM +0700, Robert Parker via luv-main wrote:
Thanks Rick, I actually rsync everything to a local external drive daily

Well, I hope you are not doing it with the --delete option in place, because if you are it will faithfully remove from your backup set everything that has gone missing from your source drive.
more importantly, using rsync's --delete option won't leave cruft from uninstalled packages and other deleted files strewn all over your filesystem.
i made the mistake of forgetting to use --delete on an rsync transfer of one of my systems' OS disk to a new disk once, didn't discover it until after i'd made the final swap to the new disk. left me with an enormous mess that took over a year of gradual cleanups plus a final concerted effort involving find and several custom scripts to process /var/lib/dpkg/info/* files to tidy up the mess. even now i'm not 100% sure i've got it all.
and yes, all that cruft did cause numerous problems. inevitable, really, with extra crap like partial packages, obsolete libs and binaries.
don't try this at home, it'll suck.
Too true Craig. I do my regular backup without the --delete option, but then log what would have been deleted otherwise.

From time to time I follow that by viewing the log with the cache crap filtered out, and when I am happy that the potential deletions are what I intended, I run rsync using --delete to get rid of the nonsense.

Bob
craig
-- 
craig sanders <cas@taz.net.au>
-- The Bundys, Cliven, Ted and Al. Great guys to look up to.

On 17/12/15 20:00, Robert Parker via luv-main wrote:
Too true Craig.
I do my regular backup without the --delete option but then log what would have been deleted otherwise. From time to time I follow that by viewing the log with the cache crap filtered out and when I am happy that the potential deletions are what I intended I run rsync using --delete to get rid of the nonsense.
The problem with this approach is that if you need to do a full restore you will have to restore the cruft and have no way of distinguishing it from the non-cruft.

These are the options I use:

  # copy changed files
  nice rsync -avgSH --delete --delete-excluded --ignore-errors --delete-after \
      --exclude-from="/sbin/dzexlist" \
      / "$dest_dir"bu | tee -a /var/log/dzbu/$logfname "$dest_dir"log/$logfname

Anyone know where I can buy a >= 2TB disk in the Elwood area?

On Fri, 18 Dec 2015 08:58:56 PM David Zuccaro via luv-main wrote:
Anyone know where I can buy a >= 2TB disk in the Elwood area?
http://www.msy.com.au/stores

MSY has a store in Malvern. MSY generally has low prices, good service, and a reasonable stock of everything that's common.

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

On Fri, Dec 18, 2015 at 09:17:49PM +1100, Russell Coker via luv-main wrote:
On Fri, 18 Dec 2015 08:58:56 PM David Zuccaro via luv-main wrote:
Anyone know where I can buy a >= 2TB disk in the Elwood area?
MSY has a store in Malvern. MSY generally has low prices, good service, and a reasonable stock of everything that's common.
good service if what you want to do is just buy stuff with no fuss, and without needing any advice or to ask anything but the most basic questions (like "how much?"). they're great if you know what you want and want it at a good price, but IME they're not in the least bit focussed on answering questions.

this is good for me because i'd rather not pay extra to have "service" staff on hand who know only a tiny fraction of what i know, or am capable of researching online, anyway.

craig

-- 
craig sanders <cas@taz.net.au>

On Fri, Dec 18, 2015 at 4:58 PM, David Zuccaro < david.zuccaro@optusnet.com.au> wrote:
On 17/12/15 20:00, Robert Parker via luv-main wrote:
Too true Craig.
I do my regular backup without the --delete option but then log what would have been deleted otherwise. From time to time I follow that by viewing the log with the cache crap filtered out and when I am happy that the potential deletions are what I intended I run rsync using --delete to get rid of the nonsense.
The problem with this approach is that if you need to do a full restore
you will have to restore the cruft and have no way of distinguishing it from the non-cruft.
I only consider things like browser cache files to be cruft for the purpose of viewing my logs. I filter that out when I view the logs, so all I see is what I have deleted, not what the applications did. In the event of a restore I want my browser history back. If there are a few files there that should not be, because I have not used --delete in a few days, then the browsers will just delete them again; it's nothing I need to worry about.

I do use an exclude file (--exclude-from) for stuff that I really don't want.

Bob

On 18 Dec 2015 8:59 pm, "David Zuccaro via luv-main" <luv-main@luv.asn.au> wrote:
On 17/12/15 20:00, Robert Parker via luv-main wrote:
Too true Craig.
I do my regular backup without the --delete option but then log what
would have been deleted otherwise.
From time to time I follow that by viewing the log with the cache crap filtered out and when I am happy that the potential deletions are what I intended I run rsync using --delete to get rid of the nonsense.
The problem with this approach is that if you need to do a full restore you will have to restore the cruft and have no way of distinguishing it from the non-cruft.
These are the options I use:
nice rsync -avgSH --delete --delete-excluded --ignore-errors --delete-after --exclude-from="/sbin/dzexlist" \ / "$dest_dir"bu | tee -a /var/log/dzbu/$logfname "$dest_dir"log/$logfname # copy changed files
Anyone know where I can buy a >= 2TB disk in the Elwood area?
CentreCom at the bottom end of Glenhuntly Rd in Elsternwick is close.

David Zuccaro via luv-main <luv-main@luv.asn.au> writes:
nice rsync [...]
Suggest (from memory, untested):

  nice ionice -c3 chrt --idle 0 rsync \
      --rsh='ssh -oIPQoS=throughput' \
      --rsync-path='nice ionice -c3 chrt --idle 0 rsync' [...]

These basically translate to "let me run backups during the day without $boss complaining that his cattube videos are slow".

On Thu, 17 Dec 2015 01:17:12 AM David Zuccaro via luv-main wrote:
Hi, I'm running fsck on my root partition and getting lots of io errors as shown in the link.
What should I do? Is my disk dead or can it still be used?
https://www.gnu.org/software/ddrescue/

It is dying. You want to get the data off ASAP. Use ddrescue to copy it to a disk of equal size or larger, and then you can fsck. Don't run fsck when the disk is giving errors, that will probably make things worse.

Keep the system reasonably cool and copy the data quickly. You don't know how long the disk will last before it totally fails.

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
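For reference, a typical GNU ddrescue run along those lines might look like the sketch below; the device names and map-file name are assumptions, so double-check which disk is which before copying anything:

    # First pass: copy everything that reads cleanly, skipping bad areas quickly (-n),
    # from the failing disk to a same-size-or-larger one. -f allows writing to a device.
    ddrescue -f -n /dev/sdX /dev/sdY rescue.map

    # Second pass: go back and retry the remaining bad areas up to three times.
    ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map

Then run fsck against the copy on /dev/sdY, not against the failing original.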

David Zuccaro via luv-main <luv-main@luv.asn.au> writes:
Hi, I'm running fsck on my root partition and getting lots of io errors as shown in the link.
What should I do? Is my disk dead or can it still be used?
https://www.dropbox.com/s/d3m019akv70xihs/2015-12-17%2000.50.59.jpg?dl=0
The "{ DRDY ERR }" means the OS can't talk to the disk. While this is *probably* the hard disk dying, it could also be a faulty cable or motherboard. I would * check backups. Others have already discussed this in this thread. * do smart self-tests "short" and then "long". If either fail, replace the disk. * move disk to new computer & do fsck &c; does the fault follow the disk? If not, check other components (e.g. cable). * consider RAID1.

Quoting Trent W. Buck via luv-main (luv-main@luv.asn.au): [much of the usual excellent advice, ending with:]
* consider RAID1.
A second matching mass-storage device is normally such cheap insurance that, on any host more substantive than a laptop, I would say _insist_ on RAID1ing any directory trees you care about, and just RAID1ing everything if you can reasonably do so.

On Mon, 21 Dec 2015 12:39:37 PM Rick Moen via luv-main wrote:
A second matching mass-storage device is normally such cheap insurance that, on any host more substantive than a laptop, I would say insist on RAID1ing any directory trees you care about, and just RAID1ing everything if you can reasonably do so.
RAID-1 or ZFS with copies=2 on a laptop will save you from many situations of data loss or corruption.

I have my clients give me SATA disks that have bad sectors and are therefore unsuitable for use in servers. I make them BTRFS RAID-1 and they work fine for me as backup disks. Most such disks only ever have about 50 bad sectors, and RAID-1 covers that nicely.

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
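For illustration, the two setups mentioned above look roughly like this; the device names and the dataset name are assumptions:

    # BTRFS RAID-1 across two disks, mirroring both data (-d) and metadata (-m).
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

    # ZFS: store two copies of every block of a dataset, even on a single-disk pool.
    zfs set copies=2 tank/home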
participants (11)
- Adam Bolte
- Chris Samuel
- Colin Fee
- Craig Sanders
- David Zuccaro
- Erik Christiansen
- Joel W. Shea
- Rick Moen
- Robert Parker
- Russell Coker
- trentbuck@gmail.com