
-----Original Message-----
From: Erik Christiansen via luv-main <luv-main@luv.asn.au>
To: luv-main@luv.asn.au
Sent: Thu, 17 Dec 2015 11:06 pm
Subject: Backups with rsync [Was: Is my root partition dying?]
On 17.12.15 21:33, Russell Coker via luv-main wrote:
There are a variety of backup systems that start with rsync and manage trees of links. It's not difficult to write your own: rsync the files, run "cp -al" to make a hard-linked copy named with today's date, then delete backup directories that are too old.
I'm not grokking the benefit of doing the rsync _and_ a "cp -al". I just include -aH in my rsync options, the -H to preserve hard links. Seems to work.
It's almost enough to make one wonder why it's a little bit fiddly to make rsync "just shuddup & copy the listed bits of the filesystem _as_is_, so they can be restored unaltered."
What I'm doing for backups at the moment is to rsync to files on a BTRFS filesystem and then create a snapshot. If I want to retrieve a file that was deleted then I just copy it from a suitably old snapshot.
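For the record, that rsync-plus-snapshot cycle comes down to two commands. The subvolume layout below is made up, and it needs root and an actual BTRFS filesystem, so treat it as illustrative:

```shell
# Hypothetical layout: /backup is a BTRFS filesystem with a "live" subvolume.
rsync -aH --delete /home/user/ /backup/live/
# read-only snapshot named for today; shares unchanged blocks with "live"
btrfs subvolume snapshot -r /backup/live "/backup/snap-$(date +%F)"
# retrieving a deleted file later is just a copy out of an old snapshot:
# cp /backup/snap-2015-12-01/some/file /home/user/some/file
```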
A bit more basic here. Just an rsync -Hauv to one of several flash sticks, then a diff -qr to confirm the copy's OK and show any deletions which should be done, usually 0-3 per backup. But then my precious data is diminutive by most standards. Most important is that the sticks come with me off-site.
rsnapshot, rdiff-backup and duplicity are all good alternatives, with different advantages. The approaches being discussed here tend to approximate rsnapshot. In fact there's a succinct description of what rsnapshot does here: http://serverfault.com/questions/136861/which-is-best-for-backups-rsync-vs-r... . Basically there's no reason why you can't do it as well yourself, but there's probably a higher chance of making a mistake.

Where rdiff-backup comes into its own is when a file changes frequently, e.g. the one where you keep the latest dump of your database. With a hard-link-based approach you wind up keeping a full copy of each day's dump, even though the changes might be small. With rdiff-backup you only have to store a compressed diff each day.

Duplicity is less efficient, but it handles keeping everything on the remote system encrypted, and can back up to remote systems where you can't run any sort of remote software agent (like rsync). So you can back up to FTP or Amazon's S3. The cost of traffic has come down enough to make backing up to S3 quite reasonable, so long as you can tolerate a long recovery time.

I also find backup-ninja useful for scheduling backups.

Andrew
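By way of illustration, the basic invocations look roughly like this. The hosts, bucket and paths are made up, and URL syntax varies between duplicity versions, so check each tool's manual:

```shell
# rdiff-backup: newest tree stays directly browsable, older versions are
# kept as compressed reverse diffs inside the repository
rdiff-backup /var/backups/dumps /srv/rdiff-repo
# restore the dump as it was three days ago (hypothetical paths)
rdiff-backup -r 3D /srv/rdiff-repo/db.sql /tmp/db.sql.3days

# duplicity: GPG-encrypted volumes pushed to a dumb remote, e.g. FTP or S3
duplicity /var/backups/dumps ftp://user@backuphost/dumps
duplicity /var/backups/dumps s3+http://mybucket/dumps
```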