
Russell Coker <russell@coker.com.au> wrote:
On Thu, 23 May 2013, Robin Humble <rjh+luv@cita.utoronto.ca> wrote:
Nothing clever, just manual. Overly manual, in fact: every week or so I power on the backup box and run some rsyncs. It'd be possible to automate it (a script that runs on power-up) but I haven't bothered.
I have a similar strategy. The backup drive is actually a Btrfs file system created quite a long time ago, when I thought btrfsck (with the ability to correct errors) was just around the next corner on the development roadmap; that turned out to be an unduly optimistic assumption. In my partial defence, I was interested in the checksums, and possibly also the snapshots, and I've always run btrfsck to check the integrity of the file system after unmounting it following a backup. So if there's a major error, I should find out about it immediately after rsync and unmount have completed, not at restore time when it really matters. It is in fact a rather old Btrfs file system now; I can't remember the creation date, though. When was the last on-disk format change that required re-creating file systems?
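The backup-then-check sequence might be sketched like this; the mount point and device path are placeholders for the real backup drive, so the function is only defined, not run.

```shell
#!/bin/sh
# Sketch of the backup-then-fsck pass. /mnt/backup and /dev/sdb1 are
# placeholders for the real backup drive.
backup_and_check() {
    rsync -a --delete /home/ /mnt/backup/home/
    umount /mnt/backup
    # "btrfs check" is the modern spelling of btrfsck; it is read-only
    # by default, so a clean run means the backup should still be
    # readable when it matters.
    btrfs check /dev/sdb1
}
# Not invoked here: it needs the real drive and root privileges.
```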
I'm currently using Btrfs snapshots for that sort of thing. On some of my systems I have 100 snapshots taken at 15-minute intervals and another 50 or so taken daily. The 15-minute snapshots cover the most likely mistake, creating a file and then accidentally deleting it. The daily ones cover longer-term mistakes.
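The two cadences could be driven from cron along these lines; the /snapshots directory and the use of / as the snapshotted subvolume are assumptions, and note that % must be escaped in crontab entries.

```
# Hypothetical crontab entries for the two snapshot cadences.
# -r makes the snapshots read-only; % is escaped per crontab(5).
*/15 * * * *  btrfs subvolume snapshot -r / /snapshots/q-$(date +\%Y\%m\%d-\%H\%M)
0 2 * * *     btrfs subvolume snapshot -r / /snapshots/daily-$(date +\%Y\%m\%d)
```

Old snapshots would then be reclaimed periodically with `btrfs subvolume delete /snapshots/<name>`, keeping roughly the last 100 quarter-hourly and 50 daily ones.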
That's a convenient and thorough solution. In any directory containing work that I edit and want to keep, I maintain a Git repository. Running git init is easy enough, and a little discipline is all that's required to maintain a reasonable history. I also use etckeeper for the same purpose (again with Git as the underlying version control tool). For especially important files (e.g., my PhD thesis and papers that I've written), I can push the repository to a remote machine owned by a trustworthy friend.
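The per-directory habit amounts to something like the following; the directory and file names are invented for the demo, and the off-site remote URL in the comments is hypothetical.

```shell
#!/bin/sh
# Demo of the init-commit-push habit. Names are invented; the -c
# settings just keep the demo self-contained on a machine with no
# configured git identity.
set -e
work=$(mktemp -d)
cd "$work"
git init -q
echo 'Chapter 1' > thesis.txt
git add thesis.txt
git -c user.name=demo -c user.email=demo@example.org \
    commit -q -m 'First draft'
# For the important work, add a remote on a trusted machine and push
# (URL is hypothetical):
#   git remote add offsite friend.example.org:repos/thesis.git
#   git push offsite master
```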