On 17/12/15 23:06, Erik Christiansen via luv-main wrote:
On 17.12.15 21:33, Russell Coker via luv-main wrote:
There are a variety of backup systems that start with rsync and manage
trees of links. It's not difficult to write your own: rsync the files,
run "cp -al" to make a copy with hard links, use today's date in the
directory name, and then delete backup directories that are too old.
I'm not grokking the benefit of doing the rsync _and_ a "cp -al". I just
include -aH in my rsync options, the -H to preserve hard links. Seems to
work.
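For anyone who wants to try it, a minimal sketch of the rsync + "cp -al"
rotation Russell describes might look like the below. The paths and the
30-day retention are placeholders, not anything we actually run:

    #!/bin/bash
    # Refresh a working copy, snapshot it with hard links, expire old snapshots.
    set -e
    SRC=/home/
    ROOT=/backups
    TODAY=$(date +%Y-%m-%d)
    CUTOFF=$(date -d '30 days ago' +%Y-%m-%d)   # GNU date

    mkdir -p "$ROOT"
    rsync -aH --delete "$SRC" "$ROOT/current/"  # update the working copy
    cp -al "$ROOT/current" "$ROOT/$TODAY"       # hard-linked snapshot named by date

    # Date-named directories sort lexically, so old ones can go by name.
    for d in "$ROOT"/20??-??-??; do
        if [ "$(basename "$d")" \< "$CUTOFF" ]; then
            rm -rf "$d"
        fi
    done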
At my workplace, we use this rsync option:
    --link-dest=DIR    hardlink to files in DIR when unchanged
where DIR would be the most recent successful backup directory.
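A run then looks something like this; the previous-backup lookup and the
paths are illustrative, not our actual layout:

    #!/bin/bash
    # Incremental backup: unchanged files become hard links into the last run.
    ROOT=/backups/somehost
    TODAY=$(date +%Y-%m-%d)
    LAST=$(ls -1d "$ROOT"/20??-??-?? 2>/dev/null | tail -n 1)   # empty on the first run

    rsync -aH --delete \
        ${LAST:+--link-dest="$LAST"} \
        root@somehost:/etc/ "$ROOT/$TODAY/"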
I wrote about 500 lines of Bash for my workplace years ago; it's
basically an rsync wrapper that reads the hosts and directories to back
up from a config file, and does some basic directory management to make
it easy to browse backups and see what you are restoring.
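Purely to illustrate the shape of such a wrapper (this is not abs's
actual config format or code), the idea is roughly:

    #!/bin/bash
    # Illustration only: read "host:path" pairs and rsync each into a dated tree.
    CONF=/etc/backup-hosts.conf   # e.g. lines like  web1:/etc  or  db1:/var/lib/mysql
    ROOT=/backups
    TODAY=$(date +%Y-%m-%d)

    while IFS=: read -r host path; do
        case "$host" in ''|'#'*) continue ;; esac   # skip blanks and comments
        dest="$ROOT/$host/$TODAY$path"
        mkdir -p "$dest"
        rsync -aH --delete "root@$host:$path/" "$dest/"
    done < "$CONF"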
Today, after reading this thread, I asked for and was granted permission
to make the git repo public. It's a bit hackish, but it has served us
well and might be worth a look.
https://github.com/sitepoint/abs
Since it backs up hosts on the network using rsync over SSH with pubkey
authentication, we use the rrsync wrapper script (typically installed as
/usr/share/doc/rsync/scripts/rrsync.gz in the rsync package), with
authorized_keys entries like:

    command="/usr/local/sbin/rrsync -ro /" ssh-rsa <key> <host comment>
and
    PermitRootLogin forced-commands-only

in /etc/ssh/sshd_config on the target hosts to improve security.
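With that in place the key can only run read-only rsync via the forced
command. A pull from the backup server then looks something like this
(the key path and hostname are illustrative):

    # rrsync on the remote end forces read-only rsync rooted at /,
    # so this key is useless for anything else.
    rsync -aH --delete \
        -e "ssh -i /root/.ssh/backup_key" \
        root@targethost:/etc/ /backups/targethost/etc/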
Cheers,
Adam