
Hi Aryan,

I've been using rsync to migrate whole hard drives for years. In my opinion it's the only way to do it:

rsync -aPSvx --numeric-ids --delete <source-directory>/ <destination-host>:<destination-directory>/

If you want compression you can add a 'z'. ssh is used between machines when you specify the destination host. The trailing '/' are very important, or you will end up deleting the wrong thing when you run rsync multiple times.

I use vservers (www.linux-vservers.org), which means I copy the files a few times, then shut down the virtual guest, run rsync once more, and start it up on the new machine.

Cheers
Mike
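PS: Here's a rough sketch of that sequence end to end. The guest name, paths and hostname are just placeholders for illustration, and the vserver start/stop commands assume the util-vserver tools; adjust to however you manage your guests.

# first pass(es) while the guest is still running; repeat until the deltas are small
rsync -aPSvx --numeric-ids --delete /vservers/guest1/ newhost:/vservers/guest1/

# stop the guest so nothing changes underneath, then do one final pass
vserver guest1 stop
rsync -aPSvx --numeric-ids --delete /vservers/guest1/ newhost:/vservers/guest1/

# start the guest on the new machine
ssh newhost vserver guest1 start

On 26/03/13 6:14 AM, Aryan Ameri wrote: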
What's the best way to copy a large directory tree (around 3TB in total) with a combination of large and small files? The files currently reside on my NAS, which is on my LAN (connected via gigabit ethernet) and mounted on my system as an NFS share. I would like to copy all files/directories to an external hard disk connected via USB.
I care about speed, but I also care about reliability: making sure that every file is copied, that all metadata is preserved, and that errors are handled gracefully. I've done some research, and currently I am thinking of using tar or rsync, or a combination of the two. Something like:
tar --ignore-failed-read -C $SRC -cpf - . | tar --ignore-failed-read -C $DEST -xpvf -
to copy everything initially, and then
rsync -ahSD --ignore-errors --force --delete --stats $SRC/ $DEST/
to check everything with rsync.
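I was also thinking of a final verification pass along these lines (just a sketch; --checksum rereads everything on both sides so it's slow, but it compares actual file contents rather than just size and mtime, and --dry-run means it only reports differences without changing anything):

rsync -ahSD --checksum --dry-run --itemize-changes $SRC/ $DEST/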
What do you guys think about this? Am I missing something? Are there better tools for this? Or other useful options for tar and rsync that I am missing?
Cheers
--
Aryan