
[1] Re: de-duping - not worth the bother, IMO. With the amount of RAM
needed for it to be viable (estimates range from 1GB to 5GB of RAM per TB of disk, just to keep the dedup table cached in ARC/L2ARC), it's cheaper just to buy more and bigger disks.
I'm actually about to test that, enabling dedup for a specific backup section of my filesystem that may have a few duplicates. I mainly want to see exactly how much RAM it ends up using, and how much the dedup actually helps. My plan is to create a new filesystem for that section, turn dedup on for it, and then move the files in (dedup only applies to data written after it's enabled, so the order matters) - roughly the sequence sketched below. I presume this is the proper method? That section is around 400GB, and the machine in question currently has 16GB of RAM, so it'll be interesting to see how things go with dedup on.
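For reference, here's roughly what I'm planning; the pool and dataset names are just examples ('tank' stands in for my actual pool):

    # create a dedicated dataset for the backup area
    zfs create tank/backup

    # enable dedup before copying anything in -- it only affects new writes
    zfs set dedup=on tank/backup

    # ...move the ~400GB of files into /tank/backup...

    # overall dedup ratio for the pool
    zpool list tank

    # detailed DDT statistics, including on-disk and in-core sizes
    zdb -DD tank

If the high-end rule of thumb of 5GB of RAM per TB holds, 400GB works out to roughly 2GB of dedup table, which should fit comfortably in 16GB.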
ZFS compression, OTOH, is definitely worthwhile if most of the data you're storing isn't already compressed.
Again, this will be limited to certain directories. Is the best practice to create separate filesystems for those areas - something like the sketch below?
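For concreteness, here's what I have in mind, since compression (like dedup) is a per-dataset property; the dataset names are again just examples:

    # compression is set per dataset, so each area gets its own filesystem
    zfs create -o compression=on tank/docs

    # it can also be enabled on an existing dataset, but only
    # newly written data gets compressed
    zfs set compression=on tank/logs

    # check how well it's paying off
    zfs get compressratio tank/docs tank/logs

/ Brett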