Re: ZFS compression - looking for space savings

Hi Toby, thanks for the answer.
> So... what's stopping you doing a quick test to find out what is best for you?
The appliance is not available yet, but I will get to it next week. Actually, I forgot one condition: it has only 1 GB of RAM. I installed PC-BSD (ZFS only) on a laptop with 1 GB of RAM before and it worked in general (with Fluxbox, no Gnome/KDE), so I hope it works for this appliance too (no GUI needed here). I can run a test on a machine with 32 GB of RAM and two Xeon processors, but that is of limited value, I think.

[For historical reasons - this problem is solved] The other issue was the "start": ZFS only compresses newly written data. Sending data via incremental snapshots from an uncompressed filesystem to a compressed one is therefore not the problem. But what if I already have a few hundred GB? "zfs receive" creates the target filesystem itself, so I cannot set compression on it beforehand. I figured out a way: create the parent filesystem with compression on, and the new filesystem inherits the value :-) I may need some smarts in a script to get the path names right.
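Something like this should do it, I think; the pool and dataset names here (tank, source/data) are just placeholders:

    # Parent with compression enabled; child datasets inherit the property.
    zfs create -o compression=on tank/backups
    # Initial full copy: "zfs receive" creates tank/backups/data, which
    # inherits compression=on, so the received blocks are compressed on write.
    zfs snapshot source/data@1
    zfs send source/data@1 | zfs receive tank/backups/data
    # Later incrementals land compressed as well.
    zfs snapshot source/data@2
    zfs send -i source/data@1 source/data@2 | zfs receive tank/backups/data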
> I ran a quick test using (non-ZFS) equivalents of various compression tools over a 2.0G filesystem image (i.e., hoping that represents a fair variety of binary and text files):
>
>     lz4        1.7s   221M
>     lz4 -6      10s   189M
>     gzip        25s   151M
>     bzip2       54s   135M
>     7z         147s   102M
>     xz         253s   103M
Thanks for that. I will see whether it matches ZFS's built-in compression.
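My rough plan for the comparison (the file and dataset names are made up): re-run a couple of the userland tools on the same image, then ask ZFS what it achieves on the dataset holding that data:

    # Userland sizes and times for reference.
    time lz4 -c fs.img > img.lz4
    time gzip -c fs.img > img.gz
    ls -lh fs.img img.lz4 img.gz
    # ZFS reports its own ratio on the compressed dataset.
    zfs get compression,compressratio tank/backups/data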
> So if space is really at a premium, you're better off using an archive tool to compress everything, rather than ZFS's built-in compression.
The fun comes if you can keep a bunch of older copies thanks to snapshots. At the moment the backup keeps six weekly full backups online (compressed), plus a year's worth of quarterly backups. I hope to push further back with snapshots.
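A minimal sketch of the rotation I have in mind; the dataset name is hypothetical, and it keeps the six newest weekly snapshots:

    # Take this week's snapshot.
    zfs snapshot tank/backups/data@weekly-$(date +%Y-%W)
    # List weekly snapshots oldest first, destroy all but the newest six.
    snaps=$(zfs list -H -t snapshot -o name -s creation tank/backups/data | grep '@weekly-')
    count=$(echo "$snaps" | wc -l)
    [ "$count" -gt 6 ] && echo "$snaps" | head -n $((count - 6)) | xargs -n 1 zfs destroy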
> But otherwise, gzip compresses better than lz4, at significantly slower speed.
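For what it's worth, that trade-off does not have to be pool-wide; it can be picked per dataset, something like this (names made up):

    zfs set compression=lz4    tank/live      # cheap and fast for hot data
    zfs set compression=gzip-9 tank/archive   # slower but tighter for cold data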
Agreed, and thanks,

Peter