
Does it make sense to compress log files nowadays?
BTRFS and ZFS have compression built in. That compression probably won't be as good as gzip on a large file, because the filesystems compress in smaller blocks, but it will still provide some benefit.
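You can get a feel for the block-size effect with plain gzip: compress a log-like file whole, then compress it in fixed-size chunks and add the chunk sizes up. This is only a rough sketch — real filesystems compress per extent with zlib/lzo/zstd rather than writing gzip files, and the 128K chunk size and sample data here are made up for the demo:

```shell
#!/bin/sh
# Compare whole-file gzip (what logrotate does) with per-128K-block
# compression (roughly what a compressing filesystem does per extent).
set -e
tmp=$(mktemp -d)

# Generate ~500KB of repetitive, log-like text (contents are invented).
i=1
while [ "$i" -le 5000 ]; do
    echo "Jan 01 00:00:00 host sshd[$i]: Accepted publickey for user from 192.0.2.$((i % 254 + 1))"
    i=$((i + 1))
done > "$tmp/sample.log"
orig=$(wc -c < "$tmp/sample.log")

# Whole-file compression.
whole=$(gzip -c "$tmp/sample.log" | wc -c)

# Per-block compression: split into 128K chunks, gzip each, sum the sizes.
split -b 131072 "$tmp/sample.log" "$tmp/chunk."
blocks=0
for c in "$tmp"/chunk.*; do
    blocks=$((blocks + $(gzip -c "$c" | wc -c)))
done

echo "original: $orig bytes, whole-file: $whole bytes, per-block total: $blocks bytes"
rm -rf "$tmp"
```

The per-block total comes out slightly larger because each chunk pays for its own header and starts with an empty dictionary; both are still far smaller than the original.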
On modern systems, writes are more of an issue. Rewriting every log file every night contributes to SSD wear for no real benefit: any system whose log files are an appreciable fraction of a 60G minimal SSD will probably be using ZFS or BTRFS on some big disks for its main storage anyway.
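If you decide the nightly rewrite isn't worth it, logrotate can be told to skip compression per log. A sketch — the path and surrounding options are illustrative, only `nocompress` is the point:

```
# /etc/logrotate.d/example  (hypothetical entry)
/var/log/example.log {
    weekly
    rotate 8
    nocompress      # rotate by rename only; skip the compress-and-rewrite pass
    missingok
    notifempty
}
```

The related `delaycompress` directive is a half-measure: it keeps the newest rotated log uncompressed (useful for programs that keep writing to the old file) while still compressing older ones.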
It seems to me that the default log compression has more problems than benefits with today's technology. It made a lot of sense when 1G was a big hard drive, but nowadays a 60G SSD is about as small as you can buy.
Consider virtual hosting, where disk size matters a bit more because you pay for what you use. To take the example of one of my spam-filtering mail relays, which sees around 500MB of email a day and on which I keep 365 days of logs: the logfiles take 380MB compressed but would take 2.3GB uncompressed. If they weren't compressed I'd have to buy more disk than the 10GB I have now.

If you're talking about desktop systems then I'm not the right person to answer, but around 90% of the Linux servers I maintain are virtual, with only as much disk as they need. I like to keep logfiles for more (sometimes much more) time than the default, so compression makes sense for me. Maybe a Linux desktop which rotates its logfiles away after a short time, and doesn't accumulate much in them anyway, would be better configured without logfile compression? I can't see that logfiles would contribute much to overall SSD wear on such a system, though.

How do SSDs fare with continually appended data anyway? I can't imagine it's that kind to the media. Do any filesystems allow deferring the write until you hit an erase-sized block, without deferring more important data? Obviously you risk losing data in the event of a crash, and that is precisely the time that logs are useful :)

James