
On Tue, 16 Jul 2013, Craig Sanders wrote:
On Mon, Jul 15, 2013 at 03:33:09PM +1000, Tim Connors wrote:
I find zfs obtuse, awkward, inflexible, prone to failure and unreliable.
you must be using a different zfs than the one i'm using because that description is the exact opposite in every respect of my experience with zfs.
I've got one for you; it just came in on the mailing list (someone got stale mounts when their kernel NFS server restarted): ZFS and the NFS kernel server are not that tightly integrated. When you do a 'zfs set sharenfs="foo,bar" pool/vol' or a 'zfs share pool/vol', the zfs tools just make a call to the NFS kernel server saying 'Hey, I want you to share this over NFS'. If the NFS kernel server is restarted, it unshares everything and only reads back whatever is in /etc/exports. This is actually expected, as NFS doesn't know anything about ZFS. Doing a 'zfs share -a' exports all your NFS/SMB shares again. When you don't use your system's native tools, or when someone tries to solve something at the wrong layer (zfs trying to mess around with NFS? That's almost as bad as the GUI at my previous workplace that tried to keep track of the state of something, changed that state through a different layer than usual, and then remembered the old state instead of querying it directly), you've got to expect problems. I get around this particular problem by just ignoring ZFS's NFS settings. I have no idea what value they're meant to add.
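For what it's worth, a rough sketch of both approaches (pool name and network below are invented):

    # option 1: keep using sharenfs, and re-share after every NFS restart
    zfs set sharenfs=on tank/export
    service nfs-kernel-server restart    # the sharenfs exports vanish here
    zfs share -a                         # puts them back
    exportfs -v                          # confirm what the kernel is actually exporting

    # option 2 (what I do): turn sharenfs off and manage /etc/exports by hand
    zfs set sharenfs=off tank/export
    echo '/tank/export 192.168.1.0/24(rw,no_subtree_check)' >> /etc/exports
    exportfs -ra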
One day when I got sick of the linux kernel's braindead virtual memory management, I tried to install debian kfreebsd, but gave up before I finished installation because not having lvm seemed so primitive. I was probably just trying to use the wrong tool for the job.
probably. the freebsd kernel has zfs built in, so zfs would be the right tool there.
Not when I tried it, mind you.
Does anyone use zfs's dedup in practice? Completely useless.
yes, people do. it's very heavily used on virtualisation servers, where there are numerous almost-identical copies of the same VM with minor variations.
Even then, it's hard to make it worthwhile: when you have 800TB of VMs deployed, you can't easily dedup that, because the dedup tables have to live in RAM to perform (although from memory, a TB of RAM only costs about $100K). For any VMs I've ever seen, the identical shared data isn't all that much (our templates are 40GB in size) compared to the half TB deployed on average per VM. Hardly seems worth all the pain.
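If anyone is tempted anyway, you can at least simulate it first and do the sums on the dedup table (pool name invented; the ~320 bytes per unique block is the figure usually quoted, not gospel):

    # dry-run dedup on an existing pool without enabling anything
    zdb -S tank       # prints a simulated DDT histogram and an estimated dedup ratio

    # back-of-envelope dedup-table sizing at 128K records:
    #   1 TB / 128 KB                = ~8 million unique blocks
    #   8 million * ~320 bytes each  = ~2.5 GB of RAM per deduped TB
    # (and much worse with smaller blocks)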
it's also useful on backup servers where you end up with dozens or hundreds of copies of the same files (esp. if you're backing up entire systems, including OS)
On my home backup server, the backup software dedups at the file level (shared between VMs too, achieved by hardlinking files detected to be identical, comparing actual content rather than trusting hashes not to collide). It does a very good job according to its own stats. Block-level dedup is a bit overkill unless you're backing up raw VM snapshot images.
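The basic trick, stripped down to a single pair of files (paths invented; the real software obviously keeps an index rather than comparing things pairwise):

    # two trees, one identical file: replace the copy with a hardlink
    cmp --silent vm1/usr/bin/perl vm2/usr/bin/perl &&
        ln -f vm1/usr/bin/perl vm2/usr/bin/perl
    ls -li vm1/usr/bin/perl vm2/usr/bin/perl    # same inode number afterwards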
yep, that fits my usage pattern too...i don't have that much duplicate data. i'm probably wasting less than 200GB or so on backups of linux systems in my backup pool including snapshots, so it's not worth it to me to enable de-duping.
other people have different requirements.
I acknowledge that there are some uninteresting systems out there that are massively duplicated SOEs with bugger-all storage; those might fit that pattern. And yet I believe the VDI appliances they're trying to roll out at work *still* won't be backed by ZFS with dedup.
i've found 16GB more than adequate to run 2 4TB zpools, a normal desktop with-the-lot (including firefox with dozens of windows and hundreds of tabs open at the same time) and numerous daemons (squid, apache, named, samba, and many others).
of course, a desktop system is a lot easier to upgrade than a laptop if and when 32GB becomes both affordable and essential.
Unfortunately, little Atom NASes seem to max out at 4GB.
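You can cap ZFS's ARC on a RAM-starved box, at the cost of cache hit rate; roughly like this on ZFS-on-Linux (the 1GB figure is just an example):

    # /etc/modprobe.d/zfs.conf: cap the ARC at 1GB
    options zfs zfs_arc_max=1073741824

    # see what the ARC is actually using right now
    grep '^size' /proc/spl/kstat/zfs/arcstats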
Au contraire. If you use lvresize habitually, one day you're going to accidentally shrink your LV instead of expanding it, and the filesystem on top of it will then start accessing beyond the end of the device, with predictably catastrophic results. Use lvextend prior to resize2fs, and a resize2fs shrink prior to lvreduce, and you'll be right.
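i.e. roughly (VG/LV names and sizes invented):

    # growing: extend the LV first, then grow the filesystem into the new space
    lvextend -L +10G /dev/vg0/data
    resize2fs /dev/vg0/data

    # shrinking: the reverse order; shrink the filesystem first, then the LV
    resize2fs /dev/vg0/data 50G
    lvreduce -L 50G /dev/vg0/data

    # (newer LVM will do the ordering for you: lvresize --resizefs -L 50G /dev/vg0/data)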
the risk of typing '-' rather than '+' does not scare me all that much.
Like Matthew said, the issue is when you provide an absolute size and get the units wrong. Whoops, just shrank it to a 1000th of its original size!
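Relative sizes at least bound the damage from a typo; absolute ones don't (names invented again):

    lvresize -L +10G /dev/vg0/data   # typo '-' for '+' here costs you 10G: bad, but bounded
    lvresize -L 500M /dev/vg0/data   # meant 500G: everything past the first 500M is gone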
i tend to check and double-check potentially dangerous command lines before i hit enter, anyway.
No <up>-<up>-<enter> sudo reboots? ;P -- Tim Connors