
On Thu, May 24, 2018 at 03:38:15PM +1000, Paul van den Bergen wrote:
> I currently take the approach that unless I have specific IO needs for a
> volume, I will work with one partition for OS and data, as it is the most
> efficient use of disk space.
This is true for standard partitioning, or LVM logical volumes. It's not true for either btrfs or zfs - disk usage efficiency for them is completely unaffected by using multiple sub-volumes/datasets.

With partitions or LVs, you have to decide how big you want them at the time you create them. Changing their size is a moderately complicated task - not terribly difficult once you know how to do it, but it does require care and attention to detail to ensure you don't screw it up. And depending on the filesystem the partition or LV is formatted with, you may be restricted to only growing it, never shrinking it (which makes, e.g., shrinking /home to grow / even more of a PITA).

With sub-volumes on btrfs and datasets on zfs, they just share space on the entire pool. Unless you set entirely optional quotas or reservations, you will never have to resize anything. And if you do set a quota or reservation, it's trivially easy and risk-free to change at any time...they're "soft" limits, not hard.

I used to do one big partition for everything - same as you, for the same reason. Now I use zfs datasets so that I can enable different attributes (like compression type, acl types, quotas, recordsize, etc.) for specific needs - e.g. mysql and postgres perform better if their files are stored on a dataset where the recordsize is 8K rather than the ZFS default of 128K. And systemd's journald complains if it can't use posix acls, so I'm getting into the habit of setting 'acltype=posixacl' and 'xattr=sa' on /var/log for my zfs machines - and using gzip rather than lz4 for /var/log too. Videos, music, and deb files are already compressed, so their datasets have 'compression=off'.

Having /home and /var and other directories separated from / is useful - but in the old days of fixed partition sizes, it just wasn't worth the hassle, or the risk of running out of space on one partition while there's plenty available on others. Now it's no hassle or risk at all.
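As an illustration of the per-dataset tuning described above - the pool and dataset names here are made up for the example, and only the properties come from the text:

```shell
# Hypothetical pool 'tank' - substitute your own pool/dataset names.

# 8K recordsize for database files (vs the zfs default of 128K):
zfs create -o recordsize=8K tank/postgres

# posix acls + efficient xattrs for journald, and stronger (gzip)
# compression for highly compressible log text:
zfs create -o acltype=posixacl -o xattr=sa -o compression=gzip tank/var-log

# videos/music/debs are already compressed, so skip compression:
zfs create -o compression=off tank/videos

# quotas are optional "soft" limits and can be changed at any time:
zfs set quota=500G tank/videos
```

Unlike resizing a partition or LV, every one of these properties can be changed later with `zfs set`, with no downtime and no risk to the data.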
> Synology takes the first slice (~2-3GB) of every disk in the device and makes
> a RAID 1 volume for the operating system, then does the same with the second
> slice to make a swap partition. You can lose all but one disk and still have
> a bootable working machine. The rest of the disk is available to make volumes
> out of.
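A rough sketch of that same layout on a stock linux box with mdadm - the device names and disk count are illustrative, not from the original:

```shell
# First (small) partition on every disk -> RAID1 for the OS,
# so the system stays bootable as long as any one disk survives:
mdadm --create /dev/md0 --level=1 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Second partition on every disk -> mirrored swap:
mdadm --create /dev/md1 --level=1 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
mkswap /dev/md1

# The remaining large partition on each disk is left free
# for data volumes (or a zfs pool).
```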
Yep, this is a good idea. It's similar to what I do on all my machines.

craig

-- 
craig sanders <cas@taz.net.au>

Is zfs in kernel space yet? Or still user land only?

I'd definitely use zfs in BSD or solaris without hesitation over LVM. Not sure about Mac - I'm not familiar with the native FS for that space at all - though I have no doubt one could install zfs without too much hassle.

On 24 May 2018 at 16:48, Craig Sanders via luv-main <luv-main@luv.asn.au> wrote:
> [...]
-- Dr Paul van den Bergen

On Thu, May 24, 2018 at 05:51:11PM +1000, Paul van den Bergen wrote:
> Is zfs in kernel space yet? Or still user land only?
Kernel. I don't think anyone has used the ZFS FUSE module for anything real for years - zfsonlinux (ZoL) has been around since 2008, and I've been using it since 2010. http://zfsonlinux.org/

The one minor hassle is that it isn't, and probably never will be(*), in the mainline linux kernel. That means you have to compile and install the kernel module. This is terribly, terribly difficult:

    apt-get install zfs-dkms

Actually, the full set of packages you'd want to install on debian or ubuntu is:

    apt-get install zfs-initramfs zfs-zed zfsnap zfsutils-linux spl spl-dkms zfs-dkms

* spl & spl-dkms are the Solaris Porting Layer, which enables zfs-dkms to compile, link, and work with the linux kernel.

* zfsutils-linux contains the zpool, zfs, etc. commands.

* zfs-initramfs adds zfs support to the initramfs so that pools can be imported in the initrd. It also enables rootfs on zfs. Alternatively, use zfs-dracut instead of zfs-initramfs if you use dracut.

* zfs-zed monitors zfs/zpool events and emails you an alert if there's any problem.

* zfsnap is a nice, very flexible snapshot scheduling program (e.g. automated creation and deletion of hourly, daily, weekly, monthly etc. snapshots). It works well with the simplesnap package for backing up snapshots to another pool (on the same machine, or over the network to another machine running zfs).

BTW, I would strongly advise using 'apt-mark hold' to put the zfs packages AND your linux-image-* and linux-headers-* packages on hold. spl & zfs often need to be tweaked for new kernel releases, and it's fairly common for there to be a few days or even weeks between the time a new kernel version is packaged for debian and the time the spl-dkms & zfs-dkms packages are updated to match.

Also, I don't think it's a good idea to just upgrade the zfs modules along with any regular 'apt-get upgrade'...IMO, something as critically important as a filesystem should only be upgraded when you need/want to upgrade it.
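The hold trick above looks like this in practice. The zfs package names are the ones listed above; the kernel package names are an assumption - they vary by architecture and release:

```shell
# install the zfs stack (Debian/Ubuntu, package names as above):
apt-get install zfs-initramfs zfs-zed zfsnap zfsutils-linux spl spl-dkms zfs-dkms

# pin the filesystem and kernel packages so a routine upgrade can't
# pull in a kernel the spl/zfs dkms modules don't build against yet:
apt-mark hold spl spl-dkms zfs-dkms zfsutils-linux
apt-mark hold linux-image-amd64 linux-headers-amd64   # names vary by arch/release
```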
Unhold them when you're ready to upgrade, perform the upgrade, then hold the packages again.

I do the same thing with the proprietary nvidia driver, nvidia-kernel-dkms. Odd things can happen to X when it's running on one version of nvidia.ko but the underlying module has been upgraded. By "odd" I mean that X will continue running, but some programs (e.g. the linux steam client) will refuse to start until you reboot with the new driver. Mostly I just don't want to have to close down all my browser windows and terminals and tmux sessions etc. and reboot until I'm ready to do so.

There are a few other packages I also hold so that they only upgrade when I want them to - postgresql, for example, and firefox and chromium.

(*) Unless Oracle re-licenses(**) it as BSD or something else that's GPL-compatible - which seems very unlikely.

(**) ZFS's license is Sun's CDDL. This is a free license by any definition, including the FSF's. It just happens to be incompatible with the GPL, so you can't distribute binaries containing both GPL & CDDL code. There's no problem with compiling or using such combined binaries yourself, though, and no problem with distributing helper scripts that automate the process of compiling or linking such binaries (like the zfs-dkms package does).
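The unhold/upgrade/re-hold cycle can be sketched as follows (same illustrative package names as before):

```shell
# when you're ready to move to a new kernel + matching zfs modules:
apt-mark unhold spl spl-dkms zfs-dkms zfsutils-linux \
    linux-image-amd64 linux-headers-amd64
apt-get update && apt-get upgrade

# then pin everything again until the next deliberate upgrade:
apt-mark hold spl spl-dkms zfs-dkms zfsutils-linux \
    linux-image-amd64 linux-headers-amd64

# 'apt-mark showhold' lists whatever is currently held
apt-mark showhold
```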
> I'd definitely use zfs in BSD or solaris without hesitation over LVM.
There's no reason not to do so in Linux, either.
> Not sure about Mac - not familiar with the native FS for that space at all -
> though I have no doubt one could install zfs without too much hassle.
Dunno either. I heard that Apple were going to switch to ZFS, or at least offer it as an option, but then decided not to. That was quite a few years ago now and is only a vague half-memory.

I think there's at least one open source project bringing openzfs to the mac, similar to how zfsonlinux brings openzfs to linux...but I don't really care about macs, so I don't pay much attention to apple or mac news.

craig

-- 
craig sanders <cas@taz.net.au>
participants (2)
- Craig Sanders
- Paul van den Bergen