
On Sat, 13 Jul 2013, Craig Sanders <cas@taz.net.au> wrote:
Firstly, plan what you are doing, especially regarding boot.
Do you want to have /boot be a RAID-1 across all 4 of the disks?
Not a good idea with ZFS. Don't give partitions to ZFS; give it entire disks.
From the zfsonlinux FAQ:
http://zfsonlinux.org/faq.html#PerformanceConsideration
"Create your pool using whole disks: When running zpool create use whole disk names. This will allow ZFS to automatically partition the disk to ensure correct alignment. It will also improve interoperability with other ZFS implementations which honor the wholedisk property."
Who's going to transfer a zpool of disks from a Linux box to a *BSD or Solaris system? Almost no-one. If Solaris is even a consideration then just use it right from the start; ZFS is going to work better on Solaris anyway. If you have other reasons for choosing the OS (such as Linux being better for pretty much everything other than ZFS) then you're probably not going to change.
Don't bother with ZIL or L2ARC; most home use has no need for more performance than modern hard drives can provide, and it's best to avoid the complexity.
If he's got an SSD then partitioning it to give some L2ARC and ZIL is easy enough, and both of them will provide noticeable benefits even for home use.
It takes a little bit of advance planning to set up (i.e. partitioning the SSD and then issuing 'zpool add' commands), but it's set-and-forget and doesn't add any ongoing maintenance complexity.
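Something like this, as a sketch (assuming a pool named 'tank' and an SSD already partitioned into /dev/sdx1 and /dev/sdx2; all of those names are hypothetical):

  # small partition as a separate log device (ZIL)
  zpool add tank log /dev/sdx1
  # larger partition as a cache device (L2ARC)
  zpool add tank cache /dev/sdx2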
But it does involve more data transfer. Modern SSDs shouldn't wear out, but I'm not so keen on testing that theory. For a system with a single SSD you will probably have something important on it. Using it for nothing but ZIL/L2ARC might be a good option, but also using it for boot probably wouldn't be.
On Fri, 12 Jul 2013, Craig Sanders <cas@taz.net.au> wrote:
0. zfsonlinux is pretty easy to work with, easy to learn and to use.
Actually it's a massive PITA compared to every filesystem that most Linux users have ever used.
Yeah, well, it's a bit more complicated than mkfs, but a *lot* less complicated than mdadm and lvm.
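e.g. compare a minimal mirror with one filesystem on it under each stack (a sketch, with made-up device and volume names):

  # md + lvm + ext4:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 100G -n home vg0
  mkfs.ext4 /dev/vg0/home

  # zfs:
  zpool create tank mirror /dev/sda /dev/sdb
  zfs create tank/home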
I doubt that claim. It's very difficult to compare complexity, but the layered design of mdadm and lvm makes it easier to determine what's going on IMHO.
And gaining the benefits of sub-volumes or logical volumes of any kind is going to add some management complexity whether you use btrfs(8), zfs(8), or (worse) lvcreate/lvextend/lvresize/lvwhatever.
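e.g. growing a filesystem under each (again a sketch with hypothetical names):

  # lvm + ext4: extend the LV, then resize the filesystem inside it
  lvextend -L +10G /dev/vg0/home
  resize2fs /dev/vg0/home

  # zfs: datasets draw from the shared pool; if you set a quota, just raise it
  zfs set quota=110G tank/home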
It's true that some degree of complexity is inherent in solving more complex problems. That doesn't change the fact that it's difficult to work with.

--
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/