
I'd say don't. It's nowhere near ready for production yet, and don't plan on doing anything else with the server. zfsonlinux still isn't integrated with the Linux page cache or memory management (due for 0.7 maybe, if we're lucky, but I see no one working on the code). It does fragment memory, and even if you limit it to half your RAM or even far smaller, it will still use more than you allow it to, and crash your machine after a couple of weeks of uptime (mind you, it's never lost data for me, and now that I have a hardware watchdog, I just put up with losing my NFS shares for tens of minutes every few weeks as it detects a soft lockup and reboots).

It also goes on frequent go-slows, stopworks and strikes, before marching down Spring St (seriously, it goes into complete meltdown above about 92% disk usage, and even when nowhere near full it will frequently get bogged down so much that NFS autofs mounts time out before successfully mounting a share). rsync, hardlink and metadata-heavy workloads are particularly bad for it.

Snapshots don't give you anything at all over LVM, and just introduce a different set of commands. zfs send/recv is seriously overrated compared to dd (the data is fragmented enough on read, because of COW, that incremental sends are frequently as slow as just sending the whole device in the first place). raidz is seriously inflexible compared to mdadm. It doesn't yet balance IO across different-speed devices, though a patch has just been committed to HEAD (so it might make it out to 0.6.2, but I haven't yet tested it and have serious doubts).

Think of zvols as LVM logical volumes, with different syntax (and invented by people who have never heard of LVM). You can't yet swap to zvols without having hard lockups every day or so (after dipping into about 200MB of a swap device), whereas I've never had swapping to LVM cause a problem (and we have that set up for hundreds of VMs here at $ORK). In short, overrated.
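For anyone who goes ahead anyway: the usual (partial) mitigation for the memory behaviour above is to cap the ARC via a module parameter. A sketch, assuming a Debian-style modprobe.d layout; zfs_arc_max is the real zfsonlinux tunable, but the 4 GiB figure is just an example, and as noted above the cap is not strictly honoured:

```shell
# Cap the ZFS ARC at 4 GiB (value in bytes) -- pick a size for your RAM.
# This takes effect at module load time:
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

# To change it on a running system without reloading the module:
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
```

In practice the ARC is only part of ZFS's memory footprint, which is why (per the above) a cap alone doesn't prevent the machine from eventually running out.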
I wouldn't do it again if I didn't already have a seriously large (for the hardware) amount of files and hardlinks that would take about a month to migrate over to a new filesystem. ext4 on bcache on lvm (or vice versa) on mdadm sounds like a much more viable way to go in the future. bcache isn't mature yet, but given its simplicity I don't think it will be long before it is. Separation of mature layers is good, whereas zfs is a massive spaghetti of new code that hooks into too many layers and takes half an hour to build on my machine (but at least it's now integrated into dkms on Debian).

On Fri, 12 Jul 2013, Kevin wrote:
Make sure you have lots of RAM.
If you are using raidz or raidz2 you will need to ensure your vdevs are designed correctly from the start, as they cannot be changed in the future. Pools are cool; snapshots are good.
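To illustrate why the vdev layout has to be right from the start: a raidz vdev's width is fixed at zpool create time and can't be reshaped afterwards. A sketch with example device names (tank, sda-sdh are placeholders):

```shell
# A 4-disk raidz vdev -- its width is fixed forever at creation time.
zpool create tank raidz sda sdb sdc sdd

# You can ADD another whole vdev to the pool (striped alongside the
# first), but you cannot grow the existing raidz from 4 disks to 5:
zpool add tank raidz sde sdf sdg sdh

# Contrast mdadm, which can reshape an existing RAID5 array in place:
mdadm /dev/md0 --add /dev/sde
mdadm --grow /dev/md0 --raid-devices=5
```

That one-shot nature of raidz layout is the inflexibility Tim complains about above compared to mdadm.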
Check out btrfs.
On Fri, Jul 12, 2013 at 2:30 PM, Colin Fee <tfeccles@gmail.com> wrote:
On 12 July 2013 12:06, Colin Fee <tfeccles@gmail.com> wrote:
Last night after some updates on my media server, one of the disks in a mirror set failed a SMART check and was kicked out (fortunately I have good backups). It's failing on the bad sector count but I haven't done a deeper analysis yet.
However it's time to bite the bullet and do some upgrade work. I'm intending to replace all of the disks with larger ones to increase the storage size and when I do, use ZFS.
The box is based around a Gigabyte GA-880GM-USB3 mobo, with an AMD Phenom II X6 1055T CPU and buckets of RAM, and the OS on a 240GB SSD.
So I'm looking for a strategy re the implementation of ZFS. I can install up to 4 SATA disks onto the mobo (5 in total, with one slot used by the SSD).
I should add to that I've begun reading through necessary literature and sites, and a colleague at work who uses ZFS extensively on his home Mac server stuff has given me some intro videos to watch.
I guess what I'm asking is for pointers to the gotchas that most people overlook or those gems that people have gleaned from experience. -- Colin Fee tfeccles@gmail.com
_______________________________________________ luv-main mailing list luv-main@luv.asn.au http://lists.luv.asn.au/listinfo/luv-main
-- Tim Connors