
On Mon, Aug 08, 2016 at 02:05:47PM -0400, Robin Humble wrote:
> I've 8G ram which should be heaps. limiting l2arc to 1G didn't help either.
l2arc or arc? if l2arc, 1G isn't really worth bothering with and may even hurt performance.
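
if you want to confirm which of the two is actually eating RAM, the counters are in /proc/spl/kstat/zfs/arcstats on ZoL. roughly like this (field names from memory, so check against your version):

    # current ARC size vs its min/max targets (bytes)
    awk '$1 == "size" || $1 == "c_min" || $1 == "c_max"' /proc/spl/kstat/zfs/arcstats

    # L2ARC size and hit/miss counters
    awk '$1 ~ /^l2_(size|hits|misses)$/' /proc/spl/kstat/zfs/arcstats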
> actually, come to think of it, I could get logs out if ZFS locks up via rsyslog to something lan/cloudy. I'll try that next time.
yep, that or a serial console should do it. unless the kernel locks up completely before it can get out a log packet or output to tty :(
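
for what it's worth, forwarding everything off-box with rsyslog is a one-liner. something like this in /etc/rsyslog.d/remote.conf ('loghost' is a placeholder for whatever lan/cloud box you pick):

    # /etc/rsyslog.d/remote.conf
    # single @ = UDP (more likely to get the last few messages out of a
    # dying box), @@ = TCP (reliable, but needs a working connection)
    *.* @loghost:514

then restart rsyslog to pick it up.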
> BTW does btrfs still have issues when the filesystem fills? does ZFS?
In my experience, ZFS performance starts to suck when you get over 80% full, and really sucks at 90+%. don't do that. that was on raidz (which is part of what inspired converting my backup pool from 4x1TB raidz to 4x4TB mirrored pairs). I haven't yet got over 80% full with zfs mirrors, so I don't know if it's as bad there.
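
easy enough to keep an eye on with zpool list (the 80/90% figures above are just my rule of thumb, not hard limits):

    # CAP is the percentage used; try to keep it under ~80%
    zpool list -o name,size,allocated,free,capacity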
> on my intel SSD, ZFS is noticeably slower than ext4. part of it's because of ZFS's poor integration with linux's virtual memory system and both sets of caches clearly fighting each other,
have you tried setting zfs_arc_min = zfs_arc_max? that should stop ARC from releasing memory for linux buffers to use.
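
e.g. to pin the ARC at 4G on your 8G box (4G is just an illustrative number, tune to taste), either in /etc/modprobe.d/zfs.conf for the next boot or via /sys on a running system:

    # /etc/modprobe.d/zfs.conf -- takes effect when the module loads
    options zfs zfs_arc_min=4294967296 zfs_arc_max=4294967296

    # or on the fly (values in bytes, 4294967296 = 4GiB)
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_min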
> but presumably it's slower 'cos it has more features (eg. checksum, compression) too.
compression usually speeds up disk access rather than slowing it down, so that's unlikely to be the culprit. The most likely cause is that ext4 has excellent SSD TRIM support but ZFS on linux doesn't yet. There's a patch that's (finally!) going to be merged "soon": https://github.com/zfsonlinux/zfs/pull/3656

there used to be another ZoL TRIM patch some time ago, but it was scrapped to avoid duplicating effort with illumos/freebsd.

interestingly, there's also a pull request for compressing the arc and l2arc: https://github.com/zfsonlinux/zfs/pull/4768

depending on how compressible your data is, that should help a lot on low-memory systems.

craig

--
craig sanders <cas@taz.net.au>