
On Sat, 25 May 2013, Craig Sanders <cas@taz.net.au> wrote:
e.g. i have the following /etc/modprobe.d/zfs.conf file on my 16GB system....it's a desktop workstation as well as a ZFS fileserver, so I need to limit how much RAM zfs takes.
    # use minimum 1GB and maximum of 4GB RAM for ZFS ARC
    options zfs zfs_arc_min=1073741824 zfs_arc_max=4294967296
    options zfs zfs_arc_max=536870912

I just checked the system in question; it still has the above in the module configuration from my last tests. "free" reports that 9G of RAM is used as cache, so things seem to be getting cached anyway.
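If you want to see what the ARC is actually doing rather than inferring it from "free", ZFS on Linux exposes the live numbers under /proc and /sys. A quick check, assuming the stock module paths:

    # configured ARC cap, in bytes
    cat /sys/module/zfs/parameters/zfs_arc_max

    # actual ARC size and limits right now
    awk '$1 == "size" || $1 == "c_min" || $1 == "c_max" {print $1, $3}' \
        /proc/spl/kstat/zfs/arcstats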
does btrfs use significantly less RAM than zfs? i suppose it would, as it uses the linux cache whereas ZFS has its separate ARC.
Yes. On any sort of modern system you won't notice a memory use impact from it. One Xen DomU has 192M of RAM assigned to it and BTRFS memory use isn't a problem. The system in question doesn't have serious load (it's used as a box I can ssh into to test other systems and for occasional OpenVPN use) and it may give lower performance because of BTRFS. But the fact that it works at all sets it apart from ZFS. Also note that dpkg calls sync() a lot and thus gives poor performance when installing packages on BTRFS. As an aside, I'm proud to have filed the bug report against dpkg which led to this behaviour.
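If the sync() overhead matters, dpkg has a force-unsafe-io option that skips most of the fsync calls when unpacking packages; the usual way to enable it permanently is a small dpkg.cfg.d snippet (the filename is just a convention):

    echo force-unsafe-io > /etc/dpkg/dpkg.cfg.d/unsafe-io

The "unsafe" in the name is earned: a crash during an install can leave truncated files, so this is a deliberate trade of safety for speed.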
I've got a Xen server that uses ZVols for the DomU block devices. I've been wondering if it really gives a benefit.
in my experience, qcow2 files are slow, and especially slow over NFS.
I'm using a "file:" target in Xen for swap on some DomUs. That hasn't been a problem but then I have enough RAM to not swap much.
if shared storage for live migration isn't important, it would be worthwhile doing some benchmarking of zvol vs qcow on zfs vs qcow on btrfs.
Yes, that sounds like a good idea. I recently got a quad-core system with 8G of RAM from e-waste, so I should do some benchmarks on such things.
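A rough sketch of such a benchmark, assuming a pool named tank and fio for the I/O load (all names and sizes here are placeholders):

    # zvol on ZFS
    zfs create -V 10G tank/bench
    fio --name=zvol-test --filename=/dev/zvol/tank/bench \
        --rw=randwrite --bs=4k --size=4G --direct=1 --ioengine=libaio

    # qcow2 on a filesystem (ZFS or BTRFS), attached via qemu-nbd so
    # the qcow2 format overhead is actually exercised
    qemu-img create -f qcow2 /tank/bench.qcow2 10G
    modprobe nbd
    qemu-nbd --connect=/dev/nbd0 /tank/bench.qcow2
    fio --name=qcow2-test --filename=/dev/nbd0 \
        --rw=randwrite --bs=4k --size=4G --direct=1 --ioengine=libaio
    qemu-nbd --disconnect /dev/nbd0

Running the same fio job inside an actual DomU would be more representative, but host-side numbers are enough for a first comparison.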
i was speculating that in the case of an mdadm raid array of iscsi zvols, it's possible the snapshots of the zvols on different servers could be different - it would be almost impossible to guarantee that the snapshots would run at exactly the same time.
whether that's actually important or not, I don't know - but it doesn't sound like a desirable thing to happen.
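to illustrate, the snapshots would be separate per-host commands, something like this (pool and volume names invented):

    # on server A
    zfs snapshot tank/md-member-a@backup-20130525
    # on server B, some fraction of a second (or more) later
    zfs snapshot tank/md-member-b@backup-20130525

any write the md layer issues between those two commands lands in one snapshot but not the other, so the snapshotted members wouldn't be mutually consistent.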
RAID arrays are designed to be able to handle a device dropping out. If you are going to have members of the RAID array on different systems, then by design you have a much greater risk of this than usual. If snapshots HAVE to be run at the same time then you'll probably have other problems.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/