
On Wed, 17 Oct 2012, Craig Sanders wrote:
From the guest VM's point-of-view, it's just a disk with nothing special about it.
ext3 or ext4 performance in the guest will be similar to what it would be if the guest were given an LVM lv.
I haven't done any benchmarking to compare a zvol with an lv (mostly because I can't afford to add 4 drives to my ZFS server just to test LVM lv vs ZFS zvol performance), but I can offer a subjective anecdote: the performance improvement from using a ZFS zvol instead of a qcow2 disk image is about the same as the improvement from using an LVM lv instead of a qcow2 file.
i.e. *much* faster.
If I had to guess, I'd say there are probably some cases where LVM (with its nearly direct raw access to the underlying disks) would be faster than ZFS zvols, but in most cases ZFS' caching, compression, COW and so on would give the performance advantage to ZFS.
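For anyone wanting to try the zvol route, it looks roughly like this (the pool, zvol and guest names below are just examples, and the exact options will vary with your setup):

    # create a 20G zvol to act as the guest's disk
    zfs create -V 20G tank/guest1-disk0

    # it shows up as an ordinary block device
    ls -l /dev/zvol/tank/guest1-disk0

    # hand it straight to qemu/kvm as a raw disk, e.g.
    kvm -m 2048 -drive file=/dev/zvol/tank/guest1-disk0,if=virtio,format=raw,cache=none ...

    # or attach it to an existing libvirt guest
    virsh attach-disk guest1 /dev/zvol/tank/guest1-disk0 vdb \
          --sourcetype block --driver qemu --subdriver raw --persistent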
Just make sure the sum of the zvols stays below about 80% of the pool's total capacity, I guess. ext4 + lvm can effectively use more than 99% of disk space (I've done it for years), but the moment you try to do lots of rewrites on a zfs device, the lack of the free-space cache that btrfs has means the highly fragmented remaining 20% of space makes zfs completely unusable the first time you try it: multi-minute pauses and 250kB/s write rates instead of 100MB/s. I quickly bought new disks.

I'm actually regretting my move to zfs because of it - I can hardly afford to repeat the month it took to rsync the backuppc pool onto zfs in the first place. If ext4+md ain't broke, don't fix it. Sure it sucked, but the alternatives suck harder. The 200 or so VMs at work are split across datastores, many of which are 99% full, and they're still surprisingly healthy. I'm guessing the SAN is *not* zfs based.

-- 
Tim Connors
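P.S. a quick way to keep an eye on how full the pool is getting, and on what the zvols have reserved (pool name again just an example):

    # overall pool size, allocation and percentage used (the CAP column)
    zpool list tank

    # per-zvol sizes and reservations
    zfs list -t volume -o name,volsize,used,refreservation -r tank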