
Hi Russell,

From http://en.wikipedia.org/wiki/ZFS#Storage_pools: "ZFS filesystems are
built on top of virtual storage pools called zpools." The management commands
are zfs and zpool. The man pages don't mention "zvols" at all, but Wikipedia's
article does. Ah,
http://www.freebsd.org/doc/en/books/arch-handbook/driverbasics-block.html --
"Block Devices (Are Gone)" ;-) Okay, "your" zvols are the block devices
layered on a zfs under Linux. I don't have them.. That changes the picture
slightly.

On Wed, 17 Oct 2012, Russell Coker wrote:
>> The disks are files on a "normal" zfs so they profit from snapshotting,
>> the zfs send/receive mechanism for off-site backup etc.
>
> So you can't do that with a zvol?

The commands operate at the zfs level (zfs snapshot, zfs send, zfs receive)
and are not available as zpool commands. As for zvols..
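For illustration, the snapshot plus send/receive cycle I use looks roughly
like this (the pool/dataset names "tank/vm" and "backup/vm" and the host name
"backuphost" are made up):

```shell
# Snapshot the dataset holding the disk images and replicate it off-site.
zfs snapshot tank/vm@2012-10-17
zfs send tank/vm@2012-10-17 | ssh backuphost zfs receive backup/vm

# Later, send only the changes since the previous snapshot:
zfs snapshot tank/vm@2012-10-18
zfs send -i tank/vm@2012-10-17 tank/vm@2012-10-18 | \
    ssh backuphost zfs receive backup/vm
```

The incremental send (-i) keeps the off-site transfer small once the first
full stream is across.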
>> You may increase performance if you use a "raw zpool" underneath, but then
>> you don't have the "cool stuff" (snapshots, cloning etc.) that makes you
>> want to use ZFS in the first place.
>
> What is a "raw zpool"? Is that a zvol?

Reading http://zfsonlinux.org/example-zvol.html .. I never tried (or even
thought of trying) to partition "something" created with "zfs create" under
FreeBSD. I don't think I can do that - I don't have block devices.. Anyway,
would a zvol be significantly better than a file on the zfs, as I have it
now? I actually thought of giving a dedicated zpool to the guest.
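If I read that page right, a zvol is just a dataset created with the -V
option (the pool name "tank" and the size are only an example):

```shell
# Create an 8 GB zvol; "tank/vol" is a made-up name.
zfs create -V 8G tank/vol

# Under ZFS-on-Linux it then appears as a block device, roughly:
#   /dev/tank/vol   (or /dev/zvol/tank/vol)
# which can be partitioned and formatted like any other disk.
```

So the zvol sits between "file on a zfs" and "raw zpool": a block device,
but still a ZFS dataset underneath.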
>> But I don't think it is a really good setup, it is just "good enough"
>> here, and as I have all the other stuff running in jails natively on
>> FreeBSD, I keep it.
>
> In what way isn't it "really good"?

You mention them: inside the VirtualBox I don't get contiguous reads/writes
etc. It is layering for the convenience of easy administration, not for high
performance. But it is just one VirtualBox - all other services run in jails.

I am using zfs snapshots/send/receive to mirror all services to other boxes,
so a machine failing is not the end of the world. (Also "good enough" here:
it does not happen frequently - in fact it did not happen at all in the two
years I have been using this setup - and the business could tolerate one
day's data loss. And as for what others do: see the cloud disasters of the
last two years ;-)

Not having to deal with separate layers (e.g. LVM and ZFS) has the advantage
of not having to maintain two sets of administration tools.
>> I could imagine using LVM on the Dom0, giving partitions to the DomUs and
>> running ZFS inside.
>
> That means you lose the contiguous write feature of ZFS which is essential
> to good performance. Ext3/4 on LVM volumes gives somewhat contiguous reads
> where possible, ZFS when it owns the disks gives contiguous writes, but
> ZFS on multiple LVM volumes gives neither.

Agreed. Easy administration vs. performance.
>> That way you can snapshot the partitions with LVM outside (to get "disk
>> images") and have ZFS management inside.
>
> Why would you want to do that?
>
> As ZFS owns the devices and the mount points it's surely not going to be
> easy to have multiple snapshots of a ZFS filesystem active at once. It
> would probably be like trying to take a snapshot of a PV that's used for
> LVM - something that can theoretically be usable if you take the snapshot
> to another system but otherwise will be a massive PITA and probably cause
> data loss.

I don't understand what you mean here. Outside you have LVM and can take
snapshots - provided you sync the ZFS inside beforehand and suspend the
guest for the snapshot (it does not take long, and for e.g. a mail server
that is acceptable). Inside you can take snapshots that don't have to know
whether the data has been written physically, as long as it has reached the
virtual disk of the guest system.

Regards
Peter
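PS: roughly the sequence I have in mind for the outside snapshot; the VM
name "mailvm" and the LVM volume names are made up:

```shell
# Pause the guest so its virtual disk is in a consistent state.
VBoxManage controlvm mailvm pause
sync    # flush host-side buffers

# Take the LVM snapshot of the volume holding the guest's disk image.
lvcreate --snapshot -L 5G -n mailvm-snap /dev/vg0/mailvm-disk

# Resume the guest; the snapshot volume can then be backed up at
# leisure and afterwards removed with: lvremove /dev/vg0/mailvm-snap
VBoxManage controlvm mailvm resume
```

The guest is only down for the lvcreate, i.e. a few seconds.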