
On Thu, Oct 06, 2011 at 11:31:34PM +1100, Craig Sanders wrote:
> On Thu, Oct 06, 2011 at 10:26:40PM +1100, Russell Coker wrote:
> > Why would there be any connection between VM technology and filesystem
> > unless you use VM specific COW systems?
>
> There isn't, particularly. but being able to create a named zvol of any
> size from your existing zpool is convenient. as is being able to
and having a named volume is important. i spent *hours* tonight just trying
to figure out which lvm volume was being used by a xen vm that didn't come
back up after rebooting to a new kernel (upgraded it from lenny to squeeze).
and that's before i could even start fixing the problem.

yes, i know that LVM can have decent (i.e. human-readable) names for volumes
rather than uuid-based gibberish like:

/dev/mapper/VG_XenStorage--4c3d5e95--f3ef--e278--442e--e59cb00936e4-LV--c80282a3--2f6d--4f05--bbd3--7b6ca908494c

but citrix xenserver doesn't bother doing that (or possibly the person who
set up the server i inherited didn't bother). and AFAICT, there's no easy
way of querying xen and getting it to tell me which device nodes a given VM
is using for disks.

one of those stupidly long device names alone is bad enough, but it sucks
far worse when you have dozens of the bloody things in /dev/mapper (about 32
on the system i was working on tonight). and what really sucks is that you
can't just get the output of 'xe vm-disk-list' and grep for the uuid in
/dev/mapper - xen prints the uuids with single dashes (e.g.
c0445143-15ab-4c40-9747-2481f0d55667), but in the device-mapper names
they're all double dashes (e.g. c0445143--15ab--4c40--9747--2481f0d55667).
WTF? WHY? i wasted most of my time tonight before i figured out that
annoyance - my eyes just glaze over when i see dozens of stupidly long
nearly-identical identifiers like that, so i didn't see the pattern for
ages.

after i got the VM fixed (loopback-mounted the root fs so i could examine it
and edit files), i ended up writing a shell script which parsed the output
of 'xe vm-list' and 'xe vm-disk-list' to make a symlink farm with useful
human-readable names like this:

# ls -lF *staging* *prod*
lrwxrwxrwx 1 root root 112 Oct 7 20:42 xenvm-production-data -> /dev/mapper/VG_XenStorage--4c3d5e95--f3ef--e278--442e--e59cb00936e4-LV--c0445143--15ab--4c40--9747--2481f0d55667
lrwxrwxrwx 1 root root 112 Oct 7 20:42 xenvm-production-disk -> /dev/mapper/VG_XenStorage--4c3d5e95--f3ef--e278--442e--e59cb00936e4-LV--73245416--20f9--43c8--a77c--fbd06a394567
lrwxrwxrwx 1 root root 112 Oct 7 20:42 xenvm-production-swap -> /dev/mapper/VG_XenStorage--4c3d5e95--f3ef--e278--442e--e59cb00936e4-LV--6ed368c4--cefd--447d--9d9f--a14b9fd243e3
lrwxrwxrwx 1 root root 112 Oct 7 20:42 xenvm-staging-disk -> /dev/mapper/VG_XenStorage--4c3d5e95--f3ef--e278--442e--e59cb00936e4-LV--c80282a3--2f6d--4f05--bbd3--7b6ca908494c
lrwxrwxrwx 1 root root 112 Oct 7 20:42 xenvm-staging-swap -> /dev/mapper/VG_XenStorage--4c3d5e95--f3ef--e278--442e--e59cb00936e4-LV--2f8b5e9f--206a--4a39--b341--28c8ab63aa96

next time anything like this happens i'll have instant access to the disk
volumes concerned.

all this stuff may be obvious to someone who's worked with xen & lvm for
years (and my symlink-farm creation script may be redundant), but i found it
to be an extremely frustrating PITA (trashed my friday night because, of
course, i scheduled the reboot for after 5pm... and then had to spend hours
getting the VM booted again).

(and to make things worse, at one point i accidentally mounted the root fs
for the xenvm-staging server RW while the VM was still running - so i had to
repair that too. which is a really good example of why names are better than
uuids)

there's also the question of why the single dashes in xen versus double
dashes in lvm/device-mapper - are they deliberately trying to make it hard
to know vital things about the underlying system?
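for the record, the script doesn't need to be anything fancy - it's just a
couple of xe calls plus a sed to double the dashes. AFAICT the double-dash
thing isn't xen being deliberately obtuse: device-mapper reserves a single
'-' as the separator between VG and LV, so any literal dashes inside the
names get escaped as '--'. something along these lines should do it (an
untested sketch from memory, not my exact script - the xe params, the
/root/vm-disks link directory and the link-naming scheme are all just
illustrative, and i've used vbd-list/vdi-list here instead of parsing
vm-disk-list because they're easier to pull one field at a time from):

#!/bin/sh
# rough sketch only - assumes citrix xenserver's 'xe' CLI, VDI LVs living
# under /dev/mapper, and no spaces or other oddities in VM/VDI name-labels.

LINKDIR=/root/vm-disks
mkdir -p "$LINKDIR"

# all VM uuids ('--minimal' output is comma-separated, no spaces)
for vm in $(xe vm-list params=uuid --minimal | tr ',' ' '); do
    vmname=$(xe vm-list uuid="$vm" params=name-label --minimal)

    # every VDI attached to this VM, via its VBDs
    for vdi in $(xe vbd-list vm-uuid="$vm" params=vdi-uuid --minimal | tr ',' ' '); do
        case "$vdi" in
            *-*-*-*-*) ;;   # looks like a uuid, keep going
            *) continue ;;  # empty CD drives etc. have no VDI
        esac

        vdiname=$(xe vdi-list uuid="$vdi" params=name-label --minimal)

        # xen prints uuids with single dashes, but device-mapper doubles the
        # dashes inside VG/LV names (single '-' is the VG-LV separator), so
        # double them before searching /dev/mapper
        dmuuid=$(echo "$vdi" | sed 's/-/--/g')

        dev=$(ls /dev/mapper/ | grep -F "$dmuuid" | head -n1)
        [ -n "$dev" ] || continue

        ln -sf "/dev/mapper/$dev" "$LINKDIR/xenvm-$vmname-$vdiname"
    done
done

run it again after adding or removing disks and the farm stays current
(ln -sf just overwrites the old links).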
anyway, compare crap like that to (for example):

  zfs create -p -V 5g pool/volumes/xenvm-production-data

and then just using the /dev/zvol/pool/volumes/xenvm-production-data device
node - using a zvol gives you the human-readable names of disk-image files
and the speed of a volume.

i'm looking forward to the day when these VMs can be migrated to a new
server. it almost certainly won't be running xen (probably kvm or maybe
vmware), and it will probably be running zfs. but in the meantime i think
i'll have to build a system from my spare-parts pile so i can play with xen
on a system that doesn't matter. i've got enough old disks lying around that
i can try it with both lvm and zfs.

craig

-- 
craig sanders <cas@taz.net.au>

BOFH excuse #87: Password is too complex to decrypt