[luv-main] Virtualbox and Linux-Vserver issues

Hi all,

Today my server was rebooted (due to a 17-minute power interruption), and I was greeted by an email from it saying that it couldn't start up VirtualBox. My server is running Debian's kernel image 2.6.32-5-vserver-amd64.

Yes, I AM running two virtualisation solutions on here - I didn't like Xen when I tried it for a couple of months; I just couldn't get the hang of it despite reading heaps of tutorials about it. Linux-VServer provides a very light (in terms of overhead) way of running multiple virtualised Linux "servers" through extensive use of security contexts and the like. But I also wanted to run a single Windows server instance, hence the use of VirtualBox.

The virtualbox email states that I need to ensure I have virtualbox-ose-dkms and the kernel header files installed. I have virtualbox-ose-dkms installed, but somewhere along the way my linux-headers-2.6.32-5-vserver-amd64 package became uninstalled. Instead I have headers for 2.6.37-1, 2.6.37-2, 2.6.38-1, 2.6.38-2, 2.6.39-2 and 3.0.0-1, but none of the corresponding kernel images... Go figure. So I'm thinking I'll upgrade to a newer kernel, but none of the kernel images mention vserver in their name, nor directly indicate that they support it either.

I am willing to go down the path of rolling my own and packaging it up with make-kpkg (from kernel-package) so that I don't break apt's idea of what's installed...

Therefore, can anyone confirm for me that if I upgrade to 2.6.39 or 3.0.0 I will still be able to use my Linux vservers (I really do NOT want to have to rebuild them all within another virtualisation environment)? Or has Xen become the de facto (and possibly only) virtualisation system that Debian's pre-packaged kernels will support?

Cheers,
Tim Lyth.
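A quick way to see which kernel images and headers are actually installed, and to have DKMS rebuild the VirtualBox module once the matching headers are back - a generic sketch, not commands taken from the original mail:

  # list installed (ii) kernel images and headers
  dpkg -l 'linux-image-*' 'linux-headers-*' | grep '^ii'

  # with linux-headers-2.6.32-5-vserver-amd64 reinstalled, ask dkms to
  # rebuild whatever modules are registered (virtualbox-ose among them)
  # for the running kernel
  dkms status
  dkms autoinstall -k "$(uname -r)"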

Tim Lyth <tcl@tcl.homedns.org> wrote:
Therefore, can anyone confirm for me that if I upgrade to 2.6.39 or 3.0.0 I will still be able to use my Linux vservers (I really do NOT want to have to rebuild them all within another virtualisation environment)? Or has Xen become the de facto (and possibly only) virtualisation system that Debian's pre-packaged kernels will support?
There are also Linux containers (lxc), but as I remember, there are limits to what these will virtualize, i.e., they aren't a complete virtualization solution at the moment. I can't remember the details, and, in any case, those details change as the kernel develops. I know this doesn't answer your question, but you may be able to run the guest systems in containers if the features supported by the kernels you're prepared to use are adequate.
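For anyone curious what the container route looked like at the time, a minimal sketch with the lxc userland tools of that era (the container name and template are made up for the example):

  apt-get install lxc debootstrap
  lxc-create -n guest1 -t debian    # build a minimal Debian rootfs + config
  lxc-start  -n guest1 -d           # start it in the background
  lxc-console -n guest1             # attach to its console (ctrl-a q to detach)
  lxc-stop   -n guest1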

Jason White wrote:
Tim Lyth <tcl@tcl.homedns.org> wrote:
Therefore, can anyone confirm for me that if I upgrade to 2.6.39 or 3.0.0 I will still be able to use my Linux vservers (I really do NOT want to have to rebuild them all within another virtualisation environment)? Or has Xen become the de facto (and possibly only) virtualisation system that Debian's pre-packaged kernels will support?
There are also Linux containers (lxc), but as I remember, there are limits to what these will virtualize, i.e., they aren't a complete virtualization solution at the moment.
vserver and openvz are out-of-tree, and Ubuntu dropped support for them in 10.04 LTS (running 2.6.32). For that reason, I migrated to LXC, which is blessed by Ubuntu *and* Red Hat *and* it's in the mainline kernel, so you get it out of the box.

I wouldn't recommend LXC on 2.6.32; you have to jump through hoops to lock it down, and even now root can probably break out of my containers in a few ways. It's also immature around the edges -- for example, "free" reports the system-wide resource limit and consumption, not the container's.

Oh, and Ubuntu issued a "security" update for the kernel to fix a DoS in vsftpd by turning off namespaces (i.e. it broke LXC). So if you want LXC on 10.04 you can either run 2.6.32-32 and get no kernel security updates, run a backported 2.6.38 that breaks all the time, or maintain your own kernel packages.

Grr, Trent SMASH!
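A rough sketch of the kind of lockdown hoops mentioned above, using lxc 0.7-era config keys appended to a container's config file (the container path, capability list and limits are illustrative, not a complete or sufficient lockdown):

  cat <<'EOF' >> /var/lib/lxc/guest1/config
  # drop some capabilities root in the container shouldn't need
  lxc.cap.drop = sys_module sys_time sys_rawio mknod
  # cap the container's memory via cgroups
  lxc.cgroup.memory.limit_in_bytes = 512M
  # deny all devices, then whitelist a few
  lxc.cgroup.devices.deny = a
  lxc.cgroup.devices.allow = c 1:3 rwm   # /dev/null
  lxc.cgroup.devices.allow = c 1:5 rwm   # /dev/zero
  lxc.cgroup.devices.allow = c 1:9 rwm   # /dev/urandom
  EOF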

Trent W. Buck <trentbuck@gmail.com> wrote:
vserver and openvz are out-of-tree, and Ubuntu dropped support for them in 10.04 LTS (running 2.6.32). For that reason, I migrated to LXC, which is blessed by Ubuntu *and* Red Hat *and* it's in the mainline kernel, so you get it out of the box.
That's its most important advantage, I agree.
I wouldn't recommend LXC on 2.6.32; you have to jump through hoops to lock it down, and even now root can probably break out of my containers in a few ways. It's also immature around the edges -- for example "free" reports the system-wide resource limit and consumption, not the container's.
For others considering this option, the interesting question would be whether it has improved in later kernels. For my limited virtualization needs (basically a test system that I can boot and experiment with when I want to try something, but not on a system needed for real work), kvm is perfectly suitable.
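A minimal example of that kind of throwaway kvm test box (the image and ISO names are placeholders):

  qemu-img create -f qcow2 test.img 10G
  kvm -m 1024 -hda test.img -cdrom debian-netinst.iso -boot d   # run the installer
  kvm -m 1024 -hda test.img                                     # boot the installed system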

On 06/10/11 18:48, Tim Lyth wrote:
If VirtualBox is part of the problem, would you be able to try KVM instead? Most likely the kernel header problem would disappear. I usually switch between the two quite easily, using the same VM image.
Daniel
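One way the same image can be moved between the two, assuming a VirtualBox .vdi (file names are placeholders; recent qemu/kvm can also read the .vdi directly):

  qemu-img info    windows.vdi
  qemu-img convert -f vdi -O qcow2 windows.vdi windows.qcow2
  kvm -m 2048 -hda windows.qcow2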
--
Daniel Jitnah
Melbourne, Australia
e: djitnah@greenwareit.com.au
w: www.greenwareit.com.au
SIP: dj-git@ekiga.net

On Thu, Oct 06, 2011 at 06:48:21PM +1100, Tim Lyth wrote:
The virtualbox email states that I need to ensure I have virtualbox-ose-dkms and the kernel header files installed. I have virtualbox-ose-dkms installed, but somewhere along the way my linux-headers-2.6.32-5-vserver-amd64 package became uninstalled. Instead I have headers for 2.6.37-1, 2.6.37-2, 2.6.38-1, 2.6.38-2, 2.6.39-2 and 3.0.0-1, but none of the corresponding kernel images... Go figure.
So I'm thinking I'll upgrade to a newer kernel, but none of the kernel images mention vserver in their name, nor directly indicate that they support it either.
why not just reinstall the vserver header packages from squeeze? "apt-get install linux-headers-2.6-vserver-amd64" should do it, as long as you have deb entries for squeeze aka "stable" in your /etc/apt/sources.list (and 'apt-get purge' the unwanted header packages for 2.6.37, 2.6.38 etc while you're at it). then you can rebuild your virtualbox-ose modules with dkms.
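Spelled out, that sequence would look something like this (the purge list should be adjusted to whatever 'dpkg -l' actually shows on the box; the package names here are illustrative):

  apt-get update
  apt-get install linux-headers-2.6-vserver-amd64
  apt-get purge linux-headers-2.6.37-1-amd64 linux-headers-2.6.38-2-amd64   # etc.
  # re-run the dkms build for the virtualbox-ose module
  dpkg-reconfigure virtualbox-ose-dkms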
I am willing to go down the path of rolling my own, and packaging it up with make-kpkg (from kernel-package) so that I don't break apt's idea of what's installed...
according to linux-vserver.org, there's a vs2.3.1-pre10.1 experimental patch suitable for linux 3.0.4. you could try downloading the debianised source for 3.0, applying the patch, and then building the kernel packages (don't forget the headers :). if that doesn't work, get the upstream linux 3.0.4 .tar.gz, apply the patch, and use make-kpkg. if that doesn't work, stick to 2.6.32-5.

personally, i think you're making a maintenance nightmare for yourself (especially with vserver not being part of the mainline kernel) and you'd be better off using just ONE virtualisation method - maybe virtualbox, since you're already using that for your windows VM. or kvm. perhaps even xen. the management tools for all of them are converging anyway due to libvirt[1], which can work with all three (and vmware too).

[1] http://libvirt.org/ - there's no mention of vserver on libvirt.org, but it does support openvz and uml, which are similar container-style virtualisation systems for linux.
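For the roll-your-own route, the make-kpkg workflow would be roughly this (the patch filename is guessed from the version numbers above - check linux-vserver.org for the real one, and expect 'make oldconfig' to ask a lot of questions when jumping from a 2.6.32 config):

  apt-get install kernel-package build-essential libncurses5-dev fakeroot
  # fetch linux-3.0.4 from kernel.org and the vserver patch from linux-vserver.org
  tar xjf linux-3.0.4.tar.bz2 && cd linux-3.0.4
  patch -p1 < ../patch-3.0.4-vs2.3.1-pre10.1.diff
  cp /boot/config-$(uname -r) .config && make oldconfig   # start from the running config
  make-kpkg clean
  fakeroot make-kpkg --initrd --append-to-version=-vserver --revision=1 \
      kernel_image kernel_headers
  dpkg -i ../linux-image-3.0.4-vserver_1_amd64.deb ../linux-headers-3.0.4-vserver_1_amd64.deb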
Therefore, can anyone confirm for me that if I upgrade to 2.6.39 or 3.0.0 I will still be able to use my Linux vservers (I really do NOT want to have to rebuild them all within another virtualisation environment)?
sorry, can't confirm that. as mentioned above, there is an experimental patch you can try.
Or has Xen become the de facto (and possibly only) virtualisation system that Debian's pre-packaged kernels will support?
KVM[2] is probably the de facto standard for linux (incl. debian) these days. It's been included in the mainline kernel since 2.6.20, and most (all?) of the major linux distros have declared that KVM is, or is going to be, the basis of their virtualisation efforts (e.g. redhat dumped xen for kvm).

Xen has only recently entered the mainline kernel... it was always a separate fork before then, which meant you had a choice between recent kernels and xen (i.e. the same sort of problem as you're having with the out-of-mainline vserver). This will probably result in a bit of a revival for xen, as a lot of people JCBF dealing with out-of-tree patches when kvm was already in the kernel.

having used kvm (a fair bit), virtualbox (somewhat) and xen (a little), i much prefer kvm. kvm works really nicely with ZFS ZVOLs too - haven't done any performance tests vs LVM, but it flies compared to disk image files on xfs.

[2] http://www.linux-kvm.org/

craig

--
craig sanders <cas@taz.net.au>

BOFH excuse #304: routing problems on the neural net
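To make the zvol-versus-image-file comparison concrete, here is roughly what the two setups look like (pool, dataset and image paths are invented; /dev/zvol/... is where ZFS on Linux exposes zvols):

  # zvol-backed guest
  zfs create -V 20G tank/vm/guest1
  kvm -m 2048 -drive file=/dev/zvol/tank/vm/guest1,if=virtio,cache=none

  # disk-image-file-backed guest on xfs, for comparison
  qemu-img create -f raw /data/vm/guest1.img 20G
  kvm -m 2048 -drive file=/data/vm/guest1.img,if=virtio,cache=none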

Why would there be any connection between VM technology and the filesystem unless you use VM-specific COW systems? ZFS beats XFS in your tests for KVM; why wouldn't it beat XFS for every VM?

--
My blog http://etbe.coker.com.au
Sent from an Xperia X10 Android phone

On Thu, Oct 06, 2011 at 10:26:40PM +1100, Russell Coker wrote:
Why would there be any connection between VM technology and filesystem unless you use VM specific COW systems?
There isn't, particularly. but being able to create a named zvol of any size from your existing zpool is convenient, as is being able to instantly snapshot and/or clone the zvol (haven't tested this, but AIUI you can even clone a zvol while the VM is running... which beats the hell out of having to shut down a VM that's running from a disk image just so you can clone it).

zfs & zvol compression is nice too. add to that ZFS' built-in support for NFS & iscsi exports and you've got something extremely useful for virtualisation.
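For example (pool and volume names invented):

  zfs create -V 10G tank/vols/guest1-disk          # named zvol carved from the existing pool
  zfs set compression=on tank/vols/guest1-disk
  zfs snapshot tank/vols/guest1-disk@pre-upgrade   # instant snapshot
  zfs clone tank/vols/guest1-disk@pre-upgrade tank/vols/guest1-copy   # writable clone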
ZFS beats XFS in your tests for KVM, why wouldn't it beat XFS for every VM?
when did i say it wouldn't? what i said was: "kvm works really nicely with ZFS ZVOLs too - haven't done any performance tests vs LVM, but it flies compared to disk image files on xfs." no mention of other VMs & ZFS at all.

I can't think of any reason why Xen or VirtualBox or VMWare or any similar virtualisation system wouldn't benefit from ZFS in the same way that KVM would. They work well with LVM, so they'll probably work well with ZFS.

Even container style VMs like vserver would benefit from being on a zfs filesystem (as opposed to a zfs zvol) if, e.g., you had a nice fast SSD caching it. and (if you have enough RAM and/or SSD cache) ZFS' online de-duping would be useful. solaris zones have been running on zfs for years.

craig

--
craig sanders <cas@taz.net.au>

BOFH excuse #285: Telecommunications is upgrading.
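A sketch of the filesystem-per-container idea (again with invented names; dedup needs plenty of RAM and/or L2ARC to be worthwhile):

  zfs create tank/vservers
  zfs set dedup=on tank/vservers
  zfs create tank/vservers/web1             # one filesystem per guest
  zfs set compression=on tank/vservers/web1
  zfs set sharenfs=on tank/vservers/web1    # ZFS' built-in NFS export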

On Thu, Oct 06, 2011 at 11:31:34PM +1100, Craig Sanders wrote:
On Thu, Oct 06, 2011 at 10:26:40PM +1100, Russell Coker wrote:
Why would there be any connection between VM technology and filesystem unless you use VM specific COW systems?
There isn't, particularly. but being able to create a named zvol of any size from your existing zpool is convenient. as is being able to
and having a named volume is important. i spent *hours* tonight just trying to figure out which lvm volume was being used by a xen vm that didn't come back up after rebooting to a new kernel (upgraded it from lenny to squeeze). and that's before i could even start fixing the problem.

yes, i know that LVM can have decent (i.e. human-readable) names for volumes rather than uuid-based gibberish like:

  /dev/mapper/VG_XenStorage--4c3d5e95--f3ef--e278--442e--e59cb00936e4-LV--c80282a3--2f6d--4f05--bbd3--7b6ca908494c

but citrix xenserver doesn't bother doing that (or possibly the person who set up the server i inherited didn't bother). and AFAICT, there's no easy way of querying xen and getting it to tell me which device nodes a given VM is using for disks.

one of those stupidly long device names alone is bad enough, but it sucks far worse when you have dozens of the bloody things in /dev/mapper (about 32 on the system i was working on tonight). and what really sucks is that you can't just take the output of 'xe vm-disk-list' and grep for the uuid in /dev/mapper - xen has the uuids with single dashes (e.g. c0445143-15ab-4c40-9747-2481f0d55667), but in lvm they're all double dashes (e.g. c0445143--15ab--4c40--9747--2481f0d55667). WTF? WHY?

i wasted most of my time tonight before i figured out that annoyance - my eyes just glaze over when i see dozens of stupidly long nearly-identical identifiers like that, so i didn't see the pattern for ages.

after i got the VM fixed (loopback-mounted the root fs so i could examine it and edit files), i ended up writing a shell script which parsed the output of 'xe vm-list' and 'xe vm-disk-list' to make a symlink farm with useful human-readable names like this:

  # ls -lF *staging* *prod*
  lrwxrwxrwx 1 root root 112 Oct  7 20:42 xenvm-production-data -> /dev/mapper/VG_XenStorage--4c3d5e95--f3ef--e278--442e--e59cb00936e4-LV--c0445143--15ab--4c40--9747--2481f0d55667
  lrwxrwxrwx 1 root root 112 Oct  7 20:42 xenvm-production-disk -> /dev/mapper/VG_XenStorage--4c3d5e95--f3ef--e278--442e--e59cb00936e4-LV--73245416--20f9--43c8--a77c--fbd06a394567
  lrwxrwxrwx 1 root root 112 Oct  7 20:42 xenvm-production-swap -> /dev/mapper/VG_XenStorage--4c3d5e95--f3ef--e278--442e--e59cb00936e4-LV--6ed368c4--cefd--447d--9d9f--a14b9fd243e3
  lrwxrwxrwx 1 root root 112 Oct  7 20:42 xenvm-staging-disk -> /dev/mapper/VG_XenStorage--4c3d5e95--f3ef--e278--442e--e59cb00936e4-LV--c80282a3--2f6d--4f05--bbd3--7b6ca908494c
  lrwxrwxrwx 1 root root 112 Oct  7 20:42 xenvm-staging-swap -> /dev/mapper/VG_XenStorage--4c3d5e95--f3ef--e278--442e--e59cb00936e4-LV--2f8b5e9f--206a--4a39--b341--28c8ab63aa96

next time anything like this happens i'll have instant access to the disk volumes concerned.

all this stuff may be obvious to someone who's worked with xen & lvm for years (and my symlink-farm creation script may be redundant), but i found it to be an extremely frustrating PITA (trashed my friday night because, of course, i scheduled the reboot for after 5pm... and then had to spend hours getting the VM booted again).

(and to make things worse, at one point i accidentally mounted the root fs for the xenvm-staging server RW while the VM was still running - so i had to repair that too. which is a really good example of why names are better than uuids)

there's also the question of why the single dashes in xen versus double dashes in lvm/device-mapper - are they deliberately trying to make it hard to know vital things about the underlying system?
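Not the original script, but a rough reconstruction of the idea; the xe parameters are written from memory of the XenServer CLI and the LV--<uuid> naming is assumed from the device names above, so expect to adjust it:

  #!/bin/sh
  # build a symlink farm of human-readable names -> /dev/mapper nodes
  mkdir -p /root/xenvm-links && cd /root/xenvm-links || exit 1
  for vm in $(xe vm-list is-control-domain=false params=uuid --minimal | tr , ' '); do
      name=$(xe vm-param-get uuid="$vm" param-name=name-label)
      for vdi in $(xe vbd-list vm-uuid="$vm" params=vdi-uuid --minimal | tr , ' '); do
          # skip anything that isn't a uuid (e.g. empty CD drives)
          case "$vdi" in *-*-*-*-*) ;; *) continue ;; esac
          dm=$(echo "$vdi" | sed 's/-/--/g')   # xe: single dashes, device-mapper: doubled
          for dev in /dev/mapper/*-LV--"$dm"; do
              # symlink name includes the first chunk of the vdi uuid so
              # multiple disks on one VM stay distinct
              [ -e "$dev" ] && ln -sf "$dev" "xenvm-${name}-${vdi%%-*}"
          done
      done
  done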
anyway, compare crap like that to (for example):

  zfs create -p -V 5g pool/volumes/xenvm-production-data

and then just using /pool/volumes/xenvm-production-data - using a zvol gives the human-readable names of disk-images and the speed of a volume.

i'm looking forward to the day when these VMs can be migrated to a new server. it almost certainly won't be running xen (probably kvm or maybe vmware), and it will probably be running zfs. but in the meantime i think i'll have to build a system from my spare parts pile so i can play with xen on a system that doesn't matter. i've got enough old disks lying around that i can try it with both lvm and zfs.

craig

--
craig sanders <cas@taz.net.au>

BOFH excuse #87: Password is too complex to decrypt
participants (6)
- Craig Sanders
- Daniel Jitnah
- Jason White
- Russell Coker
- Tim Lyth
- Trent W. Buck