
Xen has been widely regarded as the best-performing VM platform for Linux for a long time. Oracle has been one of the advocates of Xen, claiming performance very close to native hardware.

# 300% improvement in UnixBench score, with a KVM Linode vs a Xen Linode
# 28% faster at compiling a Linux kernel with a KVM Linode vs a Xen Linode
# Boot and shutdown times are greatly improved

Now Linode (one of the largest Xen sites) is moving to KVM, and they list the above as benefits of KVM, which surprises me. In my experience of Xen, the only way anything could be 300% faster is if it's an issue of disk I/O scheduling on hard drives (multiple virtual machines on the same spinning media cause contention and/or fragmentation issues, depending on how you do it). But given that Linode was already using SSDs for all storage, that's obviously not what they are doing.

The last time I tried KVM on my laptop the performance was a lot slower than native, as opposed to Xen, which was near enough to native hardware performance that the difference didn't matter. I've never even tested KVM on a server because the performance on my laptop (admittedly a couple of years ago) was very disappointing. Last time I tested, KVM performance was not only noticeably worse (e.g. compiles of selinux-policy-default took about 50% longer) but the increase in CPU use was a cooling problem.

Has KVM improved a lot recently? How can anything be so much better than Xen when Xen has been so close to native performance for so long?

I've just chosen KVM for a new Linode instance. They allow me to choose Xen but say that KVM is the way of the future - presumably I would be forced to use KVM sooner or later, so it seemed easier to use it now.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/

Hi,

On 14/10/2015 4:20 PM, Russell Coker wrote:
Has KVM improved a lot recently? How can anything be so much better than Xen when Xen has been so close to native performance for so long?
I've just chosen KVM for a new Linode instance. They allow me to choose Xen but say that KVM is the way of the future - presumably I would be forced to use KVM sooner or later so it seemed easier to use it now.
I was using just Xen, but a machine I needed to use never worked with Xen, so I moved that one to KVM -- I still have another using Xen, but the others are using KVM. I too think that KVM is the future, just like BTRFS ... only KVM is in my current life.

Now, I haven't done any specific testing and I'm unlikely to, but I was of the opinion that Xen and KVM were fairly comparable. Perhaps it is a two-horse race, and each pulls ahead for a period as the other catches up and takes the lead once more.

For mine, both work and both should be fine. If you must have the best performance, then you had best test with your own use case.

Kind Regards
AndrewM

On Wed, Oct 14, 2015 at 04:20:13PM +1100, Russell Coker wrote:
The last time I tried KVM on my laptop the performance was a lot slower than native, as opposed to Xen, which was near enough to native hardware performance that the difference didn't matter. I've never even tested KVM on a server because the performance on my laptop (admittedly a couple of years ago) was very disappointing. Last time I tested, KVM performance was not only noticeably worse (e.g. compiles of selinux-policy-default took about 50% longer) but the increase in CPU use was a cooling problem.
Has KVM improved a lot recently?
According to the man page for the /usr/bin/kvm wrapper script, it no longer falls back to emulation mode if KVM support is unavailable. `man kvm` says:

    The script executes qemu-system-x86_64 -enable-kvm, passing all other
    command-line arguments to the qemu binary. This is not the same as the
    old kvm binary, which used a less strict construct, similar to
    qemu-system-x86_64 -machine accel=kvm:tcg. The new wrapper ensures
    that kvm mode is enabled, or the VM will not start, while the old code
    fell back to emulation (tcg) mode if kvm isn't available.

If your laptop didn't have kvm properly installed and configured (or your CPU didn't have virtualisation extensions), then it would have fallen back to slow emulation mode.
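
A quick way to check which mode you would actually get (standard commands, nothing specific to the wrapper):

    # Does the CPU advertise hardware virtualisation extensions?
    # vmx = Intel VT-x, svm = AMD-V; no output means no extensions.
    grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u

    # Is the kvm module loaded and the device node accessible? If
    # /dev/kvm is missing or unreadable, old-style qemu would have
    # silently dropped to slow tcg emulation.
    lsmod | grep kvm
    ls -l /dev/kvm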
How can anything be so much better than Xen when Xen has been so close to native performance for so long?
personally, i've never noticed any significant difference between kvm and xen performance... at least, not on modern virtualisation-enabled CPUs.

craig

--
craig sanders <cas@taz.net.au>

Russell Coker <russell@coker.com.au> wrote:
Now Linode (one of the largest Xen sites) is moving to KVM, and they list the above as benefits of KVM, which surprises me.
For anyone who has Xen instances with Linode, note that they have a migration option that you can invoke, although they make it clear that all instances will switch to KVM eventually. If you're running your own kernel rather than theirs, the migration to KVM also entails a welcome upgrade to Grub 2 - now I no longer have to deal with Grub 1 anywhere.

I have run KVM on a laptop, but not recently, and I didn't undertake performance testing. As with other virtualization tools, there's an advantage to running paravirtualization, which is the default for Linux guests; full virtualization is available for running BSD or something else as your guest system.

Russell Coker writes:
The last time I tried KVM on my laptop the performance was a lot slower than native, as opposed to Xen, which was near enough to native hardware performance that the difference didn't matter.
Note that (last time I looked) qemu/kvm do not use virtio by default; you have to ask for it explicitly:

    -drive file=foo.squashfs,index=0,media=disk,if=virtio
    -net nic,model=virtio -net user

There's doubtless a bunch of other ricing you can do.
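
Put together, a minimal invocation with a virtio disk and NIC might look like this (the memory size is arbitrary and foo.squashfs is just the placeholder image name from above):

    kvm -m 1024 \
        -drive file=foo.squashfs,index=0,media=disk,if=virtio \
        -net nic,model=virtio -net user

-net user gives you NAT networking with no host-side setup, which is convenient on a laptop; on a server a tap device is the usual choice.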

For what they are worth, there are numerous Xen vs KVM reviews and comparisons on the web; in fact I was reading some only a few days ago. My impression is as follows (I have been using KVM for over 5 years and have very little Xen experience):

1. While 4-5 years ago Xen was the indisputable winner on performance, KVM has improved gradually and the difference is not so great anymore, if there is any.

2. The main reason for KVM's improvement would be the virtio drivers for disk I/O. (Running an OS install with and without virtio enabled will show the obvious difference.)

3. There is not much between the two now. You could probably craft a particular configuration and usage scenario where KVM beats Xen, and vice versa. Generally you are talking of a few percentage points difference either way.

4. There are still improvements being made in KVM, and you can be caught out with even fairly recent versions (e.g. in Ubuntu 14.04) not having all the latest features working properly. For example, I got caught by live merging of snapshots not working as expected in Ubuntu 14.04, though it works in Jessie (it is a QEMU version issue) - see the virsh sketch below. So you would have to make sure that whatever you were used to in Xen, feature-wise, can be reproduced with KVM - it may or may not be the case.

5. KVM stability has been rock solid for the years I have been using it.

6. There are many configuration options for KVM, so there are many ways to tune it for your particular usage. Although, if you are using hosted VMs, these would be pre-determined by the provider, and there would be few or no configuration parameters for you to play with (which would be the same with Xen, I'd imagine!).

7. Virt-manager works well with both for everything I have ever needed it to do, and it keeps improving.

Daniel.
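
Regarding point 4, the live-merge workflow in question looks roughly like this with virsh - a sketch only, with "guest" and "vda" standing in for the real domain and disk names:

    # Take an external, disk-only snapshot: guest writes go to a new
    # overlay file and the original image becomes the read-only base.
    virsh snapshot-create-as guest snap1 --disk-only --atomic

    # Later, merge the active overlay back into the base image while
    # the guest keeps running, then pivot the domain back to the base.
    virsh blockcommit guest vda --active --pivot

The --active form of blockcommit is the part that needs recent-enough QEMU/libvirt support, which is presumably what Jessie has and Ubuntu 14.04 lacks.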

Daniel Jitnah writes:
2. The main reason for KVM's improvement would be the virtio drivers for disk I/O. (Running an OS install with and without virtio enabled will show the obvious difference.)
[...]
7. Virt-manager works well with both for everything I have ever needed it to do, and it keeps improving.
NB: when using kvm via virtd, you are only able to configure the parts that virtd wraps. For example, I'm not sure you can use -net user *at all*. OTOH virtd is more likely (than a newbie) to pick fast defaults :-)

The exact options virtd uses to invoke qemu/kvm are logged in /var/log/libvirt/qemu/<name>.log (as at 1.2.4-1~bpo70+1). Here's an example; you can see it uses virtio heavily:

    LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin HOME=/root USER=root LOGNAME=root QEMU_AUDIO_DRV=none /usr/bin/kvm -name twb -S -machine pc-0.12,accel=kvm -m 1024 -smp 4,sockets=4,cores=1,threads=1 -uuid 1d43ba9c-1e95-dfaa-f38a-c5581ca14b3a -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/twb.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/srv/kvm/twb.img,if=none,id=drive-virtio-disk0,format=raw -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=23,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=06:00:00:00:34:69,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

PS: last time I looked, the virt-manager GUI lagged behind the main project significantly and was full of bugs. I can cheerfully recommend using virsh(8) instead.
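
For day-to-day use, a handful of virsh subcommands covers most of it (the domain name twb is the one from the log above):

    virsh list --all      # show all domains, running or shut off
    virsh start twb       # boot a domain
    virsh console twb     # attach to its serial console
    virsh dumpxml twb     # print the XML the qemu command line is built from
    virsh edit twb        # edit that XML in $EDITOR, with validation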

I'm surprised; I thought LXC would have pretty much supplanted Xen entirely by now.

participants (7):

- Andrew McGlashan
- Craig Sanders
- Daniel Jitnah
- Jason White
- Russell Coker
- Toby Corkindale
- trentbuck@gmail.com