The Red Hat Enterprise Linux 6 Beta is out, and there is no Xen in it, only KVM. It can operate as a guest in an existing Xen environment, but it cannot act as a Xen host. This isn't news; it was widely trailed. But with the emergence of the Beta there now appears to be no way back: Xen and RHEL are divorced.

For much of The Virtualization Practice audience, this may seem peculiar. Why (and even how) could Red Hat turn its back on a world-class Type 1 hypervisor and run with this peculiar Type 2 thing that nobody has really heard of, and that has such a low profile that people confuse it with the monitor/keyboard switch also called a KVM?

First, on the question of whether it is Type 1 or Type 2: the answer from the KVM community is to question the validity of the question. Red Hat marketing has even been known to say that KVM converts Linux into a Type 1 hypervisor, contrasting that with Xen, where I/O is scheduled by Dom0, which is itself a guest, so it is actually Xen that could be considered a Type 2 hypervisor. This is likely splitting hairs. The point is that the architecture is different from both Xen and ESXi, but because KVM gives guests access to the device drivers inside the Linux kernel, the code path by which I/O, memory, and CPU are accessed or scheduled isn't necessarily any longer than with a Type 1 hypervisor. Red Hat, in its enhancements to the Linux kernel for RHEL 6 (see below), has spent a lot of effort on shortening the code paths that give guests access to devices. It has also allowed guests to share memory pages that contain identical content, reducing overall memory footprint. The real answer here is to either

  1. go and benchmark (preferably with your own application mix), or
  2. buy another server just to be on the safe side, and spare yourself the cost of benchmarking.
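The page sharing mentioned above is KSM (Kernel Samepage Merging), which Linux exposes through sysfs under `/sys/kernel/mm/ksm`. As a rough sketch (the helper function is ours, not any official API), you can read its counters to see how much deduplication is actually happening:

```python
from pathlib import Path

def ksm_stats(base="/sys/kernel/mm/ksm"):
    """Return KSM counters from sysfs, or {} if KSM is unavailable."""
    root = Path(base)
    if not root.is_dir():
        return {}
    stats = {}
    # A selection of the standard KSM sysfs counters:
    # run == 1 means KSM is actively merging pages;
    # pages_sharing / pages_shared approximates how many guest pages
    # are deduplicated into each shared physical page.
    for counter in ("run", "pages_shared", "pages_sharing", "pages_unshared"):
        f = root / counter
        if f.is_file():
            stats[counter] = int(f.read_text())
    return stats
```

On a non-KVM box (or anywhere KSM isn't compiled in) the sysfs directory is simply absent and the sketch returns an empty dict.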

Second, on the question of how Red Hat gets away with this: in the Linux space there are only three real power brokers. Red Hat, where most of the money is; the "Debian Hardcore" of about a thousand individual developers (and one elected lead) who maintain the Debian GNU/Linux base; and Linus Torvalds. Once these three forces are aligned, the rest of the community (mainly consumers and packagers) falls into place. These forces are aligned around KVM. The Debian Hardcore and Linus never liked Xen; maybe it was technical, maybe the faces just didn't fit. Red Hat liked KVM so much they bought the company.

A few minority interests still cling to Xen, but ultimately it makes no sense for most Linux distributions to ship with Xen. If you want KVM you simply go to kernel.org and pick up a kernel. It contains both the kernel and the KVM hypervisor, pre-tested and pre-integrated. Job done. With Xen you need paravirt_ops support in the Linux kernel, then you need Xen and its associated tooling, then you need a Linux kernel, and you need to package them together and, crucially, test them together, not just on one chip architecture but on the broad range of architectures you aim to support. Novell will stick with Xen for a while, as will Oracle, because they are no friends of Red Hat, but when the hypervisor wars become old news they will quietly move to KVM. It's easier. In the future we fully expect to be talking about Xen/Linux in the past tense.
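That "pre-integrated" claim is easy to verify on any modern Linux box: KVM needs only CPU virtualization extensions and the in-kernel kvm module, which creates `/dev/kvm` when loaded. A minimal sketch, assuming a Linux host (the helper name and the flag check are our own illustration, not an official interface):

```python
import os
import re

def kvm_ready(cpuinfo="/proc/cpuinfo", dev="/dev/kvm"):
    """True when the CPU advertises Intel VT-x (vmx) or AMD-V (svm)
    and the kvm module has created its device node."""
    try:
        with open(cpuinfo) as f:
            flags = f.read()
    except OSError:
        return False
    has_ext = re.search(r"\b(vmx|svm)\b", flags) is not None
    return has_ext and os.path.exists(dev)
```

No separate hypervisor package, no matched kernel build: if the check passes, the stock kernel.org kernel can already host guests.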

Now, certainly Red Hat does not simply go to kernel.org. It maintains a custom kernel based on a specific kernel release, with various enhancements downstream of that release, and it migrates this kernel forward in point releases to deal with new hardware as it emerges during the lifetime of the product. The Red Hat Enterprise Linux 6 kernel is a hybrid, with some enhancements from 2.6.34 (the latest current version), some from previous versions, and, over time, enhancements from future versions. It's all a bit messy if you are used to a distribution that tracks the kernel.org releases, but Red Hat has the resources to fork the kernel development stream, and the understanding to do so, because it contributes large chunks to the kernel.

However, unlike us, who are obsessed with virtualization, the folks at Red Hat are fairly pragmatic about it. The key use case for them is the customer who buys some big servers and wants to consolidate a larger number of smaller Red Hat instances onto the new kit. In that case, they license the big servers for Red Hat, run KVM on them, and install Red Hat into Red Hat as guests. KVM is simply there to provide better resource utilization, manageability, and availability for Red Hat within hardware running Red Hat. It is not really about getting Windows running on KVM (although that is possible), or Red Hat on Hyper-V (and yes, that works too).

The argument here is that if you already have Red Hat, you have the skills to run Red Hat as a guest in Red Hat, so let your Red Hat sysadmins virtualize their estate, and your Microsoft sysadmins virtualize theirs. As production use of virtualization grows inside an organization, there will be enough servers around that you'll never really need to mix Red Hat and Microsoft on the same hypervisor in production. (Incidentally, if you did mix the OSes you would find yourself paying extra licensing fees, whichever way you chose to virtualize.) At the end of this, you'll end up with Red Hat virtualization tooling managing your Red Hat/KVM estate, and Microsoft virtualization tooling managing your Microsoft/Hyper-V estate, just as you always did. What you won't need is some all-encompassing infrastructure like vSphere to coordinate it all. As part of the transition off Xen, the Red Hat tooling migrates seamlessly because it is all based on a single API, libvirt. In any case, tooling is less of an issue in this space, because Linux sysadmins expect to be using command-line tools to manage systems, and anything else is a dubious bonus.
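libvirt's job is exactly that single-API role: the same XML domain description and the same virsh commands drive Xen, KVM, and other hypervisors, with essentially only the domain type and connection URI changing. As an illustration (names, paths, and sizing are invented for the example, not taken from any real deployment), a minimal KVM guest definition looks something like:

```xml
<!-- illustrative only; define and start with something like:
     virsh define guest.xml && virsh start rhel-guest -->
<domain type='kvm'>
  <name>rhel-guest</name>
  <memory>1048576</memory>  <!-- KiB -->
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/rhel-guest.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
```

Swapping `type='kvm'` for `type='xen'` (and pointing virsh at a different connection URI, e.g. `qemu:///system` versus `xen:///`) is broadly all the tooling above libvirt has to care about, which is why Red Hat can change the hypervisor underneath without breaking its management stack.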

There is still a debate going on here among the analysts at The Virtualization Practice: is this vision of "O/S subsumes virtualization" right, or is it conversely "hardware subsumes virtualization", which is where VMware seems to be heading?

Mike Norman (104 Posts)

From 2009 to 2014 Dr Mike Norman was the Analyst at The Virtualization Practice for Open Source Cloud Computing. He covered PaaS, IaaS, and associated services such as Database as a Service from an open source development and DevOps perspective. He has hands-on experience in many open source cloud technologies, and an extensive background in application lifecycle tooling; automated testing (functional, non-functional, and security); digital business; and DevOps. In 2014 he moved on to become Cloud Services Architect at JP Morgan Chase.

