In-Kernel vs. VSA: The Phony War of Hyperconverged Marketing

Working on the edges of marketing is interesting. As a technical person, I sometimes find that marketing people do strange things. I find it particularly funny when marketing departments from competing vendors have public arguments that are irrelevant to their customers. I see that going on now between some of the hyperconverged infrastructure (HCI) vendors, who are arguing over whether in-kernel or virtual storage appliance (VSA) based storage clustering is better. While it is an architectural point of difference, I don’t think that customers care, since it doesn’t change how they choose or use HCI. I’ll take a look at the differences between in-kernel and VSA, then close with a review of what customers actually care about.

In-kernel means that the same piece of software that delivers CPU and RAM to the VMs also owns the physical disks: the software that turns local disk into redundant, shared storage sits inside the hypervisor. The top examples of in-kernel are VMware’s VSAN and Scale Computing’s HC3. Both put the storage cluster software inside the hypervisor and deliver special-purpose storage that can only be consumed by that hypervisor: ESXi for VMware and KVM for Scale. In-kernel is only an option for vendors that can modify the hypervisor, meaning hypervisor vendors or developers of custom open-source hypervisor variants, and each in-kernel solution supports just one hypervisor. Vendors of in-kernel solutions say that they are more efficient, as the IO from a VM only needs to pass through one (hypervisor) IO stack. Detractors point to the inherent single-hypervisor and single-vendor restrictions.

VSA means that a virtual machine owns the physical disk and is responsible for turning local storage into redundant shared storage. Almost every hyperconverged infrastructure vendor uses a VSA; they write their own storage cluster software that runs inside a VM. Because the software is inside a VM, there is no requirement to modify the hypervisor. The storage cluster delivers a standard IP-based storage protocol to the hypervisor, usually NFS, iSCSI, or SMB3. One nice thing is the ability for the same VSA to support multiple storage protocols and, therefore, multiple hypervisors. Nutanix uses this flexibility as a VM migration method, migrating VMs from vSphere onto its own Acropolis platform. The downside of VSA is that the IO from the VM goes to the hypervisor, which then passes it to the VSA before the IO is passed to the physical disks. With VSA, the IO passes through two IO stacks: hypervisor and VSA.
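
To make the IO-path difference concrete, here is a minimal, purely illustrative sketch in Python. The hop names and the idea of counting “IO stacks” are my own labels for this example, not vendor terminology or measured behaviour; it simply contrasts the single stack of an in-kernel design with the two stacks a VSA design traverses:

    # io_path_sketch.py -- illustrative only; hop names and the notion of an
    # "IO stack" here are assumptions made for this example, not vendor terms.

    IN_KERNEL_PATH = ["guest VM", "hypervisor IO stack", "physical disks"]

    VSA_PATH = ["guest VM", "hypervisor IO stack",
                "VSA IO stack (NFS/iSCSI/SMB3 over IP)", "physical disks"]

    def io_stacks(path):
        """Count the software IO stacks an IO request crosses before reaching disk."""
        return sum(1 for hop in path if "IO stack" in hop)

    if __name__ == "__main__":
        for name, path in (("in-kernel", IN_KERNEL_PATH), ("VSA", VSA_PATH)):
            print(f"{name:10s} {' -> '.join(path)}  [IO stacks: {io_stacks(path)}]")

Whether that extra hop matters in practice depends on whether the resulting performance still meets the application’s requirements, which is the point the rest of this article makes.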

So, in-kernel is only an option for a very small number of vendors, and they talk up IO efficiency. The reality for customers is that they only need the HCI to deliver the IO performance that their applications require; the simplified management that is central to HCI means not caring too much about how that performance is achieved. VSA is available to far more vendors, and they talk up flexibility. The flexibility to choose a different hypervisor does seem to be attractive to customers when they buy an HCI. The conversation usually goes something like “We want to use our current hypervisor with HCI but would like to know we can change later.” My suspicion is that few will ever migrate their HCI from one hypervisor to another, and that it is simply the idea of having options that is appealing.

If customers don’t care about in-kernel versus VSA, then what do they care about when they select an HCI? It is a complex decision, and each customer has a different mix of requirements. Some care a lot about scalability. At small scale, Scale Computing and SimpliVity have solutions that are lower in cost than some other HCIs; at massive scale, Nutanix has a strong story. Simplicity is a common thread in HCI, but what that simplicity means depends on the vendor: sometimes it is integration with existing tools, and sometimes it is a new, simple tool. Some customers are constrained to buying from specific server vendors, so partnerships are important: Springpath and SimpliVity are available on Cisco hardware, while Nutanix is available from Dell. But the big challenge is getting HCI considered at all. The real competition for HCI vendors is not yet other HCI vendors; it is traditional silo-based infrastructure with separate servers and storage arrays.

The battle of in-kernel vs. VSA for hyperconverged storage is a lot of smoke and noise, but it signifies nothing. Don’t get me started on whether “hyper” means four dimensions, meaning you must converge four resource types. This is irrelevant and misguided point-scoring. HCI vendors need to focus on the value that they deliver to customers rather than fighting among themselves about who has the best or purest hyperconverged solution.

Alastair Cooke
Alastair Cooke is an independent analyst and consultant working with virtualization and datacenter technologies. Alastair spent eight years delivering training for HP and VMware as well as providing implementation services for their technologies. He tells the stories that help partners and customers understand complex technologies. Alastair is known in the VMware community for contributions to the vBrownBag podcast and for the AutoLab, which automates the deployment of a nested vSphere training lab.

5 Comments on "In-Kernel vs. VSA: The Phony War of Hyperconverged Marketing"

Guest
Alastair, the article is well written and a good read. It does glide over a set of salient facts when it comes to the SMB space that I feel are of significance. VSAs are not free (from a resources-consumed perspective). In the enterprise space, that is of lesser importance due to what one would gain from a data and platform management perspective. However, for the SMB space, that is very much not the case. For example, SimpliVity does a wonderful job with data reduction (a very good thing for the large enterprise that has hundreds of TB of…
Admin
Hello Alan, I run a small environment that uses both in-kernel and VSA. I can run my whole environment on either with what appears to be the same impact on resources. The costs are also roughly the same. So given this, the perceived differences have to be greater to make any sort of declaration on which is better. For me it comes down to features, price, and data availability and protection. Each organization’s environment is different enough that generic speeds and feeds really do not make much of a difference. It ends up about perception. My environment tells me that…
Guest
Hi Edward, good to meet you. While I understand what you are saying, my point goes to the fact that for a VSA-based approach to run, it has to be over-provisioned from a hardware spec to get to the same baseline as an in-kernel approach. For example, Scale has an entry line of HC3 systems called the HC1000 that happily runs 10-20 VMs in HA, has a price point in the low 20K range, and is powered by just the same resources as the VSAs use in other approaches (12 cores and ~100 GB usable RAM), but rather than using them…
Admin
Overcommitting resources? Sorry, I am not buying that argument either. Most folks are already over-provisioned for CPU and memory, but not storage. So whether I use one or the other is just moving the allocation around the system. Will I use less storage? No. Less memory? Probably not; you can always use more. Will I be using less CPU? Not really. All I am doing is moving things around depending on the technology I use. Each allocates something, but most things will not change. So once more it comes down to functionality, price, and availability, not where I allocate the resources (or are…