DataCenterVirtualization

How Many Models Should a Hyperconverged Vendor Have?

One of the key features of a hyperconverged infrastructure (HCI) solution is appliance-based scale-out architecture. A workload is housed on a collection of these appliances, which are the standard building blocks. The number of blocks is selected to deliver sufficient resources for the workload. But just how standard are these building blocks? Over time, I’ve seen HCI vendors offering quite a bit of variation across their models. Does this reduce or negate the value of the scale-out nature of hyperconverged?

If I build a house out of bricks, I expect all the bricks to be identical. In practice, that isn't true. The red clay bricks you see from the outside may all be the same, but there are likely cheaper concrete bricks in the parts you don't see. A different brick may be needed for a different purpose, particularly since the concrete ones are so much cheaper to make and to build a wall with.

Most hyperconverged infrastructure vendors start with different sizes of nodes. The first differentiation is usually the amount of compute (CPU and RAM) per node. Then comes the size of the hard disks and the amount of flash storage per node. Once you add all-flash options and options that simply expand disk or compute capacity, you can end up with a lot of models. This does seem to take away from the simplicity that hyperconverged is supposed to offer.

Hyperconverged Infrastructure: Simplicity and Scalability

Hyperconverged infrastructure is expected to offer simplicity and scalability as its core values. Appliances are sized like T-shirts: small, medium, and large. Clusters are built by adding the same size of appliance until the required capacity is reached. Physical servers are interchangeable and replaceable. The appliance is really a commodity, because any appliance in the cluster can run any workload, and everything is abstracted away by a great management tool. This simplicity is the promise, and in many environments HCI is that simple. Other environments make demands that break some of these basic expectations.

Differentiators

One key differentiator between environments is how hard each individual workload drives the infrastructure. Take a large VDI environment as an example. Each VM is a small load, relative to the size of a typical virtualization host. Yet, while each desktop is a small load, there are so many desktops that the total workload is huge. This is ideal for hyperconverged infrastructure, as the workload is also scale-out. One can put lots of small workloads on a node until it is full and then keep adding nodes until the entire workload is satisfied.
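The scale-out arithmetic here is simple enough to sketch. The numbers below are hypothetical, purely to illustrate how a large VDI estate maps onto identical nodes:

```python
import math

def nodes_needed(total_desktops: int, desktops_per_node: int) -> int:
    """How many identical HCI nodes are needed to host the whole VDI estate.

    Round up: a partially filled node is still a whole node.
    """
    return math.ceil(total_desktops / desktops_per_node)

# Assumed figures: 2,000 desktops, roughly 100 desktops per node.
print(nodes_needed(2000, 100))  # 20 identical nodes
print(nodes_needed(2001, 100))  # 21 - one extra desktop forces another node
```

Because every desktop is small relative to the node, the only sizing decision is the node count, which is exactly the simplicity HCI promises.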

At the opposite extreme is, for example, a large Oracle environment. A single VM may need 16 CPU cores and 256 GB of RAM. This VM might occupy 50% of the resources of an HCI node. I doubt you would want to put more than one of these VMs on each node. You might even need to buy a whole cluster of specialized nodes just for this workload: a little cluster with only extra-large nodes or ones with more solid-state storage acceleration.
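The article's 50% figure follows from whichever resource the VM exhausts first. A minimal sketch, assuming a node with 32 cores and 512 GB of RAM (illustrative specs, not any vendor's actual model):

```python
def fraction_of_node(vm_cores: int, vm_ram_gb: int,
                     node_cores: int, node_ram_gb: int) -> float:
    """Fraction of a node consumed by the VM's most constrained resource."""
    return max(vm_cores / node_cores, vm_ram_gb / node_ram_gb)

# The Oracle VM from the article: 16 CPU cores and 256 GB of RAM.
used = fraction_of_node(16, 256, node_cores=32, node_ram_gb=512)
print(f"One VM uses {used:.0%} of the node")  # 50%, matching the article

# A RAM-heavy VM shows why the max() matters: 8 cores but 384 GB of RAM
# still consumes 75% of the node, even though the cores are barely used.
print(f"{fraction_of_node(8, 384, 32, 512):.0%}")  # 75%
```

When one VM consumes half a node, the T-shirt sizing model breaks down, which is what pushes you toward extra-large or storage-accelerated models.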

Mixing Hyperconverged Infrastructure Node Types

Each time we are driven to mix HCI node types in a data centre, we increase complexity. This is the opposite of the drive for simplicity that takes us to HCI in the first place. On the other hand, an HCI deployment will probably still be significantly less complex than any other way to accommodate the workload. In the end, we need to build and operate a platform that delivers the business's workloads. If the most cost-effective way to do this is to have multiple HCI silos, then be thankful that there are options in HCI hardware.

If you are faced with deploying two or more models of HCI for a project, it is worthwhile evaluating whether using just the more powerful model would provide a viable solution. The slightly higher purchase cost could be offset by lower operational cost. Having fewer clusters may also mean less capacity is locked up for high availability, resulting in lower cost. As always in design, a decision made later in the process can be a reason to revisit earlier decisions.
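The high-availability point is easy to quantify. Assuming each cluster reserves one node of N+1 failover headroom (node counts below are illustrative, not from the article), merging two small clusters into one larger one frees up reserved capacity:

```python
def usable_fraction(nodes_per_cluster: int, clusters: int,
                    ha_spares_per_cluster: int = 1) -> float:
    """Usable capacity after reserving N+1 failover headroom per cluster."""
    total = nodes_per_cluster * clusters
    reserved = ha_spares_per_cluster * clusters
    return (total - reserved) / total

two_silos = usable_fraction(4, clusters=2)  # two 4-node clusters
one_big = usable_fraction(8, clusters=1)    # one 8-node cluster
print(f"Two 4-node clusters: {two_silos:.1%} usable")  # 75.0%
print(f"One 8-node cluster:  {one_big:.1%} usable")    # 87.5%
```

Every extra silo pays its own HA tax, which is one concrete way the "just buy the bigger model" option can come out cheaper overall.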

Alastair Cooke
Alastair Cooke is an independent analyst and consultant working with virtualization and data centre technologies. Alastair spent eight years delivering training for HP and VMware, as well as providing implementation services for their technologies. He has a knack for telling stories that help partners and customers understand complex technologies. Alastair is known in the VMware community for contributions to the vBrownBag podcast and for the AutoLab, which automates the deployment of a nested vSphere training lab.

1 Comment on "How Many Models Should a Hyperconverged Vendor Have?"

Guest

I can’t comment on the broader picture, but VDI is becoming increasingly difficult to pigeonhole into just S, M, and L HCI appliances. GPU support is moving from the fringes to mainstream technology, making everything built to EVO:RACK specifications obsolete. Meanwhile, GPU virtualization (we can’t in all fairness refer to it as vGPU any more, now that NVIDIA has trademarked the term and AMD has announced its own competing technology) is so much more complex in terms of resource allocation that a one-size-fits-all approach is no longer good enough.