DataCenterVirtualization

Scale-Out Is a Benefit to HyperConverged


I recently upgraded my nodes from 96 GB of memory to 256 GB of memory, and someone on Twitter stated the following:

@Texiwill thought the trend today is scale out not scale up? #cloud

The implication was that you never upgrade your hardware: you buy new or you move to the cloud. Granted, both options are beneficial. However, buying new and adding to your environment may not be necessary, and you have most likely already entered the cloud through SaaS applications and perhaps some IaaS. The question remains: upgrade existing hardware, buy net new, or go to the cloud? When should you do any of these? Or should you at all?

Currently, hyperconverged environments sell you a specific amount of compute, network, memory, and storage as a unit. That unit cannot necessarily be grown without violating a licensing agreement or statement of what is under support. In this case, the only way to grow your environment is to buy net-new hardware and add it to your existing hyperconverged environment. The hyperconverged units then become the building blocks of your virtual or cloud environment.

But let us look at a real environment for a moment. The current capacity on each node is:

CPU: 30% Utilized

Memory: 95% Utilized

Network: 50% Utilized

Storage: 50% Utilized

These numbers tell me first that the environment is at its memory limit. If a node fails, I had better have enough spare memory capacity to run all my workloads on the remaining nodes, but I doubt that is the case. So a decision needs to be made: buy more memory, buy net-new hardware, or move workloads to the cloud.
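The failover math here is simple but worth making explicit. A minimal sketch, assuming a hypothetical cluster of four identical nodes (the article does not state the node count), shows why 95% memory utilization leaves no room to absorb a node failure while 30% CPU utilization does:

```python
def can_survive_node_failure(nodes, util):
    """Check whether N-1 nodes can absorb the load of one failed node.

    nodes: number of identical nodes in the cluster
    util:  fractional utilization of a resource on each node (e.g., 0.95)
    """
    total_used = nodes * util       # total load, measured in node-capacities
    surviving_capacity = nodes - 1  # capacity remaining after one node fails
    return total_used <= surviving_capacity

# A hypothetical four-node cluster at the utilization levels above:
print(can_survive_node_failure(4, 0.95))  # memory at 95%: False
print(can_survive_node_failure(4, 0.30))  # CPU at 30%: True
```

At 95% memory utilization, four nodes carry 3.8 node-capacities of load, but only 3 node-capacities survive a failure, so the workloads cannot all restart.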

There are a number of capacity management tools that will help you predict when such capacity limits will be reached and extend the lead time for making such decisions. Nearly all virtualization management and virtualization performance management tools contain some level of capacity planning. However, VMTurbo takes things a step further, giving you a cost-based analysis of your systems and a what-if capability to aid in planning and decision making.

VMTurbo actually told me to deploy a new host like one of my others, which is a valid scale-out decision. But looking at the hosts, you would notice that memory was seriously underpopulated. A new node or nodes would cost roughly $20,000, while more memory for all nodes in the cluster would cost roughly $10,000. From a cost perspective, it is far cheaper to add more memory to my nodes than to add another node. This does not include the cost of licensing, cooling, and power associated with new hardware. In the long run, a new set of systems could end up costing more.
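The capital costs above come from the article; the recurring costs do not. A minimal sketch, using illustrative placeholder figures for the annual power, cooling, and licensing a new node would add, shows how the gap widens over time:

```python
# Rough comparison of the two growth options discussed above.
# Capex figures come from the article; the annual operating figure
# is an illustrative placeholder, not a real quote.
new_node_capex = 20_000        # one additional node
memory_upgrade_capex = 10_000  # more memory across all existing nodes

# A new node also carries recurring costs that a memory upgrade avoids
# (hypothetical per-year total for power, cooling, and licensing).
new_node_opex_per_year = 3_000
years = 3

new_node_total = new_node_capex + new_node_opex_per_year * years
print(f"Scale out: ${new_node_total:,} over {years} years")   # $29,000
print(f"Scale up:  ${memory_upgrade_capex:,} over {years} years")  # $10,000
```

Even before counting operating costs, the upgrade is half the price; once recurring costs are folded in, the scale-out option costs nearly three times as much in this scenario.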

Yet, if I were using a hyperconverged environment from a vendor, instead of one I built myself, I would end up needing to buy new hardware to scale out without violating support and other agreements, increasing the costs for power, cooling, and licensing.

In effect, you always have to look at the costs associated with making a decision as part of that decision. Costs cannot be ignored. This is the one failing I see with hyperconverged solutions today: you are handcuffed to a specific method of growth. It is also one problem I see with the cloud, which Amazon and others are trying to address with daily reports on monies spent. Some thought needs to be put into predicting the costs associated with growth using an upgrade, building-block, or cloud approach.

If we do not have the ability to plan for such growth, we cannot contain the costs. In my small environment, it was a very easy cost-based decision to make, as the nodes have an immense amount of headroom (we can add more cores, more memory, more storage, etc.), so an upgrade seemed reasonable. Yet, for systems without as much headroom, net new systems or cloud-based systems make quite a bit of sense. How do you plan which way to go? Are there tools to help make this decision based on cost analysis?

Do you have the tools to curtail your costs? Can you ensure growth with your current hyperconverged or converged environment? How does the cloud fit into your cost-based decisions? Do you have enough data to make those decisions?


Edward Haletky
Edward L. Haletky aka Texiwill is an analyst, author, architect, technologist, and out of the box thinker. As an analyst, Edward looks at all things IoT, Big Data, Cloud, Security, and DevOps. As an architect, Edward creates peer-reviewed reference architectures for hybrid cloud, cloud native applications, and many other aspects of the modern business. As an author he has written about virtualization and security. As a technologist, Edward creates code prototypes for parts of those architectures. Edward is solving today's problems in an implementable fashion.