In the ever-changing world of IT, today's legacy was yesterday's future: namely, hypervisors. Hypervisors are now considered legacy technology, even though they remain seriously underutilized; fear, uncertainty, and doubt keep many organizations from using these resources to their fullest. The new technology is containers. However, where are the operational tools to support containers? Where are the procedures? Where are the developers who understand distributed systems? We are moving toward containers at lightning speed without answers to those questions and many more. To move to containers today, we need a strategy.
We all try to increase workload density, and we sometimes succeed, but higher density escapes many folks, whether they are in a cloud or running an on-premises virtual environment. Are there ways to help us gain more density within our environments? Is it still fear that holds us back? Are there real issues we still need to solve? Why are most environments running with CPU to spare? Is there still a fear of running too many things on any one system?
Containers and other technologies are moving administrators, developers, and even operations folks up the stack. In other words, we have abstracted away the hardware and the operating system; next, we will abstract away middleware, and eventually everything but the code itself. However, when we do that, we no longer train people to be systems engineers, and we lose the ability to do root cause analysis. We have seen this many times in recent years, and it may only get worse. Root cause analysis is part knowledge and part tools, but most of all an understanding of the system underneath the code. We are fast approaching a time when this skill may become a lost art.
In Part I of this series on whether containers change enterprise IT, we discussed the impact of containers on security. In Part II, we discussed the impact on data protection. Now, let us discuss the impact on performance and other IT management tools. The introduction of containers into enterprise IT tends to raise still more questions, and it will change IT processes. So far, for security and data protection, the tools in use have not changed radically. However, do the tools change for performance and IT management? Do the answers to the same questions change? Will our processes change? That depends on where the tools and processes are focused.
While walking the Solutions Exchange at VMworld last week, apart from getting very sore feet, I reintroduced myself to the near-line storage acceleration and flash-cache vendors: companies like Infinio and PernixData, as well as HCI vendors like SimpliVity, Pivot3, and others.
We all need performance and capacity management tools to fine-tune our virtual and cloud environments, but we need them to do more than just tell us there may be problems. Instead, we need them to find the root causes of problems, whether those problems are related to code, infrastructure, or security. The new breed of applications designed for the cloud à la Netflix, as well as older technologies instantiated within the cloud, need more in order to tell us about their health. Into this breach step a new set of tools, as well as an existing set of tools.