Part III: Do Containers Change Enterprise IT?

In Part I of this series on whether containers change enterprise IT, we discussed the impact of containers on security. In Part II, we discussed the impact on data protection. Now, let us discuss the impact on performance and other IT management tools. The introduction of containers to enterprise IT raises new questions, and those questions will change IT processes. So far, between security and data protection, the tools used have not changed radically. However, do the tools change for performance and IT management? Do the answers to the same questions change? Will our processes change? That depends on where the tools and processes are focused.

If your tools are focused on operating systems and virtual machines and not on the applications, then your tools, processes, and answers to the following set of questions will drastically change. Why is that? Because an application-centric management strategy places the application first and the infrastructure second. With containers, this is a necessity. In effect, when using containers, we have abstracted out the operating system and often run more than one container per operating system. However, if we run only one container per operating system, as with VMware Photon, we end up with all performance and IT management tools still working as expected, which is one reason Photon has this architecture. VMware has invested heavily in virtualizing the operating system. As such, all its tools—from vRealize Infrastructure Navigator to vRealize Operations, Orchestrator, and Automation—work within the concept of a virtual machine and, indirectly, the application, if there is only one container per virtual machine.

This goes along the same lines as what many people have been advocating for years: put one component of an application in one virtual machine, and no more than one. Why only one? It provides segmentation, allows certain tools to report better information, and simplifies an environment by removing cross-component interference from shared libraries and the like.

Yet, that is also what containers can do for you. However, when you think in terms of containers, you think only about the application and its immediate non–operating-system dependencies. This implies that we need to reconsider how we look at performance metrics. Are operating system metrics useful when there is more than one container running? How do you know the details of any one container? Do you even care about the details of any one container, or do you look at the application and the services it uses?
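To make the metrics question concrete: a host-centric tool reports one aggregate CPU number for the operating system, while an application-centric view needs that usage attributed per container and rolled up by service. A minimal sketch of the difference, using made-up sample data (the container names, services, and readings are all hypothetical):

```python
from collections import defaultdict

# Hypothetical per-container CPU samples (percent of one core),
# tagged with the application service each container implements.
samples = [
    {"container": "web-1", "service": "frontend", "cpu_pct": 12.0},
    {"container": "web-2", "service": "frontend", "cpu_pct": 14.5},
    {"container": "api-1", "service": "api",      "cpu_pct": 30.0},
    {"container": "db-1",  "service": "database", "cpu_pct": 22.5},
]

def host_view(samples):
    """What an OS-centric tool reports: one aggregate number."""
    return sum(s["cpu_pct"] for s in samples)

def application_view(samples):
    """What an application-centric tool needs: usage per service."""
    per_service = defaultdict(float)
    for s in samples:
        per_service[s["service"]] += s["cpu_pct"]
    return dict(per_service)

print(host_view(samples))         # 79.0 -- hides which service is busy
print(application_view(samples))  # per-service breakdown
```

The host view cannot tell you whether the frontend or the database is consuming the CPU; the application view can, but only if something attributes each sample to a container and each container to a service.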

If you look at the services the application uses, are you approaching this the way New Relic does, with its code and database call approach to monitoring? Or do you rely on external tools such as ExtraHop? Or should you concentrate only on the APIs in use (of which SQL is one) and employ tools like VividCortex or SmartBear ServiceV Pro? Do you need all these tools to get a full picture of your environment? If so, you really need to consider single-pane-of-glass solutions.

Whether you are currently looking at the wire, the application, the API, the database, or the operating system, you will need to rethink all of it for containers. Specifically, you need to rethink how the agents used for performance management are inserted into the operating system that hosts the containers, or into each container itself, where possible. Even when you consider everything that happens within containers, you cannot forget the underlying substrates of IT that still enable management. Can these substrates be abstracted so that there is a supercontainer that contains the necessary bits?

Containers and the Substrates of IT


The issue in many cases is not what you can stick into each container, but how to get per-container data out of the tools you already use, which either do not understand containers yet or never will. Some tools, such as New Relic, VividCortex, and SmartBear ServiceV Pro, fit quite well into containers; others, such as VMTurbo, Zenoss, and Virtual Instruments, currently do not. That will change over time, but it does not imply such tools are not useful. You just have to rethink your container deployments to ensure your current performance management tools report useful data while you wait for them to grow into a containerized world.
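One practical way to get per-container data out of host-level tools is to map each process the tool already sees back to its container. On Linux, a process's cgroup membership, exposed in /proc/&lt;pid&gt;/cgroup, typically embeds the container ID in its path. A sketch of that mapping, run here against sample cgroup lines rather than a live host (the PIDs, paths, and IDs are illustrative):

```python
import re

def container_id(cgroup_line):
    """Extract a Docker-style container ID from one /proc/<pid>/cgroup
    line, or return None for a process outside any container."""
    # Matches .../docker/<64-hex-char-id> as seen in Docker cgroup paths.
    m = re.search(r"/docker/([0-9a-f]{64})", cgroup_line)
    return m.group(1) if m else None

# Illustrative lines as they might appear in /proc/<pid>/cgroup.
sample_lines = {
    101: "0::/docker/" + "ab" * 32,
    102: "0::/docker/" + "cd" * 32,
    103: "0::/system.slice/sshd.service",  # host process, not in a container
}

pid_to_container = {pid: container_id(line)
                    for pid, line in sample_lines.items()}
for pid, cid in pid_to_container.items():
    print(pid, cid[:12] if cid else "host")
```

With such a mapping in hand, per-process data from an existing agent can be re-aggregated per container, even though the agent itself has no notion of containers.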

This is one reason VMware's Photon platform is intriguing: it bridges the gap between yesterday and tomorrow. Can we just jump to containers? If so, how will all the performance management and IT management tools jump as well? Can they today? Do organizations lose capabilities that simply are not there yet?

Granted, if you are a developer, you want to use the latest tools, but if you are in operations, you need to use what is possible. DevOps requires these teams to talk to each other so that the right tools, the right architecture, and the right management are used to satisfy everyone's needs today, while allowing the tools to change and grow into the future.

Where are you on this path? How will your tools need to change?

Edward Haletky
Edward L. Haletky, aka Texiwill, is an analyst, author, architect, technologist, and out-of-the-box thinker. As an analyst, Edward looks at all things IoT, big data, cloud, security, and DevOps. As an author, he has written about virtualization and security. As an architect, Edward creates peer-reviewed reference architectures for hybrid cloud, cloud-native applications, and many other aspects of the modern business. As a technologist, Edward creates code prototypes for parts of those architectures. Edward is solving today's problems in an implementable fashion.
