
Container Efficiency


Many parallels are drawn between containers and VMs. At the same time, container evangelists are often quick to position containers as a replacement for virtualization. As is always the case, the new is different from the old but can learn from the past. Containers and VMs address different parts of IT/IS, so they are not directly competing. Containers do not always remove the need for VMs, nor are containers always deployed where VMs could be used. Container deployments are likely to suffer some of the same issues that VMs experience with enterprise customers.

Overhead

A frequently cited benefit of containers is greater resource efficiency than VMs. This is usually the rationale for running container hosts on bare metal, rather than inside VMs. On bare metal, there is only one OS instance, and therefore one OS overhead, per physical server. With VMs, each VM carries its own OS overhead, plus the hypervisor's overhead on top, so there are multiple overheads per physical server. The reasoning goes that with less overhead, the application gets a greater percentage of the purchased resources, which in turn reduces the cost of running the application. In cloud-scale deployments, this is definitely true. A single containerized application may need to be run in thousands of instances on hundreds of physical hosts. It is at these massive scales that containers are most at home and provide the greatest benefit.
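To make the overhead argument concrete, here is a back-of-the-envelope sketch. All of the overhead figures (host OS, hypervisor, guest OS memory) are illustrative assumptions for the sake of the arithmetic, not measurements of any particular platform.

```python
# Illustrative capacity comparison: bare-metal container host vs. VMs.
# Every overhead figure below is an assumption chosen for illustration.

def usable_bare_metal(total_gb, host_os_gb=2):
    """One OS instance, and therefore one OS overhead, per server."""
    return total_gb - host_os_gb

def usable_virtualized(total_gb, hypervisor_gb=4, vm_count=10, guest_os_gb=2):
    """Hypervisor overhead plus a separate guest OS overhead per VM."""
    return total_gb - hypervisor_gb - vm_count * guest_os_gb

server_gb = 256
print(usable_bare_metal(server_gb))   # 254 GB left for containers
print(usable_virtualized(server_gb))  # 232 GB left for applications
```

The absolute difference is modest per server, which is why the saving only becomes compelling when multiplied across hundreds of hosts at cloud scale.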

Isolation

At smaller enterprise scale, there are other limits that affect the efficiency of bare-metal containerization. An enterprise organization may only need to run thirty instances of each containerized application. One of the top security issues with containers is that there is relatively little security isolation between containers on the same host. If each of these containerized applications must be isolated, as is common, then dedicated clusters of container hosts for each application are essential. If these container hosts are physical, then there is a big efficiency problem: lots of little clusters of physical servers mean a lot of spare, wasted resources to cope with load spikes. The isolation between VMs is far greater than between containers on the same host, yet VMs can share the same physical servers. A single large hypervisor cluster can host multiple smaller container host clusters, and a larger physical cluster means fewer spare, wasted resources for spikes. If there are multiple applications requiring isolation, then VMs make a lot of sense for hosting containers. VMware has also been talking a lot about its Instant Clone feature, which removes much of the VM overhead of running containers inside VMs. Until container platforms provide better isolation, VMs with containers inside are a real use case for enterprise customers.
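The spare-capacity argument can be sketched numerically. Assume, purely for illustration, a simple policy of one spare host per isolated cluster to absorb spikes or a host failure; the cluster counts and headroom figures are assumptions, not a sizing recommendation.

```python
# Sketch of the spare-capacity math for dedicated vs. shared clusters.
# Headroom policy and counts are illustrative assumptions.

def spare_hosts_dedicated(num_clusters, spares_per_cluster=1):
    """Dedicated physical cluster per application: each carries its own spare."""
    return num_clusters * spares_per_cluster

def spare_hosts_shared(spares=2):
    """One large hypervisor cluster: a couple of spares cover every application."""
    return spares

apps = 10
print(spare_hosts_dedicated(apps))  # 10 idle hosts spread across small clusters
print(spare_hosts_shared())         # 2 idle hosts in one shared cluster
```

Pooling the headroom in one large cluster is what lets VMs host many small, isolated container clusters without each one paying its own spare-capacity tax.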

Another essential element is isolated networking between container instances on separate container hosts. This is exactly why Docker acquired SocketPlane, and we see the fruit of that acquisition in the multihost networking support in Docker v1.9. Using VXLAN, Docker containers can share a private network even when they reside on separate physical hosts. This is one of the areas of isolation that is essential to enabling multiple containerized applications to share the same physical hosts.
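A rough sketch of what this looks like from the command line follows. It assumes a Docker v1.9-era setup in which the daemons are already configured against a shared key-value store (such as Consul) for cluster state; the network name, subnet, and container names are illustrative.

```shell
# Hedged sketch: multihost overlay networking with Docker v1.9+.
# Assumes daemons share a key-value store for cluster state.

# Create an overlay network visible to all hosts in the cluster
docker network create -d overlay --subnet=10.0.9.0/24 app-net

# On host A: start a container attached to the overlay
docker run -d --name web --net=app-net nginx

# On host B: a container on the same private network can reach "web"
docker run --rm --net=app-net busybox ping -c 1 web
```

Each application can get its own overlay network, so containers from different applications share physical hosts without sharing a network segment.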

Zombies

One of the dirty secrets of many virtualization implementations is zombie VMs. These are VMs that were deployed for some purpose but not removed when the purpose passed. The VMs live on forever, consuming resources without delivering value. Data center zombies reduce the efficiency of converting data center spend into business value. Data center zombies are nothing new. In the previrtualization era, there were zombie servers that lived in data centers. They consumed resources and delivered no business value. Virtualization simply made the problem worse by allowing zombies to be created in seconds rather than requiring weeks for physical hardware delivery. Zombies are really a symptom of poor business practices in IT. In a well-run IT department, zombies are decommissioned when the need goes away. I suspect that zombie containers will also be a thing, particularly as container automation and clustering develops. In a poorly managed container environment, there will be cases of hundreds of container instances being deployed and their purpose subsequently lost. Data center zombies are a result of poor governance, not of a particular technology.

Unfortunately, there are few technologies that can be implemented in exactly the same way for every customer. Containers in the enterprise will be deployed differently from containers in cloud providers. At the same time, it is important to remember that the more things change, the more they stay the same. Technology change does not cure people problems. A poorly managed environment with containers can still be poorly managed and fail to achieve business benefits.


Alastair Cooke
Alastair Cooke is an independent analyst and consultant working with virtualization and datacenter technologies. Alastair spent eight years delivering training for HP and VMware as well as providing implementation services for their technologies. Alastair creates clear, story-driven communication that helps partners and customers understand complex technologies. Alastair is known in the VMware community for contributions to the vBrownBag podcast and for the AutoLab, which automates the deployment of a nested vSphere training lab.

