I agree that containers will be the future of computing. However, that may not happen anytime soon. Containers have many hurdles to get over before they can take over the world. Some of these hurdles are related to politics within organizations, and others are technical. Let me discuss the technical ones.
Containers seem to be going strong within virtual environments and self-contained physical environments. The reasons are myriad. Yet some claim that virtualization is dead (I do not agree), while others claim that containers on bare metal with CoreOS, Red Hat Atomic, or some other container-built OS are the future (which is possible). Neither will happen unless we consider why clouds are so popular. Would a cloud give up the automation and tools it has just to go back to bare metal with containers? Perhaps, but there are some hurdles.
Hurdle #1: Multi-Tenancy
This is a huge hurdle. I have yet to hear of a Docker environment being used outside of virtual machines in a multi-tenant cloud. Why? Because Docker and Docker-like containers have no concept of tenancy. They get deployed as containers on an operating system that was designed not for multi-tenancy, but to run containers. We do, however, have a good grasp of multi-tenancy with networking and virtual machines: we can assign virtual machines to tenants, and we could even assign hardware to tenants, though that looks more like hosting than a public cloud service. Both are possible. Until Docker and other container software have multi-tenancy baked in, however, containers will always need to run within other multi-tenant constructs, such as VMs, virtual data centers, and the like.
The example I consider is 300 containers from five different companies running across ten physical servers. Are we sure those containers will not leak data between them? Are we sure they will not share the same networks without proper protections and protocols?
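Today, tenancy has to be approximated from outside the container engine. A minimal sketch of that workaround, using per-tenant Docker networks (the network and container names here are hypothetical examples, not anything from a real deployment):

```shell
# Sketch: approximating tenancy with per-tenant Docker bridge networks.
# Names (tenant-a-net, a-web, etc.) are hypothetical.
docker network create --driver bridge tenant-a-net
docker network create --driver bridge tenant-b-net

# Containers attached only to tenant-a-net cannot reach tenant-b-net
# by default. But this is isolation bolted on from outside: the Docker
# engine itself still has no notion of which tenant owns a container.
docker run -d --network tenant-a-net --name a-web nginx
docker run -d --network tenant-b-net --name b-web nginx
```

This separates tenant traffic at the network layer, but the containers still share one kernel and one engine, which is exactly the gap the hurdle describes.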
Hurdle #2: Security
Security is a huge hurdle for containers. Security professionals are often not involved in container build-out, and may not even be involved in the build-out of the container OS, such as CoreOS or Red Hat Atomic. Instead, security is left to developers. But developers who are looking to use containers to push out product may not have a security mindset. Given the number and types of breaches we are seeing, I do not think this is a good practice, but developers are not normally security folks.
So, security teams need to get involved, but they are more dashboard-driven than code-driven. Tools like Twistlock address some of these issues, as does Docker Content Trust, but this is not enough. The code needs to be secure, the container needs to be secure, the network needs to be secure, and so on. That does not even touch on encryption within a container or how to prevent data loss. Today, security is applied per operating system, not really per container (Twistlock addresses some of this, as does SELinux). With hundreds if not tens of thousands of containers, security needs to be per container and fairly easy, and SELinux, for example, is rarely easy.
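Docker Content Trust, mentioned above, is one of the few controls that is genuinely code-driven today. A minimal sketch of turning it on (the image tag is just an example):

```shell
# Sketch: enable Docker Content Trust so only signed images are pulled.
# DOCKER_CONTENT_TRUST is a real Docker client setting; with it set,
# pulls of unsigned tags fail instead of silently running unverified code.
export DOCKER_CONTENT_TRUST=1

docker pull alpine:latest
```

This verifies who published an image, but note what it does not do: it says nothing about what the code inside the image actually does, which is where tools like Twistlock come in.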
Hurdle #3: Debugging
Many say to treat your containers like cattle: if they are sick, kill them. I live in Texas, and ranchers do not kill their cattle. They quarantine them, bring in the veterinarian, and try to find out what is wrong so that it does not impact the rest of the herd. How do you find out what is wrong within a container? Without good logging and good debugging tools, this becomes more than difficult: it becomes impossible.
What if you kill your cattle and restart them, and they have the same problem? The same security issue? The same breach? You are just postponing the inevitable. We require good logging and good analytics to help find these issues before they infect everything.
How do you easily debug a container?
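One answer, following the quarantine analogy: pause the sick container instead of killing it, then examine it. A sketch of that workflow with standard Docker commands (the container name web-01 is hypothetical):

```shell
# Sketch: quarantine and diagnose a container instead of shooting it.
# Container name web-01 is hypothetical.
docker pause web-01                    # freeze it: stop further harm, keep state
docker logs --since 1h web-01          # what was it saying before it got sick?
docker inspect web-01                  # config, mounts, restart count, exit codes
docker diff web-01                     # which files changed since the image was built?
docker commit web-01 web-01-forensics  # snapshot the quarantined state for offline analysis
docker unpause web-01                  # or: docker rm -f web-01 once diagnosed
```

Even so, this is container-by-container veterinary work. At herd scale, it only works if logs are shipped off the container before it dies, which is the logging and analytics point above.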
Hurdle #4: Migration
Part of managing a herd of cattle is moving them between pastures. If one pasture is sick (say, with the organism that causes husk), ranchers herd the cattle to a new pasture instead of slaughtering them all and starting over. Containers likewise need to migrate to other locations: locations with different operating systems, from on-premises to clouds, and between clouds. Without migration, we may end up re-staging and redeploying all the time into new clouds and locations instead of actually processing work.
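What passes for container "migration" today is closer to save-and-reload than to live migration. A sketch, with hypothetical host and image names:

```shell
# Sketch: crude container "migration" via image transfer.
# Host (new-host) and image (myapp:1.2) names are hypothetical.
docker save myapp:1.2 | ssh ops@new-host docker load
ssh ops@new-host docker run -d --name myapp myapp:1.2

# Caveat: this moves the image, not the workload. Volumes must be
# copied separately, and in-memory state is lost entirely -- nothing
# like vMotion-style live migration of a virtual machine.
```

Docker does have experimental checkpoint/restore support built on CRIU, but it is a long way from the transparent, cross-cloud migration this hurdle calls for.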
Hurdle #5: Whatever as Code
Part of using a container approach is making the infrastructure, including all agents and tools, part of Infrastructure as Code. However, a container sits above the infrastructure. If we cannot deploy the underlying operating system properly, we cannot build containers. At the same time, if all we worry about is containers, then how do we build the operating system? We need a better platform into which we can deploy necessary bits as code as we deploy the containers. (This is solvable using Intigua and some other software for the OS but not yet for containers.)
In addition, we need to implement Security as Code to secure the environment, secure the container, and secure the application. We also need Testing as Code to test before deployment and to continually test after deployment. Further, we need Analytics as Code to run through all the log files, key performance indicators, and deployment results to feed back into data protection, blueprints, and other tools to ensure the investment is protected.
Analytics as Code will ensure that blueprints capture any one-off changes made during day-to-day deploying and fixing that would otherwise be missed. Blueprints might be ignored after deployment, but that would be a bad idea in my book: architectures must match deployments, or mistakes are made. We also need to know how well our containers are doing and what resources we will need as usage grows. This helps with future and additional deployments.
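To make Testing as Code concrete, here is a minimal sketch of a smoke-test gate a pipeline could run before promoting a container. The image name, port, and health endpoint are all hypothetical assumptions for illustration:

```shell
# Sketch: a minimal "Testing as Code" gate in a deploy pipeline.
# Image (myapp:candidate), port, and /health endpoint are hypothetical.
set -e                                           # any failure fails the pipeline
docker run -d --name smoke-test -p 8080:80 myapp:candidate
sleep 5                                          # crude wait for the app to start

# Fail the gate if the health check does not return HTTP 2xx.
curl -fsS http://localhost:8080/health > /dev/null

docker rm -f smoke-test
echo "smoke test passed"
```

The same pattern extends to Security as Code (image scanning as a gate) and Analytics as Code (shipping the test results and logs into whatever analytics pipeline feeds the blueprints).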
As you can see, for containers to really replace current virtualization and cloud systems, they need more tools, more capability, and most of all, more security. Multi-tenancy is crucial, as is migration. The team that develops containerized code should involve security, data protection, compliance, etc. It should break down all barriers, so that the organization can remain agile while keeping itself safe.
In the end, I feel that the marriage of virtualization and containers is far better than either alone. Why cross some of these hurdles when this has already been done well? Some still remain to be crossed, however, such as security.
Latest posts by Edward Haletky (see all)
- Finding your Sensitive Data to Protect - March 27, 2017
- Scale and Engineering - March 23, 2017
- SDS and Docker: The Beginnings of a Beautiful Friendship - March 21, 2017