Container technologies are the new disruption, but in an old way. vMotion heralded the age of containers. They change the fundamental view of computing toward heavily automated, orchestrated, and distributed systems, where high availability lives not within the server and operating system, but within the application itself. Containers themselves are not new, but how we use them has changed how we think about computing and applications.
Containers have been around, arguably, for as long as operating systems have. What is new is the ability to manage containers easily and to interact with them in new ways. To do that, we needed the rest of the ecosystem to catch up with our knowledge of distributed systems. Distributed systems are not new, either: we have been creating them for as long as there have been high-performance computing systems, large systems of computers, and even clusters. But containers herald a new way of thinking about distributed systems. We have moved our thinking from monolithic applications with fingers in many pies to services that can be as small or as large as they need to be. And this is not a new concept, either. So why are containers so important? Why do they qualify as disruption? Containers are a new wrapper around an older set of concepts, a set of concepts that used to be very difficult to put into practice.
Technology has caught up with our dream of creating a fundamental set of building blocks on top of which we can put our applications and let them automatically scale out as needed. While perhaps not everyone's goal, many businesses aim to scale with demand. Virtualization lets us do this, but not very easily across hypervisors or clouds. Containers let us scale the application underlying the business regardless of the underlying environment. We do not care whether we are in Amazon or on premises. That is the disruption of containers.
We boot up a container host running within any paradigm and place our workloads as containers within the host. Do we care where the container host is running? No. Do we care what hardware is in use? No. Do we care about underlying technology? No.
Containers disrupt by moving everything we care about as developers, engineers, architects, and businesses up to a layer where everything underneath is something we no longer need to care about. We manage the application, not the infrastructure. When we need more application, we automatically deploy it, scale it up, and scale it out. If that means deploying another purpose-built container host tailored to our running paradigm, so be it. That is not the application's worry. Containers have simplified our application stack significantly.
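As a sketch of what "manage the application, not the infrastructure" looks like in practice, a Kubernetes Deployment declares how many replicas of a containerized service should run, and the orchestrator places them on whatever hosts are available. The names and image below are illustrative, not from any real deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # hypothetical service name
spec:
  replicas: 3                    # how much "application" we want; host placement is not our concern
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # hypothetical container image
        ports:
        - containerPort: 8080
```

Scaling up then becomes a one-line change or a command such as `kubectl scale deployment web --replicas=5`; the application never knows or cares which container hosts it lands on.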
Granted, there are now groups of people and organizations concentrating on providing services to the container hosts. Someone still has to worry about the underlying environment, but containers remove that one concern from the business's goals and needs. Once more, they have changed how we think about computing. We now concentrate on the application, not the infrastructure. We have moved security controls into the application. We design applications as sets of many smaller components. In essence, containers allow us to ignore everything below the application and to create new distributed applications with ease. This is a necessity for the modern age of the Internet of Things (IoT).
Containers have changed how we think about computing by letting us ignore the underlying paradigm (public, private, or hybrid cloud; virtualization or bare-metal hosts; or any combination of virtual and physical container hosts). By abstracting the underlying infrastructure, we can concentrate on the application. In fact, if we treat hardware as just another container, we can abstract specialized functionality into yet another container within our application. Need hardware acceleration for a function? Query another container that provides it. We have now abstracted away the physical environment. We have not done away with complexity, but we have changed how we think about the next generation of applications. That is the container disruption.
What we worry about now is how quickly we can move data between our containers, and how close that data sits to our containers, purely for performance reasons.
By Edward Haletky