Selling Containers Is Hard


How do you distribute an application that uses containers? This may seem an odd question. Container-based applications are usually associated with Software as a Service (SaaS) applications and public cloud deployment. However, there is still a place for software that is purchased and installed on-premises in a data center. If the software comes as containers that will run inside the customer’s data center, how will it be deployed and managed? How will scaling work, and how will updates be delivered?

Cloud Containers

The usual deployment model for container-based applications is onto a platform controlled by the software developer. When Google’s developers deploy containers, they do so on Google’s own cloud platform; for Google, this is an on-premises deployment onto infrastructure it owns. The same model fits a company that builds its own in-house application using containers. The company will, hopefully, settle on a single container platform, such as Mesos, Kubernetes, or Docker Swarm, and its developers will fit their methods to that one chosen platform. These platforms manage resources and container instances; their whole reason for existing is to make deploying and scaling container-based applications easy. The collection of tools and the workflow are prescribed and defined to suit the company that is developing the software. They usually include an internal source repository such as Git and a continuous integration/continuous deployment (CI/CD) framework such as Jenkins to manage version releases. There is a lot of infrastructure surrounding a container-based application: it does not stand alone. So, where does that infrastructure come from when the containers run on a customer’s premises?

Container Platform

One option would be to use the customer’s own container platform. The customer would deploy Kubernetes, Docker Swarm, or Mesos, and the software vendor would make sure its application could run on whichever platform each customer chose. The vendor would want to retain control of its application, distributing just a container image, so the customer’s container platform would need access to the vendor’s image repository. What about integration between the vendor’s release management and the customer’s change management? What about processes for scaling up and down, both for performance and for version updates? Further, the vendor would need to support multiple container management products, since it cannot dictate what each customer uses. This doesn’t sound like a desirable model for either the customer or the vendor.
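To make the repository-access problem concrete, here is a minimal sketch of what the Kubernetes variant might look like: a customer-side Deployment that pulls the vendor’s image from the vendor’s private registry. All names here (`registry.vendor.example`, `vendor-app`, `vendor-registry-creds`) are hypothetical, not any real product’s configuration.

```yaml
# Hypothetical sketch: customer-run Kubernetes pulling a vendor-supplied image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vendor-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: vendor-app
  template:
    metadata:
      labels:
        app: vendor-app
    spec:
      containers:
        - name: vendor-app
          # Image lives in the vendor's registry, not the customer's.
          image: registry.vendor.example/vendor-app:2.1.0
      imagePullSecrets:
        # Registry credentials the vendor would have to issue to each customer.
        - name: vendor-registry-creds
```

Even this small fragment shows the friction: the vendor must issue and rotate registry credentials per customer, and the equivalent manifest would look different again on Swarm or Mesos.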

Physical Appliance

One option I have heard of is software wrapped in hardware: the physical appliance model. Customers buy a scale-out hardware appliance that runs the payload software. The software runs in containers spread across the cluster of appliances, using cluster management built into the appliances. Customers will only accept this model if the hardware has an intrinsic reason to be part of the solution. One example is the Rubrik backup product: the physical appliance is a scale-out secondary storage solution, and the storage software that runs on each node has a containerized architecture. With this deployment, customers do not need to know or care that the software is in containers; they just buy the physical node capacity they need. All the customer sees is a group of physical appliances that deliver a service.

Virtual Appliance

The second appliance option is a virtual form of the same thing. The software deploys as a virtual appliance that runs just the containers for one application. If a single VM cannot provide enough resources for the application, then a group of identical virtual appliances must be deployed. Ideally, the application inside the appliances would manage its own scaling. One approach is simple performance reporting, letting the administrator know when to deploy another appliance; another is connectivity to the virtualization platform to enable autoscaling of the appliances. Again, the customer doesn’t care that the application is built from containers. All the customer sees is a group of virtual machines that deliver a service.
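The “performance reporting” approach above can be sketched as a simple sizing rule. This is a hypothetical illustration, not any vendor’s logic: given recent CPU utilisation readings from each running appliance, it recommends how many identical appliances to run. The 70% target is an assumed comfort threshold.

```python
import math

def appliances_needed(cpu_percents, target=70):
    """Recommend an appliance count from per-appliance CPU utilisation (%).

    Hypothetical sizing rule: scale the cluster so that the projected
    average utilisation lands near the target threshold.
    """
    if not cpu_percents:
        return 1  # always run at least one appliance
    current = len(cpu_percents)
    avg = sum(cpu_percents) / current
    return max(1, math.ceil(current * avg / target))
```

For example, three appliances averaging 90% CPU would yield a recommendation of four, while two appliances idling at 30% would recommend staying at one. An administrator could act on that report manually, or the same rule could drive autoscaling through the virtualization platform’s API.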

You may already have container-based applications in your data center. Whether they are wrapped in sheet metal or virtual appliances, you probably don’t need to care that the software is in containers. I do wonder whether we will see a future in which enterprises have container management platforms as a standard part of their data center.

Alastair Cooke
Alastair Cooke is an independent analyst and consultant working with virtualization and datacenter technologies. Alastair spent eight years delivering training for HP and VMware as well as providing implementation services for their technologies. Alastair tells the stories that help partners and customers understand complex technologies. Alastair is known in the VMware community for contributions to the vBrownBag podcast and for the AutoLab, which automates the deployment of a nested vSphere training lab.