Container Security Podcast

A recent CVE (CVE-2016-9962), a vulnerability in the runC container runtime, directly affects container security. A patch was quickly forthcoming, but the episode raised some interesting concerns. Specifically, how do you patch a container infrastructure? What needs to be patched? The “what” is easy; the “how” is more difficult. As we move to cloud-native applications, where we tear down apps rapidly and restart them from whole cloth, patching is a crucial issue. There is risk here; the question is how to mitigate that risk, not just for today’s issues but for future ones as well. This was the subject of this week’s Virtualization and Cloud Security Podcast.

There are two approaches to patching containers, container hosts, and services.

  1. Patch the container or container host directly, employing the methods used for other operating systems and applications. However, this often leads to configuration drift, and possibly to redeploying a vulnerability that has already been patched.
  2. Patch the repositories, or pull from patched repositories, that make up the container host and container bits. This often requires more work, but it ensures that new containers include the patched bits, avoiding the need to patch old vulnerabilities again.
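
The second approach amounts to rebuilding from an already-patched base image rather than patching running containers in place. A minimal sketch of such a Dockerfile follows; the registry, image names, and tags are illustrative, not real repositories:

```dockerfile
# Hypothetical example: rebuild on top of a patched base image.
# "registry.example.com/base/alpine:3.5-patched" is an illustrative name.
FROM registry.example.com/base/alpine:3.5-patched

# Application bits are layered onto the already-patched base, so every
# rebuild picks up the fix automatically; no in-place patching needed.
COPY app/ /opt/app/
CMD ["/opt/app/run.sh"]
```

As long as the base image in the repository is kept patched, every rebuild and redeployment carries the fix forward.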

The first option is a continuation of what we do today for most operating systems and applications (or pets). The second option is specific to cloud-native applications and container-based applications (or cattle). The pets vs. cattle debate has an impact on security, patching, and risk as well. How much of an impact depends on your organization.

Let us look at these two ways of patching against a typical container-based system used within many clouds today. Yes, containers within AWS, Azure, and other clouds run in virtualized container hosts; yes, VMs are involved. From a security perspective, that is actually a good thing. Please listen to the podcast for more on that subject.

Normally, to deploy containers, we either recreate the container host and then put containers within the host, or we just deploy containers to existing container hosts. These methods each require a build server. The build server pulls data from both code and artifact repositories. The build happens, creating a new artifact or container. Then, using Infrastructure as Code automation mechanisms, the container is deployed in a well-known way every time. Automation rules this particular approach.
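
As a sketch of the deployment step, an Infrastructure as Code definition might pin the freshly built artifact by digest, so that exactly what the build server produced is deployed every time. The Kubernetes manifest below is purely illustrative; the names and the digest are placeholders:

```yaml
# Hypothetical Kubernetes Deployment: the image is pinned by digest so the
# exact artifact the build server produced is what gets deployed every time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        # Placeholder digest; a real pipeline would substitute the digest
        # of the image it just built and pushed to the artifact repository.
        image: registry.example.com/example-app@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Pinning by digest keeps deployments repeatable, which is what allows the rebuild-and-redeploy style of patching to work reliably.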

Figure 1: Common Container Deployments

In Figure 2, we see the approaches to patching. Option 1 is to patch the container host and the containers just as we would normally patch any operating system or application. This requires that Patching Option 2 be employed as well to ensure that the patch is brought forward with the next deployment of the container host or the container itself. Patching Option 2 requires that the artifact and code repositories are also patched. This can occur either through your control, or by ensuring that the artifact repository (such as Docker Hub) from which you are pulling pieces includes the patched pieces.

Figure 2: Container Security Patching Options

Figure 3 adds a new server to the mix: a policy server. The policy server takes into account cost, vulnerability, threats, and other data (configuration, test results, etc.) to determine exactly how the build server will create the new container or container hosts.
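
A minimal sketch of the policy decision, assuming the policy server reduces its inputs to a simple score. The field names and thresholds below are invented for illustration; a real policy server would weigh far more data:

```python
# Hypothetical policy check: decide whether the build server should rebuild
# a container image, based on vulnerability severity, exposure, and rebuild
# cost. All thresholds are illustrative, not taken from any real product.

def should_rebuild(max_cvss: float, internet_facing: bool, rebuild_cost: float) -> bool:
    """Return True if the policy says the image must be rebuilt now."""
    # Critical vulnerabilities always trigger a rebuild, whatever the cost.
    if max_cvss >= 9.0:
        return True
    # High-severity issues on internet-facing services also trigger one.
    if max_cvss >= 7.0 and internet_facing:
        return True
    # Otherwise, rebuild only when it is cheap enough to do routinely.
    return max_cvss >= 4.0 and rebuild_cost < 1.0

print(should_rebuild(9.8, False, 10.0))  # critical CVE: rebuild regardless of cost
print(should_rebuild(7.5, True, 10.0))   # high CVE, internet-facing: rebuild
print(should_rebuild(5.0, False, 5.0))   # medium CVE, costly rebuild: defer
```

The point is that the build is driven by policy rather than by a human deciding image by image, which is what makes patching scale across many containers.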

Figure 3: Container Security Policy Server

The latter is where tools like Twistlock, Aqua, and others fit into the picture. They may or may not control the actual build, but they do apply controls to containers and container hosts. For the CVE in question, there are two mitigations. The first is to patch the container host. The second is to use mandatory access controls for all items outside the container: in other words, run SELinux in enforcing mode. SELinux ships with many Linux distributions, but it is frequently left in permissive mode and ignored. Container security tools, however, often ensure that SELinux is enforcing and managed correctly.
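
On SELinux-enabled distributions, making enforcing mode persist across reboots is a matter of the standard configuration file; a typical fragment looks like this (after a reboot, `getenforce` should report `Enforcing`):

```ini
# /etc/selinux/config (fragment)
# SELINUX= can be enforcing, permissive, or disabled. Enforcing applies
# mandatory access controls; permissive merely logs would-be denials.
SELINUX=enforcing
SELINUXTYPE=targeted
```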

Use of mandatory access controls with whitelisting will provide remediation for today and into the future. Patching needs to look at not only today, but also tomorrow. How can we remediate other such attacks in the future?

Let us know your thoughts, and have a listen to the podcast.

Edward Haletky
Edward L. Haletky, aka Texiwill, is an analyst, author, architect, technologist, and out-of-the-box thinker. As an analyst, Edward looks at all things IoT, big data, cloud, security, and DevOps. As an author, he has written about virtualization and security. As an architect, Edward creates peer-reviewed reference architectures for hybrid cloud, cloud-native applications, and many other aspects of the modern business. As a technologist, Edward creates code prototypes for parts of those architectures. Edward is solving today's problems in an implementable fashion.
