VirtualizationSecurity

Supply Chain Security

The recent spate of news out of Home Depot and, further back, Target points to the need for better supply chain security. But really, how can we address the issue? There are several answers, but none of them seem feasible in today’s IT environments. Why? They all require open communication, constructive criticism, and a willingness to work toward a solution. What we find instead is that many IT organizations treat anyone outside their immediate organization as suspect, view security as the enemy, view audit as the enemy as well, and assume that developers know all.

These attitudes and assumptions have to stop. The enemy is not inside the organization: it is outside, and it is willing to take advantage of naivete, bad practices, and the aforementioned lack of communication to perpetrate its criminal activities. Even so, to assume that everyone in the organization working together would have prevented these attacks is to miss part of the picture: with supply chain attacks, the supply is usually outside an organization’s control. Nevertheless, several solutions based on items within every organization’s control are possible. These solutions include:

  • Code and Compiled Binary Checksums: Require third parties to provide you with checksums from each stage of their development path for all code and binaries delivered to you. This forces them to look harder at binaries that change as they move from development to test, QA, and general availability (GA). As code is developed, what is tested at each phase should be the same as what was tested at every previous stage. If the tests differ, that is the time for further review.
  • Code Reviews: If you bring third-party code into your environment, it behooves security to team up with development and perform code reviews. This is often a time-consuming process, but it is necessary to ensure the code you are using is doing exactly what you desire and nothing more.
  • Third-Party Code Reviews: These currently tend to take a bit longer, as in the case of Common Criteria reviews, but they should be done for any critical system that touches personally identifiable information (PII). Perhaps this is a new avenue that the Payment Card Industry (PCI) Security Standards Council can suggest. Since these attacks are against payment card devices, there should be some third party, faster than Common Criteria, that can verify the integrity of PCI-related software.
  • Containerization: Any code that touches PII should be containerized to prevent the exfiltration of data to unknown locations. The only known locations would be those to which the PII would normally go, such as an encryption engine that would then send the data to a database. Everything else would be disallowed. Symantec Data Center Security (formerly Critical System Protection) could be used for this, as could SELinux and any other server-side mandatory access control mechanism from within the devices in use. Access to any unauthorized network port, system file or directory, or memory segment would be denied outright.
  • Network Microsegmentation: This is a form of containerization that pertains specifically to the network rather than to all the other subsystems. If our microsegmentation control can be application- and data-centric and respond quickly to any changes, it is possible to provide network security outside of the devices in use. For PCI and applications using PII, these rules would be fairly static until an application changed, and those changes should be well understood by those doing code reviews, etc.
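The checksum requirement above lends itself to automation. A minimal sketch in Python, assuming the vendor supplies the expected SHA-256 digests it reported at each stage (the manifest layout and stage names here are illustrative, not any vendor's actual format):

```python
import hashlib


def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_stages(path, expected):
    """Compare one delivered binary against the digests the vendor
    reported at each stage (e.g. dev, test, QA, GA). Any False value
    means the binary changed somewhere along the path and warrants
    further review."""
    actual = sha256_of(path)
    return {stage: digest == actual for stage, digest in expected.items()}
```

A delivered binary whose GA digest matches but whose QA digest does not is exactly the "binary that changed between stages" the bullet above says to look harder at.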
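The containerization and microsegmentation bullets both reduce to the same idea: a default-deny allow-list of known-good destinations. As an illustrative sketch of that policy logic only (not a real enforcement mechanism; the application names, hosts, and ports are hypothetical):

```python
# Default-deny policy: a PII-handling application may talk only to
# destinations explicitly on its allow-list; everything else is refused.
ALLOWED_FLOWS = {
    "pos-terminal": {("encryption-engine", 8443)},
    "encryption-engine": {("card-database", 5432)},
}


def flow_permitted(source_app, dest_host, dest_port):
    """Return True only if this flow appears on the allow-list.
    Unknown applications and unlisted destinations are denied outright."""
    return (dest_host, dest_port) in ALLOWED_FLOWS.get(source_app, set())
```

In practice this logic lives inside a mandatory access control tool or a microsegmentation product rather than application code, but the shape of the rules is the same: enumerate where PII is allowed to go, and deny everything else.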

Supply Chain Security: Trust but Verify, While Providing Compensating Controls

When you receive code developed outside your controls and outside your code review process, it is best not only to “trust but verify” (hence the need for third-party or local code reviews), but also to provide some form of compensating control. The Verizon Data Breach Investigations Report and other reports point out that unknown unknowns are still a cause of breaches. Unknown unknowns could be due to unknown systems, unknown code, or unknown networking.

Code in any form received via your supply chain falls under the category of “unknown unknown.” A device or binary is a black box, and the security of that black box is often seen as existing outside an organization’s purview. In fact, it is not: it is absolutely within the organization’s purview, just as placing a system within the cloud remains the organization’s responsibility and not the cloud provider’s. If you cannot find out anything about the code, then negotiate a third-party code review and, regardless of the results, apply a compensating control in the form of mandatory access control–based containerization and microsegmentation.

Put the KNOW in in-KNOW-vation

Security is often seen as putting the “no” in in-NO-vation. I instead want security to put the KNOW in in-KNOW-vation. Security professionals should trust but verify and apply compensating controls regardless of the current state of verification. We need to layer defenses, but that starts with gaining a complete understanding of the environment, based on working with developers, testers, and supply chain companies to verify what is delivered. Security professionals should no longer say “no” but get involved as the trusted advisor.

When we are dealing with the supply chain, we need to verify the supply chain first but also to build compensating controls to prevent exfiltration of critical data, specifically data related to PCI and PII. That cannot happen effectively unless security and audit teams know about the environments in extreme detail. Further, there must be a way to break down both the silos that exist and the artificial barriers formed by those who think they “know all” or that this is “not my problem.” Both syndromes are equally damaging to the organization.

We start with communication; then, we design the appropriate compensating controls. How do you protect PII? Do you have compensating controls in place? Is your security team seen as effective at all levels?

Tyler Britten (@vmtyler) remarked on Twitter:

having a security team is like having a DevOps team…

I would instead say that a security team needs to be involved in all stages of development and operations, following agile and DevOps principles. Security should work with those teams as an advisor, developer, reviewer, and trainer to raise the level of security awareness. For security to be effective, it needs to be involved at every layer of the organization. Security’s first step is to remove the word “no” from its dictionary while maintaining the appropriate compensating controls, which takes KNOW-ledge.

Edward Haletky
Edward L. Haletky aka Texiwill is an analyst, author, architect, technologist, and out of the box thinker. As an analyst, Edward looks at all things IoT, Big Data, Cloud, Security, and DevOps. As an architect, Edward creates peer-reviewed reference architectures for hybrid cloud, cloud native applications, and many other aspects of the modern business. As an author he has written about virtualization and security. As a technologist, Edward creates code prototypes for parts of those architectures. Edward is solving today's problems in an implementable fashion.