A Tale of Two Clouds

Recently I have had the pleasure of discussing security with a number of cloud providers. Specifically, we talked about what security they implement and how they inform their tenants of security-related issues. In other words, do they provide transparency? I have come to an early conclusion that there are two types of clouds out there: those that provide additional security measures and work with their tenants to improve security, and those that do not. On the Virtualization Security podcast we have discussed this many times, with the conclusion being drawn that many clouds do a better job at security than the average organization does, but that there is no way to know what is implemented, as there is no transparency.

Without some level of transparency, the tenant needs to implement its own security, and doing so implies adding more and more security systems into the cloud tenancy. This will chew through available resources, impact overall performance, and generally upset the users (depending on how security was implemented, of course). Given that security appliances tend to be CPU and network hungry, the tenant could become a noisy neighbor. All this because the cloud provider did not provide transparency. If the tenant does not know what is available and what is in scope for audits, then it has to duplicate what is already within the cloud.

Here are some basic items that I feel should be provided by every cloud provider to improve overall transparency:

Encryption: Where and how does the cloud encrypt? What is the scope of this encryption within the stack: self-encrypting drives, encrypting fabric/network switches, encrypting virtual storage appliances, or ways of encrypting within the virtual machine? What is actually available? I should know this before I use a cloud service. I would look for implementations that use Vormetric, SafeNet, AFORE, or HyTrust’s HighCloud Security.

Audits: The cloud provider claims to pass PCI, HIPAA, and other compliance audits. I should, minimally, know the scope of these audits. In fact, they should appear on any dashboard, letting the tenant know whether its virtual machines, the systems they run on, and the networks they use were in scope for such audits. If they were not in scope, then the tenant needs to do its own audits. This is a major failure in transparency and actually creates a huge amount of work for the tenants of many clouds. One question to ask is whether the cloud provider participates in CloudAudit or at least provides the output of the Cloud Security Alliance's Cloud Controls Matrix. But the most important item is to provide the scope of any audit.
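To make the point concrete, here is a minimal sketch of how a provider could publish audit-scope data so a tenant's dashboard could answer "were my resources in scope?" All names here (the audit identifiers, resource types, and the `in_scope` helper) are my own illustrative assumptions, not any real provider's API.

```python
# Hypothetical audit-scope registry a provider might expose to tenants.
# Audit names and host/network identifiers below are made up for illustration.
AUDIT_SCOPES = {
    "PCI-DSS-2023": {
        "hosts": {"host-01", "host-02"},
        "networks": {"net-dmz"},
    },
    "HIPAA-2023": {
        "hosts": {"host-02"},
        "networks": set(),
    },
}

def in_scope(audit: str, resource_type: str, resource_id: str) -> bool:
    """Return True if the given resource was in scope for the named audit."""
    scope = AUDIT_SCOPES.get(audit, {})
    return resource_id in scope.get(resource_type, set())

# A tenant whose VMs run on host-01 can now check its PCI coverage:
print(in_scope("PCI-DSS-2023", "hosts", "host-01"))  # True
print(in_scope("HIPAA-2023", "hosts", "host-01"))    # False: tenant must audit itself
```

With even this much data exposed, the tenant knows immediately which compliance work it must duplicate and which it can inherit from the provider.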

Geo-location: Many clouds span country or jurisdictional borders. Given this fact, tenants need to be able to restrict movement between jurisdictions as needed. However, the cloud provider, if using Intel TXT to provide geo-location data, must also explain how it decommissions hardware and removes such geo-location information. Why is this necessary? Because if the hardware ends up in another country as a secondary market sale, it will still contain the old geo-location codes, which can be used to fool systems. This raises the issues of how to inform the tenant or host of device decommission and how to let them know that the proper procedures were followed.
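The jurisdiction restriction described above can be sketched as a tenant-side placement check. Everything here is an assumption for illustration (the host IDs, the idea of a provider-fed decommission list, and the two-letter geo tags); it is not an Intel TXT API, but it shows why decommission reporting matters: a host with a stale geo-tag must be refused even if its tag looks valid.

```python
# Example tenant policy: workloads may only run in these jurisdictions.
ALLOWED_JURISDICTIONS = {"DE", "NL"}

# Hosts the provider has reported as decommissioned; their old geo-location
# tags can no longer be trusted (e.g., hardware resold into another country).
DECOMMISSIONED_HOSTS = {"host-99"}

def may_place(host_id: str, geo_tag: str) -> bool:
    """Allow placement only on trusted, in-jurisdiction hosts."""
    if host_id in DECOMMISSIONED_HOSTS:
        return False  # stale geo-tag could fool us: refuse outright
    return geo_tag in ALLOWED_JURISDICTIONS

print(may_place("host-01", "DE"))  # True
print(may_place("host-99", "DE"))  # False: decommissioned, tag untrusted
print(may_place("host-02", "US"))  # False: outside allowed jurisdictions
```

Without the provider feeding that decommission list, the second check is impossible, which is exactly the transparency gap being described.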

Administrative Access: Each cloud has its own administrators. Each tenant also has its own administrators. The key is that if an administrator touches a virtual machine, a datastore, a host running a tenant’s virtual machine, or a network that includes a tenant, the tenant should be informed. Furthermore, no such access should be allowed unless there is a trouble ticket (from the tenant, another tenant, or the cloud provider) involved. These events should be logged and made available to each involved tenant in some automatic fashion. This would provide a major improvement in transparency and build trust that everything is being handled appropriately. Unfortunately, this is also one of the hardest things to do, as hypervisor logging tools do not provide a way to associate a log entry with a ticket ID, or even with the appropriate user, due to the delegate user problem. Tools like HyTrust work quite well, but not everyone runs vSphere-based clouds; others are building their own platforms. The distribution of this information to appropriate parties is a necessary operational change for all clouds.
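The "no ticket, no access" rule and the per-tenant event feed argued for above could be sketched as follows. The record fields and the in-memory outbox are my own assumptions, standing in for whatever a real cloud platform would use; the point is that the ticket ID travels with every log entry and the affected tenant automatically receives a copy.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AdminAccessEvent:
    admin_user: str   # the actual administrator, not a shared delegate account
    ticket_id: str    # the trouble ticket justifying the access
    resource: str     # VM, datastore, host, or network identifier
    tenant_id: str    # the tenant that must be informed
    timestamp: str

def record_access(admin_user, ticket_id, resource, tenant_id, outbox):
    """Log an administrative touch; refuse it if no trouble ticket exists."""
    if not ticket_id:
        raise PermissionError("administrative access requires a trouble ticket")
    event = AdminAccessEvent(
        admin_user, ticket_id, resource, tenant_id,
        datetime.now(timezone.utc).isoformat(),
    )
    # Queue the event for automatic delivery to the affected tenant.
    outbox.setdefault(tenant_id, []).append(event)
    return event

outbox = {}
record_access("alice", "TKT-1042", "vm-7", "tenant-a", outbox)
print(len(outbox["tenant-a"]))  # 1
```

The hard part in practice, as noted above, is the first two fields: today's hypervisor logging tools cannot reliably supply the real user or a ticket ID.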

Endpoint Security: All clouds make use of various tools to provide endpoint security, but they may or may not make them available to their tenants in some fashion, which implies that tenants must implement these tools for themselves. The cloud provider can negotiate with the endpoint security vendors on behalf of its tenants, rent out the services as needed and, if on the proper hypervisors, tie into the underlying layers to improve overall endpoint security techniques. Unfortunately, this global approach requires improvements in endpoint security vendors' secure multi-tenancy implementations. For Hyper-V based clouds, 5nine Software is worth looking into. For vSphere environments, Trend Micro and some others are leading the way. Symantec provides a more generalized, network-based approach that will work within any cloud.

Output to Remote Log Facility: The ability to take all log information and run it through some form of big data platform to provide behavioral analysis and to detect unknown unknowns is becoming quite important as the number of applications, systems, hypervisors, administrators, and potential threats rises. Any log that contains tenant data, or results derived from tenant data, should be made available to the tenant so that it, as well as the cloud provider, can analyze the data and can tie these underlying logs to its own application logs to get a better picture of what is going on. The tenant may then use its own tools (Splunk, Loggly, Prelert, or another remote log platform) to find threats faster, correlate data better, and understand its tenancy more thoroughly. Unfortunately, without a way to tag underlying log information for a tenant (or even a user), it is quite hard to send such data to the tenants. Until the delegate user problem is solved, most of this data will likely remain out of tenants' reach.
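The tagging problem described above can be illustrated with a toy log router. The `tenant=<id>` prefix and the list-based sinks are assumptions for the sketch (a real deployment would forward to a syslog, Splunk, or Loggly endpoint); the untagged line shows exactly what the delegate user problem costs us: without a tenant tag, a log line cannot safely be shared with anyone.

```python
def route_logs(lines, sinks):
    """Send each 'tenant=<id> ...' log line to that tenant's sink.

    Lines without a tenant tag cannot be attributed, so they stay with
    the provider; this is the delegate user problem in miniature.
    """
    unrouted = []
    for line in lines:
        if line.startswith("tenant="):
            # e.g. "tenant=a vm-7 powered on" -> tenant_id "a"
            tenant_id = line.split()[0].split("=", 1)[1]
            sinks.setdefault(tenant_id, []).append(line)
        else:
            unrouted.append(line)  # owner unknown: cannot deliver
    return unrouted

sinks = {}
leftover = route_logs(
    ["tenant=a vm-7 powered on",
     "tenant=b datastore ds1 mounted",
     "host-01 fan speed high"],  # untagged hypervisor line
    sinks,
)
print(len(sinks["a"]), len(sinks["b"]), len(leftover))  # 1 1 1
```

Solving the tagging problem at the hypervisor layer is what would let that `leftover` list shrink to nothing.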

Some clouds, such as Virtustream, provide many of these facilities; for others, such as Amazon, we just do not know what is available. Some clouds use their economies of scale to negotiate better prices from security vendors; many do not, or, more to the point, we just do not know. If we do not know, then as tenants we may have to duplicate many efforts. At the very least, security information should be readily available on any cloud dashboard.

How does your cloud stack up? What would you change about it?

Edward Haletky
Edward L. Haletky, aka Texiwill, is an analyst, author, architect, technologist, and out-of-the-box thinker. As an analyst, Edward looks at all things IoT, big data, cloud, security, and DevOps. As an author, he has written about virtualization and security. As an architect, Edward creates peer-reviewed reference architectures for hybrid cloud, cloud-native applications, and many other aspects of the modern business. As a technologist, Edward creates code prototypes for parts of those architectures. Edward is solving today's problems in an implementable fashion.