On the third Virtualization Security Podcast of 2011 we were joined by Charlton Barreto of Intel to further discuss the possibility of using TPM/TXT to enhance security within virtual and cloud environments. We are not there yet, but we discussed in depth the issues with extending hardware-based integrity and confidentiality up into the virtualized layers of the cloud. TPM and TXT currently provide the following per-host security:
- The Trusted Platform Module (TPM) provides a way to store the checksum of a boot volume, ensuring that a host boots from the proper volume. TPM can also store biometric data associated with fingerprint scanners. In essence, TPM stores data for later comparison.
- Intel Trusted Execution Technology (TXT) increases the amount of data stored within the TPM to include a checksum of the kernel to be booted from the trusted volume.
- TXT also allows the storage of checksums for applications to be run on top of the trusted kernel.
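The measure-and-extend model behind these checks can be sketched in a few lines of Python. This is illustrative only: the component names are invented, and SHA-1 here merely mimics the style of a TPM PCR extend, not the actual register layout.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new PCR value = SHA-1(old PCR || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

# Simplified measured-boot chain: each stage measures the next before running it.
pcr = b"\x00" * 20                      # PCR starts zeroed at power-on
for component in (b"boot-volume", b"kernel", b"vmware-vmx"):
    pcr = extend(pcr, hashlib.sha1(component).digest())

# Attestation compares the final PCR value against a known-good "golden" value;
# any modified component anywhere in the chain produces a different final value.
golden = pcr
print(pcr == golden)  # True only if every measured component was unmodified
```

Because each extend folds in the previous value, the final register attests the entire boot sequence in order, not just the last component measured.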
This all means that TXT can ensure that the vmware-vmx application has not been modified, and will be executed by a trusted kernel which has itself not been modified and which was launched from a trusted volume. How does this help us with security? In essence, we gain attestation that what we launched was what we intended to launch, all the way through the boot process of a given host. It is a defense against blue-pill attacks as well as attacks against critical applications within the environment such as vmware-vmx (which is the object in which a guest operating system runs).
Unfortunately, this attestation stops short of the guest operating system. We cannot currently tell if the guest operating system is really the one we wanted to launch within the VM object, only that the VM object code has not been modified. These are very important aspects of tying hardware trust to virtual machines, but they fall short of where we actually want to be: tying hardware trust all the way through to the application running within the guest operating system, which runs within a trusted VM object. We have to bring this trust model up at least two more layers.
On top of this, one of the biggest issues with the cloud is jurisdiction: where does the data within a VM object live, and in which geo-locations may the VM object reside? The fear is that a VM object that legally can only live within the EU will end up within the US, where different privacy laws apply. To combat this, TXT has one other feature, demoed at RSA Conference 2011: a geo-location tag that can be associated with a given TPM/TXT device that resides within each core. Currently, this geo-location tag has to be set by hand, but it could be set via a trusted GPS or network device as well. Once more, hardware trust is applied to solve an all-important problem within the cloud.
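A minimal sketch of how such a geo-location tag might feed a placement policy: the policy table, VM names, and region strings below are all hypothetical, standing in for whatever a real scheduler would read from the TPM/TXT tag.

```python
# Hypothetical jurisdiction policy: which geo-location tags each VM may run on.
ALLOWED_REGIONS = {"vm-finance": {"EU"}}     # illustrative policy entry

def may_place(vm: str, host_geo_tag: str) -> bool:
    """Refuse placement when the host's TXT geo-location tag is outside
    the VM's allowed jurisdictions (deny by default for unknown VMs)."""
    return host_geo_tag in ALLOWED_REGIONS.get(vm, set())

print(may_place("vm-finance", "EU"))  # True
print(may_place("vm-finance", "US"))  # False
```

The deny-by-default lookup matters here: a VM with no policy entry should never silently land in an arbitrary jurisdiction.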
There are several issues associated with the current TPM/TXT model:
- Attestation is done only once, at boot, and not continuously. So if someone hacked a running system, the hardware attestation does not reinspect the running VM object (vmware-vmx), the running kernel, or the currently in-use boot volume.
- Attestation does not currently understand the concept of Live Migration or vMotion of VMs.
- Attestation does not currently include a VM's configuration file, but only the actual vmware-vmx code.
- Attestation does not work outside a single host.
In essence, TPM/TXT is currently limited to just the host on which the VM object runs, and does not currently understand the myriad of files that actually make up a running VM (just the single binary named vmware-vmx). Nor is there the ability to continually monitor the in-memory footprint of the kernel or vmware-vmx binaries to ensure they have not changed. Virtualization hosts are rebooted infrequently: yes, I can have a trusted launch of a hypervisor, but how can I maintain that the host has not been compromised after the launch? TPM/TXT also does not yet handle clusters of hosts, where VM objects can move around the cluster at will.
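The gap between boot-time and runtime attestation can be shown with a toy re-measurement check. The "runtime monitor" taking the second measurement is hypothetical; today's TPM/TXT model performs no equivalent step after launch.

```python
import hashlib

def measure(data: bytes) -> bytes:
    """Stand-in for a hardware measurement of a binary image."""
    return hashlib.sha256(data).digest()

# Measurement taken once, at (simulated) boot.
boot_image = b"vmware-vmx v1"
boot_measurement = measure(boot_image)

# Later, the in-memory image changes -- but nothing re-measures it unless a
# hypothetical runtime monitor repeats the comparison below.
running_image = b"vmware-vmx v1 + injected code"
print(measure(running_image) == boot_measurement)  # False: tampering detectable
```

The point is that the comparison itself is trivial; what is missing in the current model is anything trusted that runs it again after boot.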
Charlton theorized that a separate peer (to a CPU core) device would be required to maintain the TPM/TXT registers across the host by maintaining critical data about each TPM/TXT device that lives within each core of the cluster. This peer device would in essence perform key management out of band from the traditional TPM/TXT trust of the hardware. One such device could be a hardware security module (HSM) used to hold critical certificate data under lock and key. Such a peer device would be crucial to verifying virtual machine integrity as it moves around the virtual environment (from host to host).
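One way such a peer device might verify a VM across a migration is by holding per-host attestation keys out of band. The sketch below is purely illustrative: the class, method names, and the use of HMAC are assumptions, standing in for whatever the real device and protocol would use.

```python
import hashlib
import hmac

class PeerDevice:
    """Hypothetical cluster-wide peer device (e.g. an HSM) holding per-host
    attestation keys out of band from the hosts themselves."""

    def __init__(self):
        self._host_keys = {}              # host id -> secret attestation key

    def enroll(self, host: str, key: bytes) -> None:
        self._host_keys[host] = key

    def sign_vm(self, host: str, vm_image: bytes) -> bytes:
        """Produce an integrity tag over the VM image under the host's key."""
        return hmac.new(self._host_keys[host], vm_image, hashlib.sha256).digest()

    def verify_after_migration(self, src: str, dst: str,
                               vm_image: bytes, tag: bytes) -> bool:
        """Check the migrated VM against the source host's tag before the
        destination host (which must also be enrolled) accepts it."""
        ok = hmac.compare_digest(self.sign_vm(src, vm_image), tag)
        return ok and dst in self._host_keys

hsm = PeerDevice()
hsm.enroll("host-a", b"key-a")
hsm.enroll("host-b", b"key-b")
tag = hsm.sign_vm("host-a", b"vm-image")
print(hsm.verify_after_migration("host-a", "host-b", b"vm-image", tag))  # True
```

Keeping the keys in the peer device rather than on either host is what lets the destination trust a measurement it did not take itself.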
In addition, such a peer device could be expanded to maintain key information for the encryption of memory by Intel processors, making use of the AES-NI instructions that speed up AES-based encryption tools. In essence, keys would not be stored within VM memory, but within hypervisor memory, ensuring confidentiality and integrity of data within the VM object and therefore the guest operating system and application.
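The key-placement idea can be sketched as follows. Python's standard library has no AES, so a SHA-256-derived keystream stands in for hardware AES (AES-NI), and every name here is illustrative; the one property the sketch preserves is that the key lives only on the hypervisor side.

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream from the key (toy stand-in for AES-NI encryption)."""
    out = b""
    for block in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

hypervisor_key = b"held-in-hypervisor-memory"   # never exposed to the guest
guest_page = b"sensitive guest memory page"

ciphertext = xor(guest_page, keystream(hypervisor_key, len(guest_page)))
plaintext = xor(ciphertext, keystream(hypervisor_key, len(ciphertext)))
print(plaintext == guest_page)  # True: round-trips without the guest seeing the key
```

The guest only ever handles ciphertext and plaintext; the hypervisor mediates every decryption, which is the confidentiality property the paragraph above is after.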
Key management is not the issue; the issue is how to expand this functionality to the cloud. Within a single virtual environment such functionality could work as long as the peer device could hold data for many thousands of nodes. But once you start talking about replication sites and moving workloads into the cloud, a per-host peer device may need to hold millions of keys without requiring a reboot of a node. With increased movement of data into the cloud, such peer devices would need more and more complexity to address all the issues with moving VM objects to and from a cloud service provider. Geo-location would need to be checked before such movement as well.
Currently TPM/TXT is limited to just a single host, but that is the starting point. We have a long way to go to bring trusted launch up into the VM as well as ensuring attestation during runtime of the VM at all levels. In addition, we need to expand TPM/TXT to handle massive amounts of data as VMs are moved from host to host, host to cloud, or even cloud to cloud while maintaining the hardware root of trust.