There has long been a debate about testing products within a virtual environment: not just how to test, but why, and what to test. Some EULAs even place limits on reporting the results of such testing. This was the subject of the 7/25 Virtualization Security Podcast (#112 – Virtualization Security Roundtable), held live from NSS Labs in Austin, TX, where we delved into the issues of testing within a virtual environment. While the discussion focused on security products, the concepts apply fairly straightforwardly to other products within the virtual environment.

First, we need a testing procedure that is well thought out, documented, and supported by others. Take virtual desktop testing or general hypervisor testing: there are certain tools for testing these environments that are accepted by all as de facto standards. Using these tools aids in understanding the test, which in turn helps with reporting and with acceptance of those reports. Even so, there are several items to consider:
- Environment – We need to be cognizant of the environment in which the test was run. Were the testing tools run as workloads on the systems under test, or externally? If on the system, what is their overhead? In addition, since many of these tests run over a network (desktop, firewall, IPS, IDS, etc.), the type of network involved comes into play, so we need to understand network latency, bandwidth, burst rates, and so on. A well-defined environment for all tests is a must.
- Procedures – We also need a well-defined, repeatable set of procedures. It is best to have a fully automated procedure so that the human element is removed from the equation. The automated procedures should include any baseline tests as well as the load-generating tests.
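To make the automation point concrete, a repeatable procedure can be as simple as a harness that runs each step in a fixed order and writes a timestamped report that any lab can compare against. The sketch below is a hypothetical example; the step commands (shown here as placeholder `echo` calls) stand in for whatever baseline and load-generation tools your environment actually uses.

```python
import json
import subprocess
import time
from datetime import datetime, timezone

def run_step(name, command):
    """Run one test step and capture its output, exit code, and duration."""
    start = time.monotonic()
    result = subprocess.run(command, capture_output=True, text=True)
    return {
        "step": name,
        "command": command,
        "returncode": result.returncode,
        "stdout": result.stdout,
        "duration_s": round(time.monotonic() - start, 3),
    }

def run_procedure(steps, report_path="test-report.json"):
    """Execute every step in order and write a timestamped JSON report,
    so the exact same sequence can be repeated by anyone."""
    report = {
        "started": datetime.now(timezone.utc).isoformat(),
        "results": [run_step(name, cmd) for name, cmd in steps],
    }
    with open(report_path, "w") as f:
        json.dump(report, f, indent=2)
    return report

# Hypothetical steps: a baseline capture followed by a load test.
steps = [
    ("baseline", ["echo", "capture idle metrics"]),
    ("load", ["echo", "generate workload"]),
]
report = run_procedure(steps)
```

Because the steps are data rather than manual actions, the human element is removed and the report itself documents what was run.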
With these two items covered, either by documentation or by fully automated systems (perhaps even the creation of a software-defined data center, SDDC, for use by testing), we can proceed with our tests. As testing proceeds, the environment and procedures will change until a final environment and procedure emerge. The key to both is to make the environment as real-world as possible and to have a process repeatable by anyone, not just your own test lab. This also implies the use of well-known tools to implement the procedure. Those tools include:
- LoginVSI – LoginVSI is the de facto standard for benchmarking virtual desktop environments, with its ability to automate access to VMware Horizon View, XenDesktop, and other systems over well-known protocols such as RDP (and others). In addition, while it ships with many stock workloads, you can easily create more workloads that run within each desktop. If, for example, you wanted to determine the best settings for graphically intensive applications, it would be good to add those applications into the mix. LoginVSI has been used to test endpoint security tools, graphics, virtual desktop density, flash and other types of storage, and many other aspects of a virtual desktop environment. ProjectVRC has many reports on these types of tests.
- SPECvirt – SPECvirt is the standard for benchmarking virtual environments and has recently been updated to the 2013 version, which includes several application tools. SPECvirt runs different tiles (sets of workloads) across a single hypervisor or a cluster of hypervisors. Since the mix of workloads on any given hypervisor is unique, having a set of tools based on a standard gives us some measure of success. A predecessor to SPECvirt was VMware VMmark (which also still exists).
However, these tools are not the end-all; they are more the beginning of testing within your virtual environment. To be believable, a test must be independent, well thought out, and repeatable. For security testing within the virtual environment we need to look further into what is required. Many numbers are just feeds and speeds, but before we can find those numbers we need to establish what is normal, so that we can then determine the additional overhead those tools impose. Part of what is normal will be the latency of the network, storage, and so on. This implies that test labs should mimic production exactly: hardware makes a difference, and where your data moves around the environment also makes a difference. The key is repeatability: if we can repeat our process and recreate our environment (including the hardware and how the data moves around it), we can provide a beneficial and believable result.
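To make "overhead above normal" concrete: once a baseline is established, the extra cost of a security tool is simply the delta between the with-tool measurements and the baseline measurements taken in the same environment. A minimal sketch follows; the latency figures are invented purely for illustration.

```python
from statistics import mean

def overhead_pct(baseline_samples, with_tool_samples):
    """Percentage overhead of a tool relative to the baseline,
    using the mean of repeated measurements from identical runs."""
    base = mean(baseline_samples)
    tool = mean(with_tool_samples)
    return 100.0 * (tool - base) / base

# Hypothetical network latencies in milliseconds from repeated runs,
# first without and then with an in-line security tool enabled.
baseline_ms = [1.9, 2.1, 2.0, 2.0]
with_tool_ms = [2.4, 2.6, 2.5, 2.5]

print(f"overhead: {overhead_pct(baseline_ms, with_tool_ms):.1f}%")
# prints: overhead: 25.0%
```

Repeated samples matter here: a single measurement cannot distinguish tool overhead from normal run-to-run variance, which is why the baseline must come from the same repeatable environment and procedure.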
How do you test changes to your virtual environments?