VMware, Microsoft, Red Hat, and Citrix have all successfully virtualized CPU and memory in their respective hypervisors. VMware is building a Software Defined Data Center with the intention of virtualizing networking and storage as well. Perhaps it is time to take a step back and think about what exactly it means to virtualize these various resources, and what benefits come from the virtualization of each one. In particular, we will focus on how the benefits of virtualizing networking and storage differ from the benefits of virtualizing CPU and memory.
If you step back and think about it, VMware was not the first vendor to virtualize the CPU. If you define virtualizing the CPU as abstracting the physical CPU from the workloads that used the CPU, this was done by modern operating systems long ago. Workloads consisted of threads and processes, and it was up to the operating system to map those threads and processes into the CPU for execution.
The problem that VMware addressed with CPU virtualization was that it turned out to be quite hard to run multiple applications of different types on one instance of an operating system. The main reason was that each application carried with it a set of dependencies upon the operating system, often involving particular combinations of particular versions of operating system and middleware components. This led to the well-known problem of “DLL hell” in Windows and led most customers to run only one Windows server application on each instance of a Windows server, which then ran on its own physical server. This led to many physical servers that were underutilized from a CPU standpoint, which is where CPU virtualization from VMware became so valuable. By allowing multiple instances of operating systems (virtual machines or guests) to run on one physical server, VMware was able to deliver dramatic benefits in terms of server consolidation by driving up CPU utilization.
Note that the ability to deliver the benefit of server consolidation was contingent upon the ability of the hypervisor to share the CPU between workloads that were unaware of the fact that they were now sharing the CPU. This is a crucial aspect of CPU virtualization that is not necessarily present in other forms of virtualization, as we will see below.
Whereas with CPU virtualization VMware was able to share the CPU via time-slicing, sharing of memory via virtualization is possible only to a much more limited extent. If an application needs 2GB of physical memory, even though it is allocated 2GB of virtual memory, the corresponding physical memory had better exist back on that physical server, or else very bad things are going to happen to the performance of that application (paging of memory to disk). Virtualization does enable some sharing of memory thanks to a VMware feature called Transparent Page Sharing. This feature allows the hypervisor to identify common pages that are read-only in each operating system image (called code pages) and keep just one copy of those pages in memory. For example, most of the memory used by the Windows operating system itself consists of code pages, and therefore the vSphere hypervisor can effectively keep one copy of the Windows OS in memory to serve N guest images running that OS. This is why in a VDI implementation you can often allocate much less memory to the VDI image of a desktop than the physical desktop had – because you no longer need memory for every copy of Windows, just for one copy of that version of Windows.
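The core idea behind page sharing can be illustrated with a toy sketch: hash each guest page, and keep only one physical copy per unique content hash. This is an assumption-laden simplification, not VMware's actual implementation (which scans pages lazily and must copy-on-write when a shared page is modified); all names here are invented for illustration.

```python
import hashlib

PAGE_SIZE = 4096  # bytes; the typical x86 page size

def share_pages(guest_pages):
    """Deduplicate identical pages across guests, keeping one physical
    copy per unique content hash -- the essence of page sharing."""
    physical = {}   # content hash -> the single shared physical page
    mappings = []   # (guest_id, page_index) -> hash of the page it maps to
    for guest_id, pages in guest_pages.items():
        for i, page in enumerate(pages):
            h = hashlib.sha256(page).hexdigest()
            physical.setdefault(h, page)   # store the page only once
            mappings.append(((guest_id, i), h))
    return physical, mappings

# Three guests running the same OS: their code pages are identical,
# but each also has one private data page of its own.
os_code_page = b"\x90" * PAGE_SIZE  # stand-in for a shared read-only code page
guests = {g: [os_code_page, bytes([g]) * PAGE_SIZE] for g in (1, 2, 3)}

physical, mappings = share_pages(guests)
mapped = sum(len(p) for p in guests.values()) * PAGE_SIZE  # 6 pages mapped
stored = len(physical) * PAGE_SIZE                         # 4 pages stored
print(f"mapped {mapped} bytes, stored {stored} bytes")
# -> mapped 24576 bytes, stored 16384 bytes
```

Six guest pages are backed by only four physical pages; the saving grows with the number of guests running the same OS, which is exactly the VDI effect described above.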
One key point here is that while time-slicing of the CPU is something the hypervisor was able to accomplish, time-slicing of memory is not possible. Therefore, unlike with CPU resources, where time-slicing can create the perception that N things are all using something at the same time, there is no way for two applications to use the same chunk of memory at the same time. This quickly led to a situation where, as CPU speeds continued to improve, the density of VMs on a server became memory bound – bound by the amount of physical memory on the server.
With the history of virtualization behind us, let us focus on the future. One of the key questions facing us is: exactly how will network virtualization work, and therefore exactly what will its benefits be? The likely answer is this:
- Network virtualization will move the configuration of virtual networks out of the hardware switches into the hypervisor platform. This will make it easy for the virtualization administrator to create and change virtual networks. It will make it easy for network configurations to follow virtual machines. It will make it easy for cloud management software solutions to create the virtual networks required to support workloads automatically instantiated through service catalogs. So this benefit is very much like the benefit of being able to easily configure virtual CPUs and virtual memory in the hypervisor. Great, but alone nothing that drives the kind of hard-dollar ROI that CPU sharing was able to deliver via server consolidation.
- But network virtualization does something that neither CPU virtualization nor memory virtualization did. It moves part of the responsibility for doing the actual work out of the networking hardware into the virtualized networking software. VMware has had a vSwitch implemented in software for quite some time, and Microsoft has one in Hyper-V as well. If two VMs running on the same host need to talk to each other, that network traffic never hits the NIC or the physical switch. If you put the web servers, application servers, and database server for an entire application on one host, only the storage traffic from the database server would leave the host, hitting either the NIC (in the case of NAS) or the HBA (in the case of Fibre Channel storage). So network virtualization has the potential to reduce the number of NIC ports required on each physical server. Can you spell “network consolidation”? To be clear, there is unlikely to be a 10-to-1 reduction in switch ports as there was with servers, but removing a third to a half of the NIC cables (and therefore the required top-of-rack switch ports) might be achievable.
- There is a more profound and longer term effect to moving both the control of the switching and the switching itself into software. That longer term and more profound effect is that it will make expensive and smart switches (like the ones from Cisco) much less necessary. If all of the smarts are in VMware’s software, who needs a smart switch? No one; you just need a stupid switch that can move the bits. So network virtualization sets the stage for the complete commoditization of the switch business. These two points perhaps explain why Cisco was not thrilled when VMware bought Nicira.
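The in-host switching behavior described in the second bullet can be sketched in a few lines. This is a deliberately naive model, not any vendor's vSwitch: frames between two local VM ports are delivered in memory, and only frames destined for a MAC address the switch does not know locally would go out the physical uplink. All class and variable names are invented for illustration.

```python
class ToyVSwitch:
    """Minimal in-host software switch: local VM-to-VM frames never
    touch the physical NIC; everything else goes out the uplink."""

    def __init__(self):
        self.ports = {}          # MAC address -> that VM's receive queue
        self.uplink_frames = []  # frames that would exit via the physical NIC

    def attach(self, mac):
        """Plug a VM's virtual NIC into the switch; returns its rx queue."""
        self.ports[mac] = []
        return self.ports[mac]

    def send(self, src, dst, payload):
        if dst in self.ports:
            self.ports[dst].append((src, payload))      # delivered in memory
        else:
            self.uplink_frames.append((src, dst, payload))  # leaves the host

vswitch = ToyVSwitch()
web_rx = vswitch.attach("aa:00:00:00:00:01")  # web server VM
app_rx = vswitch.attach("aa:00:00:00:00:02")  # app server VM on the same host

# Web-to-app traffic stays inside the host; traffic to an external
# database host is the only thing that would consume a physical port.
vswitch.send("aa:00:00:00:00:01", "aa:00:00:00:00:02", b"HTTP request")
vswitch.send("aa:00:00:00:00:02", "ff:00:00:00:00:99", b"DB query")

print(len(app_rx), len(vswitch.uplink_frames))  # -> 1 1
```

One of the two frames never existed anywhere but host memory, which is the mechanism behind the "network consolidation" claim above.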
As with CPU virtualization, virtualization of storage is not something that VMware or Microsoft invented or are going to invent. Multiple methods of abstracting the physical storage from how storage is presented to workloads have been in use for quite some time. For example, LUNs are a logical abstraction of storage presented to workloads that are then mapped to the physical storage by the storage administrator. Several innovative vendors like Tintri and Virsto have each brought unique abstractions of storage to the market that have their own benefits.
So in order to understand the benefits of VMware’s addition of virtualized storage to their SDDC, we have to know exactly what VMware’s storage virtualization offering will be. Let’s therefore make an assumption. Let’s assume that what VMware does will be driven in part by what Microsoft is doing to allow customers to leverage commodity storage (disks in servers) in Hyper-V 3.0. VMware has an existing product, the vSphere Storage Appliance, that pools the local disk storage in up to three physical servers into a virtual pool of storage that offers a single point of access, redundancy, and high availability. What if VMware expanded the vSphere Storage Appliance to be able to handle an entire 32-server cluster? The definition of storage virtualization would then be that for a 32-node cluster, you would need neither NAS nor Fibre Channel attached storage, as long as you could cram enough hard disks into your servers to handle the data volume, and as long as the virtualized storage array met your performance needs.
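To make the pooling idea concrete, here is a back-of-the-envelope model, built entirely on assumptions rather than any announced VMware design: each node contributes its local disks to one shared pool, and redundancy is provided by simple N-way mirroring, so usable capacity is raw capacity divided by the replica count. The function and cluster names are hypothetical.

```python
def pooled_capacity_tb(node_disks_tb, replicas=2):
    """Usable capacity of a virtual pool built from local server disks,
    assuming simple N-way mirroring for redundancy (a toy model)."""
    raw = sum(sum(disks) for disks in node_disks_tb.values())
    return raw / replicas

# A small cluster: every node contributes four 2TB local disks.
cluster = {
    "host1": [2, 2, 2, 2],
    "host2": [2, 2, 2, 2],
    "host3": [2, 2, 2, 2],
}

print(pooled_capacity_tb(cluster))              # -> 12.0 (from 24 TB raw)
print(pooled_capacity_tb(cluster, replicas=3))  # -> 8.0
```

The model also shows the cost of redundancy in such a design: the cheaper the disks, the easier it is to absorb losing half (or two-thirds) of the raw capacity to mirroring.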
If VMware did this, how would the benefits of storage virtualization stack up to the benefits of CPU, memory, and network virtualization? The likely answer is:
- Just as is the case with memory, virtualizing storage in this manner does not allow multiple workloads to share storage in any more fundamental way than is the case now. If a workload needs 2TB of network attached storage, it is going to need 2TB of pooled local storage. So there is no many-to-one consolidation benefit here, as there was with CPU virtualization and server consolidation.
- There will be a management benefit. It will certainly be easier to administer these pools of local storage than it is to administer the high-end arrays from EMC and NetApp. But the jury is still out on whether these pools of local storage will offer the performance that those high-end arrays are proven to deliver.
- If these pools of local storage can replace network and Fibre Channel attached storage arrays, then there is going to be a commoditization benefit similar to the one coming to the network switch business. Direct attached storage in servers is simply much cheaper than network attached storage. If VMware can make this work for a substantial number of the workloads run by its customers, then VMware will be able to deliver dramatic reductions in the cost of storage to its customers. These cost savings will come out of the hides of the enterprise storage vendors, which promises to make the “partnering” meetings between VMware and these vendors even more interesting.
The benefits of virtualizing networking and storage will be very different from the benefits of virtualizing CPU and memory. VMware’s success to date has not come at the expense of server vendors. However, VMware’s success with virtualized networking will come at the expense of Cisco, and its success with virtualized storage will come at the expense of the enterprise storage vendors. By commoditizing networking and storage, VMware will deliver substantial benefits to its customers and create strained relationships with vendors who used to be partners.