News: Virtualizing the Last Mile


There is a class of applications that is extremely difficult to virtualize: graphics-intensive applications such as ProEngineer, Photoshop, and pretty much anything that requires a Graphics Processing Unit (GPU) to perform well. These applications are usually too big or too expensive to virtualize — the last mile, so to speak. This is no longer the case. With NVIDIA's announcement of the NVIDIA VGX Cloud Platform, this and other classes of applications can now be virtualized.

GPUs are not only used by graphics-intensive applications; they are also used in high-performance clusters, another group of workloads that are usually not virtualized, mainly due to cost but also due to the degradation in performance. With a virtualizable GPU, these applications can now be virtualized by the simple acquisition of a new GPU designed for hypervisors. Hypervisor workloads will benefit from the HPC work NVIDIA has done to improve GPU parallelism, lower overall wait times as CPU cores no longer compete for GPU resources, and other improvements in GPU processing capabilities. I expect we will see more GPU involvement in virtualization in the future.

OpenStack benefits the most from the introduction of the GPU hypervisor, as NVIDIA has created drivers for Xen, and I would expect KVM drivers to soon follow from the open source community. VMware and Microsoft may be a bit behind on drivers for their hypervisors.

While at the recent OpenStack conference, I spoke to many people who were trying to use OpenStack to build high-performance clusters, and they commented that they would rather have GPUs do the heavy lifting than regular processors. So how could the NVIDIA VGX make this possible?

I envision a rack of 1U boxes with a top-of-rack InfiniBand switch to handle the interconnects between nodes. This setup is fairly normal. The real change happens when you add top-of-rack Aprius or Virtensys PCIe extender boxes, into which you load a number of these NVIDIA VGX cards to be used by the VMs within the rack, or even spanning racks.

The immediate gain is for graphics-intensive applications, but the long-term gain of this new technology would be for the HPC community. So how did NVIDIA make this possible? By creating a new type of graphics board that allows multiple users to share the same board (4 GPUs with 192 cores each), and by creating a GPU Hardware Hypervisor which integrates with other hypervisors to give users access to the GPUs. Granted, it is another layer from a management perspective, but the gains are worth it. The administrator is able to assign GPU capability to each user. In the case of HPC, "user" translates to a workload, not a physical user sitting at a desk. In essence, we can now assign GPU capability to virtual machines, and hence to either users or workloads. But this would not be possible without the previously mentioned breakthroughs in GPU computing architectures (specifically NVIDIA Kepler).
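As a rough sketch of what per-VM GPU assignment looks like at the hypervisor layer today, here is a libvirt domain XML fragment that dedicates a PCIe GPU to a single guest via PCI passthrough (the PCI address shown is hypothetical). A VGX-style GPU hypervisor goes a step further, letting several guests share one board rather than dedicating the whole device:

```xml
<!-- Illustrative libvirt <hostdev> entry inside a guest's <devices> section.
     Passes the GPU at host PCI address 0000:06:00.0 (hypothetical) through
     to the VM; managed='yes' lets libvirt detach/reattach the host driver. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

The limitation this illustrates is exactly what VGX addresses: plain passthrough is one GPU per VM, while a GPU hardware hypervisor multiplexes the board's GPUs across many VMs.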

The only questions in my mind are:

  • can this GPU hypervisor layer (which is likely mostly in silicon) allow an administrator to assign more GPU cores, spanning multiple cards, to a single workload? If we could do this, then HPC within the cloud is achievable.
  • when will the GPU be available for laptops and workstations, so that we can take advantage of this in the type II hypervisors we run for demo labs, etc.?
  • when will mezzanine cards be available for blade-type devices?
  • will security and performance management vendors offload to GPUs within virtual environments?

For more information, click here.
