Data Center Virtualization – Virtualizing the Last Mile

Just as with a Telco, the ‘last mile’ of virtualization is often the most difficult; I would say it is even more difficult than the initial phase of virtualization. What do I mean by the ‘last mile’?

The 5-10% of systems that you have LEFT to virtualize.

These systems are your most heavily used, too X to virtualize, the most complex to migrate, dependent upon specific hardware, or travel around the world (such as laptops and other handheld devices). These issues are often highly political as well.

I Am Only 40% Virtualized, Why Should I Worry about the Last Mile Now?

In recent discussions with the group from Gestalt IT and Tech Field Day it was brought to my attention that many companies do not virtualize even close to the Last Mile. Should they be concerned? I would say yes. A proper architecture and design will get you close to being able to virtualize the Last Mile. Perhaps it is time to study what is keeping you from virtualizing more. Is it:

  • Politics
  • Non-x86 Systems
  • Systems deemed too big to virtualize

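As a thought exercise, that audit can be sketched as a simple classification pass over an inventory of your remaining physical systems, so each blocker can be tackled separately. The field names, thresholds, and inventory below are purely hypothetical examples, not output from any real tool.

```python
# Hypothetical sketch: classify the remaining physical systems by what
# blocks their virtualization. Field names and thresholds are illustrative.

def classify_blocker(system):
    """Return the primary reason a system is still physical."""
    if system.get("arch", "x86") != "x86":
        return "non-x86"
    if system.get("memory_gb", 0) > 512 or system.get("cores", 0) > 64:
        return "deemed too big"          # thresholds are arbitrary examples
    return "politics"                    # no technical blocker remains

# A made-up inventory of the 'last mile' systems
inventory = [
    {"name": "db01", "arch": "sparc", "memory_gb": 64},
    {"name": "hpc-node-17", "arch": "x86", "memory_gb": 1024, "cores": 128},
    {"name": "legacy-app", "arch": "x86", "memory_gb": 32, "cores": 8},
]

by_blocker = {}
for s in inventory:
    by_blocker.setdefault(classify_blocker(s), []).append(s["name"])

print(by_blocker)
# {'non-x86': ['db01'], 'deemed too big': ['hpc-node-17'], 'politics': ['legacy-app']}
```

The point of such a pass is that each bucket gets a different remedy: politics calls for management tooling and negotiation, non-x86 calls for vendor-specific virtualization, and "too big" calls for aggregation or decomposition.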
Politics of the Last Mile

The politics of the Last Mile generally fall into the category of too X to virtualize, where X is whatever will keep the system from becoming virtualized. I have heard too ‘secure’, too ‘dangerous’, too ‘critical’, etc. What is actually meant, however, is rarely what is said: it boils down to fear, uncertainty, and doubt driven by either bad advice or historical limitations.

Look into the reasons the politics exist; usually they boil down to feelings of loss of control and money. Loss of control is a hard one to combat and will take a careful review of the many existing virtualization management tools to provide managers with the sense of control they desire. This ends up being a discussion about the need for such control and how to provide it within the virtual environment. Assuring the performance of the applications involved can be a critical part of resolving these issues.

The money aspect could be related to existing contracts, so perhaps you need to plan ahead to throw the switch on migrations when the contracts expire, or work with the contracting group to improve your hardware, services, or both to allow for virtualization.

Non-x86 Systems

Many non-x86 systems have their own mechanisms for virtualization; the key is to discover what is necessary to realize these cost savings, where applicable. You may find that the heavily utilized non-x86 systems are difficult, but not impossible, to virtualize. A few tools that exist:

  • HP Itanium systems have HP Integrity VM, or you could migrate from Itanium to ProLiant-based VMs.
  • SPARC systems can be virtualized on x86 hardware using instruction-set translation or an x86 version of Solaris.
  • IBM mainframes have their own virtualization mechanisms (LPARs and z/VM).

Tools and techniques abound for virtualizing non-x86 systems; the goal is to find them by talking to the vendors themselves about what they suggest. Some may suggest a container-like solution, where a specific host has its resources split between the containers in use. Others may have virtualization tools similar to VMware vSphere, and still others may have tools that translate from one instruction set to x86. Research is required to determine what is best for these non-x86 systems.

Systems Too Big to Virtualize

There are some systems that appear to be too big to virtualize, such as High Performance Technical Computing clusters; however, these clusters themselves are made of many discrete components that could be virtualized. I have discussed the mechanisms to virtualize these systems before. Yet there may be another mechanism available: use discrete smaller units together to create a larger whole, in order to virtualize large systems that need huge amounts of memory. Instead of running many on one, such a system would run many VMs on many nodes, but see the nodes as a single system. This can be achieved with tools such as ScaleMP, which lets you take smaller systems and aggregate them together within a cluster that is then presented, as a virtualized layer, to a single operating system. If that operating system happened to be VMware vSphere, Citrix XenServer, or Microsoft Hyper-V, you would have a many-to-many configuration that would allow you to virtualize nearly any workload, even those that require large amounts of memory to run.


Going the Last Mile will require you to think outside the box and determine the exact requirements to fulfill this dream. However, there are still some technological challenges in going to a 100% virtualized environment.

  1. People still want their own desktops and laptops, so VDI adoption has lagged.
  2. There is generally a huge amount of networking hardware necessary to go the Last Mile. Top-of-rack switches such as Xsigo's and those for the Cisco UCS go a long way toward making use of IO virtualization, but there is still a long way to go: the virtualization hosts must support multi-root IO virtualization (MR-IOV) to make better use of existing adapters.
  3. There is quite a bit of politics involved in virtualizing the Last Mile.
  4. There are still security concerns with running everything virtual, mostly based on attacks that do not yet target the systems in use.
  5. There is always a need for an out-of-band physical mechanism to manage the virtual environment if the virtualized management tools fail.
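On the IO virtualization point: while multi-root IOV support remains rare, adapters commonly advertise the related single-root capability (SR-IOV), and on Linux you can spot it in `lspci -v` output. Here is a hedged sketch that parses a canned sample so the logic is self-contained; on a real host you would feed in the actual `lspci -v` output instead.

```python
# Sketch: find PCI devices that advertise the SR-IOV capability.
# SAMPLE_LSPCI is a trimmed, illustrative sample of `lspci -v` output,
# not captured from a real machine.

SAMPLE_LSPCI = """\
03:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
\tCapabilities: [160] Single Root I/O Virtualization (SR-IOV)
05:00.0 Ethernet controller: Broadcom NetXtreme BCM5719
\tCapabilities: [48] Power Management version 3
"""

def sriov_capable_devices(lspci_output):
    """Return the PCI addresses of devices that advertise SR-IOV."""
    devices, current = [], None
    for line in lspci_output.splitlines():
        if line and not line.startswith(("\t", " ")):
            current = line.split()[0]        # device header, e.g. "03:00.0"
        elif "SR-IOV" in line and current:
            devices.append(current)
    return devices

print(sriov_capable_devices(SAMPLE_LSPCI))  # ['03:00.0']
```

An inventory like this is one way to quantify how far your existing adapters can take you before the Last Mile demands new hardware.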

The Last Mile can be virtualized, but it will take research and a solid architecture that overcomes not only the technological hurdles but also the political ones.

In my configuration, for example, I have only a few physical machines that are neither virtualization hosts nor serve a virtualization duty such as providing storage or networking for the virtual environment. Those machines are laptops, which also run virtual machines. You still need to access the virtual environment in some fashion!
