The last mile of virtualization has multiple dimensions, depending on where you are going with virtualization. When you ask about the last mile of virtualization—about what it will take to get to 100% virtualized, either within your data center or within a cloud (hybrid, public, or private)—“it depends” is the answer you will…
There is a class of applications that is extremely difficult to virtualize: graphics-intensive applications such as ProEngineer, Photoshop, and pretty much anything that requires a GPU to perform well. These applications have usually been too big or expensive to virtualize—the last mile, so to speak. That is no longer the case. With NVIDIA’s announcement of the NVIDIA VGX Cloud Platform, this and other classes of applications can now be virtualized.
A customer recently asked me, “Can we virtualize our Tier 1 app that receives 7 billion requests per day?” My initial response was, “On how many servers?” Their answer was 15. That is quite a shocking set of numbers to consider. Add in figures such as 150K sessions per second, the need for a firewall, and sub-second response times, and you end up with a few more shocking numbers. So could such workloads be virtualized, or are they too big for virtualization?
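To put those numbers in perspective, here is a quick back-of-envelope calculation. Only the request rate, session rate, and host count come from the customer; the per-host averages are simple arithmetic I derived from them, and real traffic will of course have peaks well above the daily average:

```python
# Back-of-envelope load figures for the workload described above.
# Inputs (requests/day, hosts, peak sessions/sec) are the customer's
# numbers; the derived per-host rates are my own rough arithmetic.

REQUESTS_PER_DAY = 7_000_000_000
HOSTS = 15
PEAK_SESSIONS_PER_SEC = 150_000
SECONDS_PER_DAY = 86_400

avg_req_per_sec = REQUESTS_PER_DAY / SECONDS_PER_DAY      # cluster-wide average
avg_req_per_host = avg_req_per_sec / HOSTS                # per-host average
peak_sessions_per_host = PEAK_SESSIONS_PER_SEC / HOSTS    # per-host session peak

print(f"Average requests/sec (cluster): {avg_req_per_sec:,.0f}")
print(f"Average requests/sec per host:  {avg_req_per_host:,.0f}")
print(f"Peak sessions/sec per host:     {peak_sessions_per_host:,.0f}")
```

Roughly 81,000 requests per second across the cluster, or about 5,400 per host on average—large, but not obviously beyond what a well-sized virtual machine can handle, which is what makes the question interesting.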
A recent set of VMware Communities questions has gotten me thinking about the prospect of virtualizing high performance computing (vHPC) and whether this is even practical or reasonable, and whether it would give any gains to HPC. I think there are some gains to be made, but as with everything, there are some concerns as well. This is of interest to me because at one time I was deep into High Performance Technical Computing, and marrying virtualization to HPC/HPTC would be a very interesting option.