Is it time to plan for the virtual future in our virtual designs? Happy New Year and welcome to 2013! What a year 2012 turned out to be for virtualization and cloud computing in general. Microsoft Hyper-V, Red Hat, and VMware have all made quite a few enhancements to their hypervisors, and we have finally reached a point where there is real competition between them. The competition boundaries are also expanding to include much more than the hypervisor itself, as we start to focus on the ecosystem as a whole. Toward the end of 2012 the industry began presenting multi-hypervisor management capabilities and solutions. I see this area as one to really watch in 2013, and it is what prompts the question above.

Although multi-hypervisor management is still in its infancy, I think it is time we start planning for the future. What I mean is that, as we move forward with new deployments and infrastructure refreshes, we should build the capability of running multiple hypervisors into our virtual designs. The practice of running multiple hypervisors will, in my opinion, only grow from here. I am not saying the complete design for multiple platforms should be presented up front, but our designs should include a placeholder, marked off and ready, so that if the decision is made to deploy a parallel hypervisor, we have something to build on when that day comes.

I have been talking with a few companies lately that are in the planning stages of their virtual refresh, and during these conversations I like to ask: what are the plans, if any, for deploying multiple hypervisors? The answer I keep hearing is that the idea is being considered, but for now the focus is on a single hypervisor, to be revisited once the initial deployment has been completed. That answer is neither a “No” nor a “Yes”. To make life easier in the long run, this kind of deployment should be evaluated up front, with a place reserved for it and at least a general idea of how we would manage it should we be called to task.

I understand that things are changing rapidly, and what we would consider doing now might not be what we would actually do tomorrow. For example, say we have been tasked with deploying the VMware vCloud Suite onto a datacenter with two clusters (management and production). Once we have finished the virtual design, we could present Hotlink as an integration engine that gives us the ability to branch different hypervisors off the main virtual design, providing a plan for the future. When the time comes to actually deploy the branched-off hypervisor, Hotlink might no longer be the best choice, and Microsoft System Center might be picked instead. We could easily replace Hotlink with System Center in the design and still work from the initial plan.

My point is this: technology will change, but the time has come to start planning for what the future may bring and to at least have a plan in place that can be modified as technology changes over time. Why do we need to reinvent the wheel for each technology? We should be planning for heterogeneous environments from the start.

Steve Beaver

Stephen Beaver is the co-author of VMware ESX Essentials in the Virtual Data Center and Scripting VMware Power Tools: Automating Virtual Infrastructure Administration, as well as a contributing author of Mastering VMware vSphere 4 and How to Cheat at Configuring VMware ESX Server. Stephen is an IT veteran with over 15 years of experience in the industry. He is a moderator on the VMware Communities Forum and was elected vExpert for 2009 and 2010. Stephen can also be seen regularly presenting on different topics at national and international virtualization conferences.

2 comments for “Virtual Future in our Virtual Designs”

  1. Mike
     January 9, 2013 at 10:28 AM

    There’s something to be said for building or acquiring a management platform that can do heterogeneous management. I think it’s important that customers don’t feel “locked in” and that they have choice. Out of curiosity, which do you think would be easier or less costly to manage: a heterogeneous hypervisor strategy or one focused on a single supplier? I hear a lot of folks talking about heterogeneity, which is all to the good, but we seem to talk less about one of the so-called golden rules of admin: reduce your differences/variances in the configurations and systems used. A heterogeneous approach introduces more differences…

  2. Steve Beaver
    January 9, 2013 at 2:24 PM

    Mike,

    You make a very good point on the admin golden rule, but I would like to present the idea of what history tells us things will be like. Some shops will be completely one-vendor, but a lot of environments have always been heterogeneous and could not do everything with one vendor. No matter how hard they tried, they would still have mainframes, Solaris, Linux, and Microsoft, and I think history will repeat itself and stay that way moving forward. I have seen a push for mainframe apps to be ported to Linux in an attempt to make things easier, but for larger sites I think you will continue to see a trend of heterogeneous servers, and I feel the hypervisor landscape will be the same.

    The point of my post is simply to have a plan in place in case your environment ends up going that way, so you already know how you would make it possible. Preparation is my point, and history is my reference.
