The Virtualization Practice

Data Center Virtualization

Data Center Virtualization covers virtualizing servers, networks, and storage, delivering server consolidation, CAPEX savings, IT agility, and improved management. Major areas of focus include the tradeoffs between the various virtualization platforms (VMware vSphere, Microsoft Hyper-V, and Red Hat KVM), the evolution of hypervisors into data center management platforms, VMware’s Software Defined Data Center strategy, and how the SDDC is spurring innovation in storage, networking, and server hardware. Covered vendors include VMware, Microsoft, Red Hat, CloudPhysics, Hotlink, Tintri, and VMTurbo.

VMware has done the right thing by taking care of its enterprise customers and making sure that they know they can purchase vSphere 5 licenses under the terms of their existing ELAs. The vast majority of smaller customers who run a small number of purchased applications are unlikely to be impacted by the new vRAM licensing, as there is probably plenty of vRAM headroom to take care of their needs. The issue is with customers who are not quite large enough to have an ELA, who have sophisticated mixes of purchased and internally developed applications, and who are trying to push the density envelope in order to maximize the return on their investment in VMware. These customers are going to have to look at the new licensing in the above terms and make their own decisions.

VMware – A Train with an Engine, 3 Boxcars, and a Caboose

VMware is already the most important, and with vSphere the best, systems software vendor on the planet. This is true not only because of the current success of the vSphere platform, but also because of the quality of the long-term strategies in place for vFabric, vCloud, and vCenter. With vSphere 5, VMware can ill afford distractions that detract from the momentum of the attack upon the remaining 60% of workloads that are not virtualized. The strategic investments in vFabric, vCloud, and vCenter then call into question the viability of keeping a desktop virtualization business (View) that, both in today’s product and in tomorrow’s vision, is a minor subset of what Citrix is delivering and articulating.

The single most dangerous part of this new pricing (to VMware) is rooted in the following fact. What is left to virtualize is very different from what has been virtualized to date. If what VMware has done is change its licensing around to replace one metric (cores) with another (vRAM) in a manner that would have allowed it to get the same revenue from its existing customers to date, then VMware has totally missed the boat.

Licensing:  Pools and Architecture Changes?

In the past, virtualization architects and administrators were told that the best way forward was to buy as much fast memory as they could afford, and to standardize on one set of boxes with as many CPUs as they dared use. With vRAM pool licensing, this type of open-ended RAM architecture will change, as I now have to consider vRAM pools when I architect new cloud and virtual environments. So let’s look at this first from the perspective of existing virtual environments, and then move on to new virtual and cloud environments. How much of a change will this be to how I architect things today, and how much of a change is there to my existing virtual environments? Is it a better decision to stay at vSphere 4? Or to switch hypervisors entirely?
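To make the architectural impact concrete, the pool accounting can be sketched as simple arithmetic: licenses contribute a per-CPU vRAM entitlement to a shared pool, and the configured vRAM of all powered-on VMs must fit within it. The 48 GB entitlement below is illustrative only; the actual per-edition figures are set by VMware’s licensing terms and have been subject to revision.

```python
# Sketch of vRAM pool accounting under vSphere 5 licensing.
# The per-license entitlement is illustrative, not an official
# figure; check VMware's current licensing terms.

VRAM_PER_LICENSE_GB = 48  # hypothetical per-CPU entitlement


def vram_pool_gb(cpu_licenses: int) -> int:
    """Total pooled vRAM entitlement across all licensed CPUs."""
    return cpu_licenses * VRAM_PER_LICENSE_GB


def licenses_needed(configured_vram_gb: int) -> int:
    """Minimum CPU licenses to cover the configured vRAM of powered-on VMs."""
    return -(-configured_vram_gb // VRAM_PER_LICENSE_GB)  # ceiling division


# Example: 4 dual-socket hosts (8 CPU licenses) running 100 VMs at 6 GB each.
pool = vram_pool_gb(8)             # 384 GB of pooled entitlement
demand = 100 * 6                   # 600 GB of configured vRAM
shortfall = max(0, demand - pool)  # 216 GB over the pool
```

The point the arithmetic makes is the one in the paragraph above: a dense configuration that was fully licensed per-socket under vSphere 4 can exceed its vRAM pool under vSphere 5, forcing either more licenses or a rethink of VM memory sizing.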

Over the last few months an additional subproject, codenamed Quantum, has emerged which deals explicitly with networking and has participation from networking giants Intel and Cisco as well as from Citrix. It’s a mechanism for defining network topologies aimed at providing Layer-2 network connectivity for VM instances running in clouds based on the OpenStack cloud fabric. It is designed to be extensible, to allow higher-level services (VPN, QoS, etc.) to be built on top, and to cleanly handle the “edge of network” problem (i.e. the binding of the cloud into the internet).
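The core abstraction is easier to see in miniature: a tenant defines a virtual L2 network, creates ports on it, and plugs VM interfaces into those ports. The classes and method names below are invented for illustration and are not the actual Quantum API; they only model the network/port/attachment relationship described above.

```python
# Toy model of the network/port abstraction Quantum exposes.
# Names here are hypothetical, not the real Quantum API.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class Port:
    port_id: str
    attached_vm: Optional[str] = None  # VM interface plugged into this port


@dataclass
class L2Network:
    network_id: str
    ports: Dict[str, Port] = field(default_factory=dict)

    def create_port(self, port_id: str) -> Port:
        port = Port(port_id)
        self.ports[port_id] = port
        return port

    def plug(self, port_id: str, vm_interface: str) -> None:
        # Attaching a VM interface gives the instance L2 connectivity
        # on this network; higher-level services (VPN, QoS) would be
        # layered on top of this model by plugins.
        self.ports[port_id].attached_vm = vm_interface


net = L2Network("tenant-net-1")
net.create_port("port-a")
net.plug("port-a", "vm-1/eth0")
```

The value of the abstraction is that the plug/unplug operations are independent of how the L2 segment is actually realized (VLANs, tunnels, etc.), which is what leaves room for vendor plugins from the likes of Cisco and Citrix.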

vSphere 5 – Virtualize Business Critical Applications with Confidence

Just in time for the adoption of vSphere 5 by enterprises seeking to virtualize business-critical and performance-critical applications, AppFirst, BlueStripe, and ExtraHop have pioneered a new category of APM solutions. This new category is focused upon allowing IT to take responsibility for application response time for every application running in production. This is an essential step on the road toward virtualizing the 60% of applications that remain on physical hardware.
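At its simplest, "taking responsibility for application response time" means measuring the elapsed time of every production request and tracking the distribution, not just the average. The sketch below is a generic illustration of that measurement; the sample data and data model are invented, and this is not how any of the vendors named above actually collect their data.

```python
# Hedged sketch: computing per-request response times and a tail
# percentile, the kind of figure a production APM tool surfaces.
# Timestamps are hypothetical samples, in seconds.
from statistics import quantiles

requests = [  # (start_ts, end_ts) per request
    (0.00, 0.12), (1.00, 1.05), (2.00, 2.45), (3.00, 3.08),
]

response_times = [end - start for start, end in requests]
avg = sum(response_times) / len(response_times)
p95 = quantiles(response_times, n=20)[-1]  # rough 95th-percentile estimate
```

The reason tools in this category emphasize percentiles is visible even in this tiny sample: the one slow request dominates the tail while barely moving the average, and it is the tail that users of a business-critical application actually experience.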

Countdown to Launch

Friday was the day that the last space shuttle was launched into space. The shuttle Atlantis was on the launch pad and ready to go. As I watched the clock count down to zero, I found myself reflecting on the idea that this launch would be the very last space shuttle flight. I grew up in Florida and have been able to walk outside and watch the shuttles launch into space over the years. I have enjoyed watching the launches, as well as feeling the sonic booms when the shuttle would fly overhead on the way to the runway for touchdown. For me and many others, this launch signifies the end of an era and the start of something new.

ExtraHop has now made an important contribution to the question of how to measure application performance across physical and virtual environments. Properly deployed, ExtraHop can play a critical role in helping enterprises virtualize the remaining 60% of applications that are “hard”, “performance critical”, and “business critical”. With vSphere 5.0 right around the corner, the timing could not be better.