The Virtualization Practice

As mentioned in a couple of recent posts, I have been building a prototype application using open-source technologies that I plan to deploy to a number of available PaaS cloud platforms. The application is written in Groovy (with some bits in Java) and built on the Grails framework. This article is about my use of the Red Hat OpenShift PaaS and the controls it makes available.

The single most dangerous part of this new pricing (to VMware) is rooted in the following fact: what is left to virtualize is very different from what has been virtualized to date. If all VMware has done is rework its licensing to replace one metric (cores) with another (vRAM) in a manner that would have preserved the same revenue from its existing customers, then VMware has totally missed the boat.

RES Baseline Desktop Analyzer is a free, online, Microsoft Windows Azure-hosted service that gives you visibility into your existing desktop infrastructure through a real-time analysis of your environment and user base. RES has shown interesting innovation in the presentation of its Baseline Desktop Analyzer, and the tool can work well as an initial guide to the state of your current desktop estate. But it is only a guide: it can convey the scale of the task. To know your desktop environment fully, and to plan a migration campaign, you will need a wider set of information and likely additional tools and support.

Licensing: Pools and Architecture Changes?

In the past, virtualization architects and administrators were told that the best way forward was to buy as much fast memory as they could afford and to standardize on one set of boxes with as many CPUs as they dared use. With vRAM pool licensing, this type of open-ended RAM architecture will change, as I now have to consider vRAM pools when I architect new cloud and virtual environments. So let's look at this first from the perspective of existing virtual environments, and then move on to new virtual and cloud environments. How much of a change will this be to how I architect things today, and how much of a change is there to my existing virtual environments? Is it a better decision to stay at vSphere 4? Or to switch hypervisors entirely?
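To make the new sizing consideration concrete, here is a rough sketch of the arithmetic behind a vRAM pool. The function name and the 48 GB-per-license entitlement figure are illustrative assumptions for this sketch, not VMware's published numbers; check your edition's actual entitlement when doing real capacity planning.

```python
import math

def licenses_needed(total_vram_gb, physical_cpus, entitlement_gb_per_license):
    """Estimate licenses required under a pooled-vRAM model.

    You need at least one license per physical CPU, and the pooled
    entitlements of all licenses must cover the configured vRAM of
    your powered-on VMs; buy extra licenses if the pool falls short.
    """
    base = physical_cpus  # one license per physical CPU, regardless of vRAM
    to_cover_vram = math.ceil(total_vram_gb / entitlement_gb_per_license)
    extra = max(0, to_cover_vram - base)
    return base + extra

# Hypothetical cluster: 4 hosts x 2 CPUs = 8 CPUs, with 512 GB of
# configured vRAM across powered-on VMs, at an assumed 48 GB entitlement.
print(licenses_needed(512, 8, 48))  # -> 11 (8 per-CPU + 3 to cover vRAM)
```

The point of the exercise: under per-core licensing, only the CPU count mattered; under a pooled model, loading hosts with "as much fast memory as you can afford" can itself drive the license count up.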

Over the last few months an additional subproject codenamed Quantum has emerged which deals explicitly with networking and has participation from networking giants Intel and Cisco as well as from Citrix. It’s a mechanism for defining network topologies aimed at providing Layer-2 network connectivity for VM instances running in clouds based on the OpenStack cloud fabric. It is designed to be extensible to allow higher-level services (VPN, QoS, etc.) to be built on top, and to cleanly handle the “edge of network” problem (i.e. the binding of the cloud into the internet).
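To illustrate the shape of the abstraction, here is a minimal conceptual sketch of an L2 topology model of the kind Quantum exposes: tenants create networks, and a VM's virtual NIC is plugged into a network through a port. The class and method names are illustrative only, not the actual Quantum API.

```python
class Network:
    """A Layer-2 segment: a broadcast domain VMs can be attached to."""

    def __init__(self, name):
        self.name = name
        self.ports = []  # each port binds one VM virtual NIC into this segment

    def create_port(self, vm_interface):
        """Attach a VM's virtual NIC and return a port id within this network."""
        self.ports.append(vm_interface)
        return len(self.ports) - 1

# A tenant defines a private L2 network and plugs two VM NICs into it;
# both VMs now share a broadcast domain, independent of physical topology.
tenant_net = Network("tenant-a-l2")
p0 = tenant_net.create_port("vm1-eth0")
p1 = tenant_net.create_port("vm2-eth0")
print(tenant_net.name, p0, p1)  # -> tenant-a-l2 0 1
```

The value of such a model is exactly the extensibility the article describes: once connectivity is expressed as networks and ports rather than physical wiring, higher-level services like VPN or QoS can be layered on the same objects.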

vSphere 5 – Virtualize Business Critical Applications with Confidence

Just in time for the adoption of vSphere 5 by enterprises seeking to virtualize business-critical and performance-critical applications, AppFirst, BlueStripe, and ExtraHop have pioneered a new category of APM solutions. This new category is focused upon allowing IT to take responsibility for application response time for every application running in production. This is an essential step on the road toward virtualizing the 60% of applications that remain on physical hardware.

Countdown to Launch

Friday was the day the last space shuttle launched into space. The shuttle Atlantis was on the launch pad and ready to go. As I watched the clock count down to zero, I found myself reflecting on the fact that this launch would be the very last space shuttle flight. I grew up in Florida and have been able to walk outside and watch the shuttles launch into space over the years. I have enjoyed watching the launches as well as feeling the sonic booms when a shuttle would fly overhead on the way to the runway for touchdown. For me and many others, this launch signifies the end of an era and the start of something new.