RES Baseline Desktop Analyzer is a free, online, Microsoft Windows Azure-hosted service that gives you visibility into your existing desktop infrastructure through real-time analysis of your environment and user base. RES has shown interesting innovation in the presentation of its Baseline Desktop Analyzer. The tool works well as an initial guide to the state of your current desktop estate, and it can convey the scale of the task ahead. But it is only a guide: to know your desktop environment fully, and to plan a migration campaign, you will need a wider set of information and likely additional tools and support.
In the past, virtualization architects and administrators were told the best way forward was to buy as much fast memory as they could afford and to standardize on one set of boxes with as many CPUs as they dared use. With vRAM pool licensing, this type of open-ended RAM architecture will change, as I now have to consider vRAM pools when I architect new cloud and virtual environments. So let's look at this first from the perspective of existing virtual environments, and then move on to new virtual and cloud environments. How much of a change will this be to how I architect things today, and how much of a change is there to my existing virtual environments? Is it a better decision to stay on vSphere 4, or to switch hypervisors entirely?
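The pool arithmetic behind this concern can be sketched in a few lines. This is a minimal illustration, not licensing advice: the 96 GB per-CPU-license entitlement used below is an assumption for the example (actual entitlements vary by vSphere edition and license terms), and the helper names are my own.

```python
# Illustrative sketch of vRAM pool accounting under vSphere 5-style licensing.
# ASSUMPTION: 96 GB of vRAM entitlement per CPU license; check your actual
# edition and license terms before using numbers like these.

ENTITLEMENT_GB_PER_LICENSE = 96  # assumed per-CPU-license vRAM entitlement

def vram_pool_capacity_gb(cpu_licenses: int) -> int:
    """Total vRAM the pool is entitled to across all licensed CPUs."""
    return cpu_licenses * ENTITLEMENT_GB_PER_LICENSE

def vram_allocated_gb(vm_vram_gb: list) -> int:
    """vRAM counted against the pool: the sum of configured VM memory."""
    return sum(vm_vram_gb)

def licenses_needed(vm_vram_gb: list) -> int:
    """Smallest number of CPU licenses whose pooled entitlement covers the VMs."""
    allocated = vram_allocated_gb(vm_vram_gb)
    return -(-allocated // ENTITLEMENT_GB_PER_LICENSE)  # ceiling division

# Example: 20 dual-socket hosts = 40 CPU licenses, 300 VMs at 16 GB each.
licenses = 40
vms = [16] * 300
print(vram_pool_capacity_gb(licenses))  # 3840 GB in the pool
print(vram_allocated_gb(vms))           # 4800 GB allocated -- pool exceeded
print(licenses_needed(vms))             # 50 licenses would be needed
```

The point the sketch makes is the architectural one: under per-socket licensing alone, loading each host with RAM was free from a licensing standpoint; with a vRAM pool, configured VM memory itself becomes the constrained resource you have to design around.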
Over the last few months an additional subproject, codenamed Quantum, has emerged that deals explicitly with networking and has participation from networking giants Intel and Cisco as well as from Citrix. It is a mechanism for defining network topologies, aimed at providing Layer-2 network connectivity for VM instances running in clouds built on the OpenStack cloud fabric. It is designed to be extensible, so that higher-level services (VPN, QoS, etc.) can be built on top, and to cleanly handle the "edge of network" problem (i.e., the binding of the cloud into the Internet).
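The resource model this implies can be sketched abstractly: a tenant defines a logical Layer-2 network, then attaches VM interfaces to it via ports. The payload shapes below are illustrative only, loosely modeled on the OpenStack Networking API rather than Quantum's exact wire format, and the IDs are made up:

```python
import json

def make_network(name: str) -> dict:
    """Request body for creating a logical Layer-2 network (illustrative shape)."""
    return {"network": {"name": name}}

def make_port(network_id: str, device_id: str) -> dict:
    """Request body for a port binding a VM interface (device) to a network."""
    return {"port": {"network_id": network_id, "device_id": device_id}}

# Hypothetical IDs for illustration; a real deployment returns these from the API.
net_req = make_network("web-tier")
port_req = make_port(network_id="net-1234", device_id="vm-instance-42")
print(json.dumps(net_req))
print(json.dumps(port_req))
```

Keeping networks and ports as first-class resources, rather than side effects of booting a VM, is what makes the extensibility story work: a VPN or QoS service can operate on the same logical objects without knowing about compute.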
Just in time for the adoption of vSphere 5 by enterprises seeking to virtualize business-critical and performance-critical applications, AppFirst, BlueStripe, and ExtraHop have pioneered a new category of APM solutions. This new category is focused on allowing IT to take responsibility for application response time for every application running in production. This is an essential step on the road toward virtualizing the 60% of applications that remain on physical hardware.
Citrix has purchased Cloud.com, and this poses some interesting changes to the overall virtualization and cloud markets. One also has to wonder about the timing of the announcement, which coincided with the big announcements coming out of VMware on the same day. I see this purchase as a mixed blessing for the marketplace, but also a renewal for Citrix.
Friday was the day the last space shuttle launched into space. The shuttle Atlantis was on the launch pad and ready to go. As I watched the clock count down to zero, I found myself reflecting on the idea that this launch would be the very last space shuttle flight. I grew up in Florida and, over the years, have been able to walk outside and watch the shuttles launch into space. I have enjoyed watching the launches as well as feeling the sonic booms when the shuttle would fly overhead on its way to the runway for touchdown. For me and many others, this launch signifies the end of an era and the start of something new.
ExtraHop has now made an important contribution to the question of how to measure application performance across physical and virtual environments. Properly deployed, ExtraHop can play a critical role in helping enterprises virtualize the 60% of remaining applications that are "hard," "performance critical," and "business critical." With vSphere 5.0 right around the corner, the timing could not be better.
What is still missing here is any kind of end-to-end view of infrastructure latency that is also real-time, deterministic, and comprehensive. Marrying the SAN point of view with the IP network point of view is the obvious combination. The hard issue will be identifying the applications so that these views of infrastructure performance can be surfaced on a per-application basis. In summary, we have a long way to go here, and this just might be why so many virtualization projects for business-critical and performance-critical applications are having so much trouble getting traction.