It is the start of 2011 and I hope everyone has not broken their New Year’s Resolutions already. To start the year off, I would like to encourage, and perhaps challenge, you to become a part of your local VMware User Group, or VMUG as we like to call it. Last year I did a post on My Experience with VMUGs, and I am a full supporter of this program and the good it can bring. Although I have a bias for VMUGs over other types of user groups, the concept of people helping people rates high in my book, and I would like to challenge you all to get involved.
It is the last few days of the year and time for a review of virtualization in 2010. Although VMware was founded in 1998, it was not until 2001 that I first heard of VMware and played with the Workstation product to run different flavors of Linux. So for me, 2010 closes out a great year in virtualization as a whole, as well as a decade of virtualization, and what a ride it has been.
Open Source continues to be an important part of the mix in Virtualization and Cloud. Indeed, this year has seen major developments in established players at the Operating System and Hypervisor level, as well as a major new cloud entry at the IaaS cloud layer.
The question of whether and how to replace DRS is really part of the larger question of what is in the virtualization platform and what is not. Clearly the virtualization platform consists of much more than the hypervisor. VMware would like to define the virtualization platform as all of vSphere Enterprise Plus, and then suggest that vCloud Director and its own performance management solutions are logical extensions of that platform. Enterprises need to be careful about where they draw their own lines in this regard. As VMware is a clear market leader both in terms of product functionality and enterprise installations, VMware needs to be given full credit for the quality of vSphere and its success. However, full credit does not imply that one is 100% locked into the VMware solution: there is room to pursue third-party IT as a Service, Performance Management, and Service Assurance strategies, as well as to replace or augment components of vSphere.
If you are a hyperscale (such as for the Cloud) data center manager, one of your top concerns is always how to get the maximum amount of computing work done per Watt of power consumed. With that concern at the forefront, cloud providers like Google, Microsoft, and Facebook have strong incentives to explore new approaches to delivering compute cycles. Rumors coming out of Facebook suggest that it is looking to move away from its current x86 architecture platform in favor of servers based on ARM Holdings’ Cortex processor range. Porting an entire service to a new processor platform may not appear to be a sensible direction to take, but porting to a new architecture is more a financial consideration than a technical one. If the cost per unit of performance justifies it, it is cheaper to pay a few programmers to rework the apps for a new architecture than it is to buy more servers.
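The financial argument above is a simple break-even calculation. A rough sketch, using entirely made-up numbers (the function name, dollar figures, and per-unit costs are illustrative assumptions, not real Facebook or ARM data):

```python
def porting_breakeven(port_cost, x86_cost_per_perf, arm_cost_per_perf, perf_units_needed):
    """Hypothetical break-even sketch: porting pays off when the hardware
    and power savings per unit of performance, summed over the capacity
    the service needs, exceed the one-time engineering cost of the port.
    All figures passed in are illustrative assumptions."""
    hardware_savings = (x86_cost_per_perf - arm_cost_per_perf) * perf_units_needed
    return hardware_savings - port_cost  # positive => porting is the cheaper option

# e.g. a $2M porting effort, $50 vs. $35 per performance unit, 200,000 units needed
print(porting_breakeven(2_000_000, 50, 35, 200_000))  # → 1000000 (porting saves $1M)
```

The point is only that the decision hinges on scale: the larger `perf_units_needed` grows, the more easily a fixed porting cost is amortized away.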
As business critical applications move into production virtualized environments, the need arises to ensure their performance from a response time perspective. Legacy Application Performance Management (APM) tools are in many cases not well suited to make the jump from static physical systems to dynamic virtual and cloud-based systems. For these reasons enterprises need to consider new tools from vendors that have virtualization-aware and cloud-aware features in their APM solutions. Vendors like AppDynamics, BlueStripe, dynatrace, New Relic, OPNET, Optier, Quest, and VMware (AppSpeed) are currently leading the race to redefine the market for APM solutions.
Considering the success of Cisco’s virtualization-friendly UCS platform, it should come as no surprise to hear that Cisco intends to extend its data center virtualization footprint to include desktop virtualization as well. However, as last week’s announcement of the Cisco Virtualization eXperience Infrastructure (VXI) shows, Cisco does not expect a straight repeat of its server virtualization strategy to win the day. While Cisco’s plan to encourage mass adoption of desktop virtualization is based on the same Unified Computing System (UCS) that is behind Cisco’s current server virtualization strategy, its approach is distinctly different.
In a virtual system the tendency to translate over-provisioning of physical CPUs into over-provisioning of virtual CPUs can be very harmful, as the graph above shows. Assigning four vCPUs to a VM makes it harder for that VM to get scheduled, as the hypervisor has to wait for four physical CPUs to become available at the same time. It is therefore the case that configuring a smaller number of vCPUs for an application can actually increase the amount of CPU resource that it receives, and therefore improve its performance. Investing in tools (like VMTurbo) that do this work for you automatically can help you convince application owners of this, and thereby help their applications perform better.
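The scheduling penalty described above can be illustrated with a toy Monte Carlo sketch. This is a deliberately simplified model, assuming each physical CPU is independently busy with some fixed probability; it is not a model of any real hypervisor’s scheduler (VMware, for example, uses relaxed co-scheduling, which softens this effect), but it shows why demanding more vCPUs at once lowers the odds of getting scheduled:

```python
import random

def schedule_success_rate(vcpus, total_pcpus=8, busy_prob=0.6, trials=100_000):
    """Estimate how often a VM needing `vcpus` simultaneous physical CPUs
    finds enough of them free. Each pCPU is modeled as independently busy
    with probability `busy_prob` — an illustrative assumption, not a real
    scheduler model."""
    hits = 0
    for _ in range(trials):
        free = sum(1 for _ in range(total_pcpus) if random.random() > busy_prob)
        if free >= vcpus:
            hits += 1
    return hits / trials

for n in (1, 2, 4):
    print(f"{n} vCPU(s): scheduled on ~{schedule_success_rate(n):.0%} of attempts")
```

Under these toy parameters the single-vCPU VM is schedulable on nearly every attempt, while the four-vCPU VM misses a large fraction of its chances — which is the intuition behind right-sizing vCPU counts.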
rPath has delivered automation for a crucial process that no one else has addressed: the management of software deployment across these new dynamic, scaled-out environments. This addresses a critical source of friction and errors for enterprises seeking to benefit from the agility inherent in the new ways of building and operating application systems.