If you are going to try to virtualize performance-critical applications in 2012, you should arm yourself with a tool that can measure how those applications perform in the eyes of their end users, which is to say their end-to-end response time. The approach you take should be a function of the mix of applications you have to support, including whether they are purchased or custom developed and, if custom developed, with what language or framework.
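To make the idea concrete, here is a minimal sketch of what "end-to-end response time in the eyes of the end user" means: time the whole transaction from request to response, not just a server-side component. The function names and the simulated transaction below are illustrative assumptions, not any particular vendor's API.

```python
import time

def measure_response_time(operation):
    """Time a single end-to-end operation from the user's perspective."""
    start = time.perf_counter()
    result = operation()
    elapsed = time.perf_counter() - start
    return result, elapsed

# Hypothetical stand-in for a real user transaction (e.g., an HTTP request
# against the virtualized application).
def sample_transaction():
    time.sleep(0.05)  # simulate 50 ms of application work
    return "ok"

result, seconds = measure_response_time(sample_transaction)
print(f"response: {result}, elapsed: {seconds * 1000:.1f} ms")
```

A real APM product does this continuously for every user transaction and correlates the timings with infrastructure metrics; the point of the sketch is only that the measurement starts and ends at the user, so it captures everything the virtualization layer adds.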
Happy New Year and welcome to 2012. As is customary this time of year, I would like to start things out with a New Year’s resolution. My resolution this year is to take the VMware VCP5 exam before the class-requirement grace period comes to an end for all of us VCP4s. When I see that someone has posted that they passed an exam I need to take, one question immediately comes to mind: what did you use as your reference and study material? Since the clock is ticking for a lot of us to complete the exam, I wanted to share some of the resources I will be using, and if you have something else that you use, please comment and share your secrets.
Data Protection techniques should be implemented and tested long before they are needed; they are a necessary component of any IT organization. However, the most recent communities podcast brought to light several implementation problems with Data Protection, specifically around Disaster Recovery: organizations still do not test their DR plans, and organizations are waiting for a hardware refresh before implementing a DR plan.
Business Agility ...
VMware is going to make progress on its automated service assurance vision this year, with initial steps coming in the Q1/2012 version of vCenter Operations and the initial release of vFabric APM. On the third-party vendor front, progress is most likely to come via partnerships between vendors who have interesting pieces of the puzzle but do not have the entire puzzle themselves. On this front the most interesting vendors are Netuitive, Prelert, BlueStripe, ExtraHop Networks, and VMTurbo. The wild card in this equation is how service assurance will fit with cloud management and offerings from vendors like DynamicOps, Abiquo, Platform Computing, and Gale Technologies.
Private cloud management offerings today are very well suited to creating and managing self-service scenarios for workloads that are either transient or that require significant scaling of resources during the daily or weekly cycle of business activity. They are not yet well suited to be the management solution through which all future workloads get provisioned and managed, but they must become so in order to participate in the further progress of virtualization. The best way for private cloud solutions to leverage that progress is to help drive it, by helping to drive the concept of automated service assurance for business-critical applications.
While the legacy enterprise management vendors might like to think of themselves as the Borg (prepare to be assimilated – there is no escape), the new technical requirements and the new buying patterns in the virtualization market do not lend themselves to a repeat of history. Legacy management vendors are unlikely to be able to acquire themselves into this market because their core platforms and business models do not work with the customers who are running virtualized environments and buying management solutions. So to my good friend Andi Mann, I respectfully disagree.
Stefano Stabellini, a senior software engineer at Citrix Systems, has announced a proof-of-concept port of the open source Xen Hypervisor to the ARM Cortex-A15 processor. The project was started in early November and has already developed to the point where it is capable of booting a Linux 3.0-based virtual machine up to a shell prompt. The Xen port has progressed so rapidly thanks to a decision to take advantage of the virtualization features introduced with the ARMv7 architecture, which keeps the port small and comparatively easy to develop. However, because of this it will not be able to run on anything older than a Cortex-A15 processor.
The management ecosystem for virtualization started to transform significantly in 2011, driven by VMware’s new management strategy and management offerings. The big four are now boxed into an untenable position with expensive software that is hard to buy and hard to deploy. In 2012 there will be aggressive partnering in the ecosystem as vendors try to compete with the VMware suite by integrating with other vendors who have adjacent functionality.
The speed at which technology changes is absolutely amazing: as soon as you buy something, the next faster, bigger model comes out. I think back to around when I started my career and remember a workstation I was using with a 200MHz processor; I was really thrilled when I got it bumped up to 64MB of RAM. Although the hardware was changing at blazing speed, you used to know you had a three- to five-year run with the operating system before you had to worry about upgrading and refreshing. VMware has been changing the rules over the last few years, with major releases coming around every two years.