When your vSphere environment gets big, managing it becomes a big data problem: it requires real-time or near-real-time data collection, complex real-time analytics, and the ability to store massive quantities of data arriving at a high rate.
Quest (vFoglight 6.6), vKernel (vOPS 4), VMTurbo, Reflex Systems, Xangati, and Cirba (Data Center Control 7.0) have all made significant product enhancements which are being demonstrated at VMworld this week. These announcements largely reflect the increasing level of sophistication in these tools, and the emergence of Hyper-V as the hypervisor upon which cross-platform management strategies are initiated.
Enterprises considering virtualization performance and capacity management solutions at VMworld 2011 should take a look at VMware vC OPS Enterprise, Netuitive, Quest vFoglight, NetApp Insight Balance, Reflex Systems, Veeam nworks, vKernel, Virtual Instruments, VMTurbo, Xangati, and Zenoss. Read the full post for the evaluation criteria.
So you are a loyal VMware customer. You have licenses for vSphere 4 and you are about 40% virtualized. Based upon the revised vRAM entitlements in vSphere 5 licensing, you think you are going to be OK as you progress through the more demanding business-critical purchased and custom-developed applications that lie in front of you. But you would like a hedge, and a simple way to manage the second hypervisor that is a part of that hedge. Help has arrived.
Ovum’s research found that desktop virtualization currently represents approximately 15% of the business PC market. However, this figure is dominated by the Presentation Virtualization model (12%), typically used in call center-type environments, as it has been for the last 10 years. If PV/terminal services are excluded, the next generation of solutions aimed at CIOs, from the likes of Citrix, Quest and VMware, holds less than 3% of the market, showing that many CIOs are holding back from taking the plunge.
What is still missing here is any kind of end-to-end view of infrastructure latency that is also real time, deterministic and comprehensive. Marrying the SAN point of view with the IP network point of view is the obvious combination. The hard issue will be identifying the applications so that these views of infrastructure performance can be surfaced on a per-application basis. In summary, we have a long way to go here, and this just might be why so many virtualization projects for business-critical and performance-critical applications are having so much trouble getting traction.
Monitoring the performance of the infrastructure, applications and services in IT as a Service environments will require that monitoring solutions become multi-tenant, can be instantiated by ITaaS management tools without any further configuration, and can automatically “find” their back-end management systems through whatever firewalls may be in place. These requirements will probably be the straw that breaks the camel’s back for the heavyweight, complex legacy tools that were in place prior to the onset of virtualization, the public cloud and now IT as a Service. ITaaS is the tipping point that should cause most enterprises to ignore every monitoring tool that they have bought in the past and to start over with a clean sheet of paper.
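To make the two requirements above concrete, here is a minimal sketch of what a zero-configuration, multi-tenant collector could look like. All of the names here (the service catalog, `discover_backend`, `TenantAgent`) are illustrative assumptions, not any vendor's actual API; a real ITaaS platform would use something like DNS SRV records or a registry service for discovery.

```python
import json

# A static catalog stands in for whatever discovery mechanism a real
# ITaaS platform would expose (DNS SRV records, a registry API, etc.).
SERVICE_CATALOG = {
    "monitoring-backend": "https://monitor.example.com/ingest",
}

def discover_backend(service_name):
    """Look up the back-end endpoint instead of requiring manual config."""
    return SERVICE_CATALOG[service_name]

class TenantAgent:
    """A collector instantiated per tenant, with no further configuration."""
    def __init__(self, tenant_id):
        self.tenant_id = tenant_id
        # The agent "finds" its back end at instantiation time.
        self.endpoint = discover_backend("monitoring-backend")

    def package(self, metric, value):
        # Every payload carries the tenant ID, so one back end can
        # safely hold many tenants' data side by side (multi-tenancy).
        return json.dumps({
            "tenant": self.tenant_id,
            "metric": metric,
            "value": value,
        })

agent = TenantAgent("tenant-42")
payload = agent.package("vm.cpu.ready_ms", 120)
```

The point of the sketch is the shape, not the transport: discovery replaces configuration, and the tenant tag on every sample is what lets a single back end serve many tenants.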
Cloud Performance Management needs to evolve to allow cloud vendors to provide their customers with a customer-specific Infrastructure Response Time metric. This, in conjunction with cloud-aware Application Performance Management solutions, is needed in order for customers to feel comfortable putting business-critical applications in the cloud.
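A customer-specific Infrastructure Response Time metric can be sketched very simply: aggregate the infrastructure-side latency of each request, grouped by customer, so each tenant sees the responsiveness of the infrastructure serving *their* workloads rather than a cloud-wide average. The sample data and field layout below are illustrative assumptions.

```python
from collections import defaultdict

# Each sample: (customer, infrastructure latency in ms for one request).
samples = [
    ("acme", 12.0), ("acme", 18.0), ("acme", 15.0),
    ("globex", 40.0), ("globex", 44.0),
]

def irt_per_customer(samples):
    """Average infrastructure latency, reported separately per customer."""
    totals = defaultdict(lambda: [0.0, 0])
    for customer, latency_ms in samples:
        totals[customer][0] += latency_ms
        totals[customer][1] += 1
    return {c: total / count for c, (total, count) in totals.items()}

print(irt_per_customer(samples))
# {'acme': 15.0, 'globex': 42.0}
```

A production metric would likely use percentiles rather than averages, but the key design choice is the same: the grouping key is the customer, so the number reported is specific to that tenant.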
Infrastructure Performance Management is the single most important performance and capacity management issue that owners of a virtual environment need to address. The reason is that the low-hanging fruit has already been virtualized; what is left are the business-critical and performance-critical applications in the hands of application owners and their business constituents. In order to convince these groups that the virtual infrastructure is performing acceptably in support of these important applications, Operations groups in charge of virtual environments need to move beyond trying to infer infrastructure performance from resource utilization patterns.
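The gap between inferring performance from utilization and measuring it directly can be shown in a few lines. In this hedged sketch (the thresholds, application names, and data are all assumptions made for illustration), both applications look healthy by a CPU-utilization test, yet direct latency measurement reveals that one of them is breaching its response-time objective:

```python
import statistics

# Hypothetical per-application samples for the same interval:
# CPU utilization (%) and the actually observed infrastructure latency (ms).
app_samples = {
    "payroll": {"cpu_util": [55, 60, 58], "latency_ms": [8, 9, 7]},
    "trading": {"cpu_util": [35, 40, 38], "latency_ms": [45, 60, 52]},
}

def utilization_verdict(samples, busy_threshold=80):
    """Utilization-based inference: flags only apps that look 'busy'."""
    return {app: max(s["cpu_util"]) > busy_threshold
            for app, s in samples.items()}

def latency_verdict(samples, slo_ms=20):
    """Direct measurement: flags apps whose median latency breaches an SLO."""
    return {app: statistics.median(s["latency_ms"]) > slo_ms
            for app, s in samples.items()}

# Utilization inference says everything is fine; measured latency shows
# the "trading" app is suffering despite modest CPU use (for example,
# because of storage or network contention that utilization never sees).
```

This is exactly the argument of the paragraph above: resource utilization is a proxy, and proxies miss problems that a direct measurement of infrastructure response time surfaces immediately.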