The Virtualization Practice

Performance Management

Performance Management covers monitoring the physical infrastructure, the virtual infrastructure, and applications for end-to-end performance and service levels. It covers Application Performance Management, Infrastructure Performance Management, Operations Management, Capacity Planning, and Capacity Management.
Environments covered include Virtualization Performance Management, Software Defined Data Center Performance Management, and Cloud Performance Management. Key issues include ensuring the performance of virtualized and cloud-based data centers, ensuring the performance of software defined data centers (SDDC performance management), and ensuring virtualized application performance, cloud application performance, and SDDC application performance. Key vendors covered include VMware, AppDynamics, AppEnsure, AppFirst, AppNeta, Astute Networks, Aternity, BlueStripe, Boundary, Cirba, CloudPhysics, Correlsense, Compuware, Dell, Embotics, ExtraHop, GigaMon, Hotlink, HP, Intigua, ManageEngine, New Relic, Prelert, Puppet Labs, Riverbed, Splunk, Tintri, Virtual Instruments, Virtustream, VMTurbo, Xangati, and Zenoss.

As business critical applications move into production virtualized environments, the need arises to ensure their performance from a response time perspective. Legacy Application Performance Management tools are in many cases not well suited to make the jump from static physical systems to dynamic virtual and cloud-based systems. For these reasons, enterprises need to consider new tools from vendors that have virtualization-aware and cloud-aware features in their APM solutions. Vendors like AppDynamics, BlueStripe, dynaTrace, New Relic, OPNET, Optier, Quest, and VMware (AppSpeed) are currently leading this race to redefine the market for APM solutions.

Virtual CPUs, CPU Ready, and Application Performance on vSphere

In a virtual system the tendency to translate over-provisioning of physical CPUs into over-provisioning of virtual CPUs can be very harmful, as the graph above shows. Assigning four vCPUs to a VM makes it harder for that VM to get scheduled, because the hypervisor has to wait for four physical CPUs to become available at the same time. Configuring a smaller number of vCPUs for an application can therefore actually increase the amount of CPU resource it receives, and so improve its performance. Investing in tools (like VMTurbo) that do this work for you automatically can help you convince application owners of this, and thereby help their applications perform better.
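The usual symptom of this co-scheduling penalty is elevated CPU Ready time. As a rough illustration (not taken from this article), vSphere exposes CPU Ready as a raw millisecond summation per sample interval, which admins commonly convert to a percentage; the function name, the 20-second realtime interval, and the per-vCPU normalization below are my own assumptions for the sketch:

```python
# Illustrative sketch: convert a vSphere CPU Ready summation value
# (milliseconds of ready-but-unscheduled time per sample interval)
# into a percentage of the interval, normalized per vCPU.
# The 20 s default matches vCenter's realtime chart interval; other
# stats rollups use longer intervals.

def cpu_ready_percent(ready_ms: float, interval_s: int = 20, vcpus: int = 1) -> float:
    """Percentage of the sample interval the VM spent waiting to be scheduled."""
    return (ready_ms / (interval_s * 1000.0 * vcpus)) * 100.0

# A 4-vCPU VM reporting 4000 ms of ready time in one 20 s sample:
print(cpu_ready_percent(4000, vcpus=4))  # 5.0
```

The point of the normalization is exactly the one made above: the same raw ready-time figure is far more alarming on a 1-vCPU VM than spread across four vCPUs, and shrinking the vCPU count is often what brings the percentage down.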

The Freemium sales model is a business model innovation best suited to inexpensive products that are very easily understood (and therefore not very new or very different) and that solve an obvious problem in a manner that is more convenient for the customer to acquire and implement. There are not many new virtualization and cloud technology companies who set out to produce undifferentiated products, which suggests that a general application of the Freemium model to startups in our ecosystem is ill advised. Enterprise customers should pay close attention to products that are being marketed in this manner, to ensure that they do not end up growing something that was purchased tactically into a strategic use case.

New Relic announced that it now supports four application types: Ruby on Rails, Java, .NET, and PHP. New Relic has therefore broken new ground on the trade-off between depth of monitoring into an application and breadth of platform support. The prior generation of byte code instrumentation vendors never supported more than two platforms, J2EE and .NET. Products that monitor the OS still cannot see into an application the way that New Relic can, and now New Relic brings this depth of insight to more platforms than anyone else has ever addressed.

If we are going to start over, why not really start over and reinvent the entire infrastructure and management software industries in the process? That way we end up with an infrastructure that was actually designed for the dynamic, agile, and scalable use cases we are trying to address with a green field approach, and an appropriate set of management tools as well. Is this going to happen? You can bet that there are already VC funded startups in stealth mode working on it.

It is also interesting to speculate what long term role the Hyperformix statistical modelling technology will play in CA performance management and performance assurance products. VMware has put its stake in the ground, via the acquisition of Integrien, that only a real time and self-learning approach will be able to keep up with the variability inherent in a virtualized or cloud based system in order to provide effective root cause analysis. It is possible that over time this modelling technology will evolve into a real time, self-learning performance management capability analogous to what is provided by VMware/Integrien and Netuitive. If this occurs, CA will be the first and only one of the big four systems management vendors with an effective root cause strategy for the new dynamic data center.
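To make the "real time and self-learning" idea concrete: the core concept behind such products is to learn a dynamic baseline per metric and flag deviations from it, rather than relying on static thresholds. The sketch below is my own minimal illustration using an exponentially weighted mean and variance; the actual Integrien and Netuitive analytics are far more sophisticated than this.

```python
import math

class DynamicBaseline:
    """Toy self-learning baseline: learns a moving mean and variance for a
    metric stream and flags samples outside a dynamic band."""

    def __init__(self, alpha: float = 0.1, sigmas: float = 3.0):
        self.alpha = alpha      # learning rate for the moving statistics
        self.sigmas = sigmas    # width of the "normal" band
        self.mean = None
        self.var = 0.0

    def update(self, x: float) -> bool:
        """Ingest one sample; return True if it looks anomalous."""
        if self.mean is None:          # first sample seeds the baseline
            self.mean = x
            return False
        dev = x - self.mean
        anomalous = self.var > 0 and abs(dev) > self.sigmas * math.sqrt(self.var)
        # Update the baseline after testing, so the spike itself is judged
        # against the previously learned behavior.
        self.mean += self.alpha * dev
        self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return anomalous

b = DynamicBaseline()
for sample in [50, 52, 48, 51, 49] * 10:   # learn normal behavior
    b.update(sample)
print(b.update(500))                        # a spike falls outside the band
```

The appeal of this style of analysis in a virtualized environment is exactly the point made above: as workloads move and resources are shared, "normal" changes constantly, so the baseline must be relearned continuously rather than configured once.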

I saw a question get posted on Twitter that intrigued me a little. The question was pretty straightforward: "How many virtual machines should I be able to run on a host?" That is a fair question in itself, but what I find intriguing is that this is the first question he asks. Is this really the first thing administrators think to ask when designing their environment? After all, there is no set formula for how many virtual machines you can run on a host. You can be a little more exact when working with VDI, because for the most part all the virtual machines are set up pretty much the same way and the numbers are a little more predictable. That is not the case with server virtualization. You are going to have servers with all kinds of different configurations and amounts of resources provisioned to the virtual machines. This variation is what changes your slot count and the number of virtual machines you can run on the host.
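Even though there is no set formula, you can bound the answer by the scarcest resource. Here is a back-of-the-envelope sketch of that reasoning; the function, the overcommit ratios, and all the numbers are made-up illustrations, not recommendations from this article:

```python
# Hypothetical slot estimate: divide each host resource (after an assumed
# overcommit ratio) by the "average" VM's footprint, and let the scarcest
# resource set the ceiling. Real sizing must also account for failover
# capacity, peak loads, and the variation between VMs discussed above.

def max_vms(host_ghz: float, host_gb: float,
            vm_ghz_avg: float, vm_gb_avg: float,
            cpu_overcommit: float = 3.0, mem_overcommit: float = 1.25) -> int:
    cpu_slots = (host_ghz * cpu_overcommit) / vm_ghz_avg
    mem_slots = (host_gb * mem_overcommit) / vm_gb_avg
    return int(min(cpu_slots, mem_slots))   # scarcest resource wins

# A 2-socket, 8-core, 2.5 GHz host (40 GHz) with 256 GB RAM,
# running "average" VMs of 2 GHz and 8 GB:
print(max_vms(40, 256, 2, 8))  # 40 -- memory is the limit here
```

Change the average VM profile (say, to memory-heavy 16 GB VMs) and the ceiling moves dramatically, which is precisely why server workloads defy a single VMs-per-host number while uniform VDI pools do not.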

Buying the Managed Objects assets of Novell would give VMware a credible entry into the Business Service Management realm, with product assets that could compete head to head with those from CA, IBM, HP, and BMC, especially on the VMware platform. However, there were significant issues with BSM as implemented by all of these vendors, and acquiring a BSM product set would not in and of itself address all of those issues (Integrien helps with root cause). The real answer here remains a virtualization and cloud competent performance assurance capability, which should be attainable without recreating the baggage of BSM.

VMware’s 5 Businesses and the “New Stack”

VMware dominates the enterprise virtualization platform business with vSphere, and is poised to create a vSphere compatible public cloud ecosystem around vCloud. Layering management software on top of these platforms is a logical progression up the value stack, as is layering an application platform (vFabric) on top of vSphere and vCloud. VMware's end user computing strategy seems too tied to VDI to break out of the fundamental limitations associated with that approach, and will likely leave the larger question of how to manage the next generation desktop to the previously mentioned startups and perhaps Symantec.