Virtualization Performance Management – Linking Response Time, Load and Chargeback

In Applications Performance Equals Response Time, not Resource Utilization, we took the position that while the performance of most applications deployed on physical hardware has traditionally been inferred from normal-versus-abnormal resource utilization statistics, once you virtualize an application you must measure its response time directly in order to ensure adequate service to business constituents and end users.

In the prior article we said that the two most important metrics are the response time of the application, and the load (transaction rate) that the application places upon the infrastructure. We suggested that IT organizations that want to virtualize more than the low-hanging fruit commit to service levels based upon application response time at a specified maximum load.

Linking Chargeback with Applications Response Time and Transaction Rate

One of the most challenging budgetary issues involved in putting a private cloud in place is figuring out how to allocate the cost of the infrastructure among the applications and their respective owners. In a physical world this was not so hard, since much of the physical infrastructure is dedicated and can be directly allocated to each application. In a virtual world, you often go from dedicated storage, servers and networking to a much higher level of sharing at the storage and server levels.

To the extent that chargeback is being done in private clouds today, it is based upon how much of the environment's resources each application uses. Charging for the amount of storage used is a rational first step toward allocating the cost of storage. Similarly, charging for the cost of servers based upon how much of their CPU and memory resources an application consumes also makes sense as a first step.
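As a concrete illustration of that first step, here is a minimal sketch of utilization-based chargeback: a shared bill split in proportion to each application's resource consumption. The application names, dollar amounts and CPU-hour figures below are invented for the example, not taken from any real environment.

```python
# Hypothetical sketch of utilization-based chargeback ("first step"):
# split a shared infrastructure bill in proportion to resource usage.
def allocate_costs(total_cost, usage_by_app):
    """Return each application's share of total_cost, proportional to
    its share of the summed resource usage."""
    total_usage = sum(usage_by_app.values())
    return {app: total_cost * used / total_usage
            for app, used in usage_by_app.items()}

# Example: three VMs sharing a $9,000/month cluster, charged by CPU-hours.
cpu_hours = {"crm": 300.0, "billing": 500.0, "reporting": 100.0}
charges = allocate_costs(9000.0, cpu_hours)
print(charges)  # {'crm': 3000.0, 'billing': 5000.0, 'reporting': 1000.0}
```

The same proportional split works for storage GB or memory GB-hours; the point the article goes on to make is that this split ignores the service level being delivered.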

But allocating costs based upon resource utilization alone misses the most important point: the linkage between service level (the required response time of the application), the load that the application places on the environment, and the cost of hosting that application. By way of example, in this YouTube demo of vCenter Operations, it is clear that one VM is saturating the capacity of an entire array to process I/O operations. In a case like this, it really does not matter what percentage of the disk space the application uses; what matters is the percentage of the array's capacity to process I/O operations that the application consumes.

Therefore an application that must deliver an average response time of 1 second, with 99% of response times under 1.2 seconds at a transaction rate of 10,000 transactions per second, will be more expensive to host than one with either less stringent response time requirements or a lower transaction rate. Recasting how chargeback is done to incorporate these concepts has the following benefits:

  • The discussion of application service levels is in terms that the application owners understand and agree to. They understand response time and they understand transaction rate. They really do not care about CPU, memory, network I/O, and disk I/O utilization rates.
  • The team that owns the virtual infrastructure or private cloud can then commit to service levels in response time and load terms. Since IT in general never committed to service levels in these terms on physical hardware, this constitutes a major new commitment to service level on the virtualized platform, and therefore a major new benefit of that platform.
  • If IT can commit to service levels on these terms for virtualized systems, this may be the incentive the application owners need to agree to have their applications virtualized.
  • Finally, if all of this is charged back to the application owners in terms that they find acceptable and valuable (you are paying for response time and transactions per second), virtualizing their applications will then make business sense.

So How To Do This?

The key to pulling this off is to start with a product that can measure response times and transaction rates, either automatically for all of your applications, or at least for the applications you care about. Note that this is not the typical use case for an Application Performance Management solution. The goal here is not to find problems in code, but rather to easily and automatically discover applications, instrument them automatically (no configuration required), and then capture the response time and load statistics so that these can become the basis of SLA and chargeback agreements between the IT organization and application owners.
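Whatever product does the collecting, the raw measurements reduce to the same three numbers the SLA is written in: average response time, the 99th-percentile response time, and the transaction rate. A minimal sketch, assuming the monitoring tool hands you (timestamp, response time) pairs in seconds:

```python
# Sketch: turn raw (timestamp_s, response_time_s) samples into the three
# SLA numbers discussed above. Input format is an assumption for the
# example; real APM products export their own schemas.
def sla_baseline(samples):
    times = sorted(rt for _, rt in samples)
    n = len(times)
    avg = sum(times) / n
    p99 = times[min(n - 1, int(n * 0.99))]   # simple nearest-rank p99
    span = max(ts for ts, _ in samples) - min(ts for ts, _ in samples)
    tps = n / span if span > 0 else float(n)
    return {"avg_rt": avg, "p99_rt": p99, "tps": tps}

# Five made-up samples spanning two seconds of wall-clock time.
samples = [(0.0, 0.10), (0.5, 0.12), (1.0, 0.30), (1.5, 0.11), (2.0, 0.15)]
print(sla_baseline(samples))
```

Comparing these numbers against the committed targets (for example, the 1-second average / 1.2-second p99 / 10,000 TPS figures used earlier) is what turns raw monitoring data into an SLA and chargeback baseline.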

Some solutions that are appropriate for this use case are listed in the table below. Note that the only solution that will work for every one of your applications (assuming that they all communicate over the network) is BlueStripe. Every other solution works for a particular set of applications, but in return offers a higher level of detailed diagnostics for that set.

Vendor        Deployment    App Types
AppDynamics   On Premise    Java/.NET
BlueStripe    On Premise    All TCP/IP on Windows or Linux
dynaTrace     On Premise    Java/.NET
Coradiant     On Premise    All Web Applications
New Relic     SaaS          Ruby/Java/
              On Premise    Java/.NET
              On Premise    HTTP/Java/
