In “IT as a Service Reference Architecture”, we presented a categorization of the functionality and the products that are needed in order to construct an IT as a Service system. Purposely missing from that architecture was the question of how to monitor the performance of the services delivered from the service catalog via the underlying policies and automation in the IT as a Service stack. We left monitoring out not because it is unique to IT as a Service (as a Service Catalog is), but because it is pervasively needed from the physical elements that support the environment all of the way up through the applications and services, whether those are delivered manually or on an ITaaS basis.
However, once one thinks about the ramifications of an operating IT as a Service environment, it is clear that such an environment will place new and unique requirements upon monitoring approaches and solutions. In order to get our arms around this, let’s first understand what a functioning IT as a Service environment might look like, and what kind of activity it would produce:
- First and foremost, IT as a Service is about letting users and business constituents order services from an enterprise IT organization in the same manner that they might order applications and services from IaaS, PaaS, SaaS, and DaaS (Desktop as a Service) cloud providers. In fact, a service in the eyes of one of your business constituents might be a “mashup” of one or more applications provided by you on an IaaS or PaaS basis with an application ordered up from a cloud vendor on a SaaS basis, and delivered to users on a DaaS basis. A great example of this is what Microsoft is promising with Azure – where users will be able to use the credentials from your internal AD server, log on once, and get services from IT, services from the Microsoft Azure cloud, and services based upon a mixture of the two without the user knowing or caring where the services come from.
- The rate of change in the set of services delivered by an IT as a Service system will be very high. There might be a high rate of creation of new services, and there will certainly be a high rate of change in existing services.
- The changes to these services will come from two sources. Business constituents will be constantly reconfiguring these services so that they meet their ever-evolving needs. IT will need to be constantly updating the configuration and supporting software for these services so that they remain compliant with security policies and up to date with the latest system software patches and updates.
So we have a rapidly growing set of applications and services, where both the applications and the underlying software infrastructure are changing rapidly, and no assumptions can be made about how these services are constructed (imagine Service Oriented Architecture on steroids). What would we conclude about how to ensure availability and performance (including end user experience) in such an environment?
- The first thing that has to happen is that performance management at all layers of the stack needs to become multi-tenant, and therefore aware of which customer the infrastructure or the application is serving. A great example of this (one of the few) is the integration Zenoss has done with VMware vCloud Director – tapping into VCD’s knowledge of who the constituents of the services are. This functionality will have to be implemented pervasively, and significant technical challenges exist. Infrastructure Performance Management tools will need to know who the customer is when they measure the latency of I/O requests through the infrastructure chain. Applications Performance Management tools will likewise need to be fully aware of who owns the transactions that are being measured and traced through the application systems.
- Self-configuration and continuous discovery will become of paramount importance. One of the goals of doing IT as a Service is for IT not to have to deliver the services manually. If IT has to manually configure monitoring every time a new service is instantiated, this will negate one of the benefits that justify IT as a Service to begin with. Self-configuration is one of the reasons that VMware bought Integrien, and the reason why Netuitive’s unique self-learning and automatic baselining capabilities will prove to be even more valuable in IT as a Service environments than they are in today’s dynamic virtualized environments.
- For Infrastructure Performance Management solutions, the trend will move even more strongly in favor of agentless solutions, as the infrastructure will be changing configuration too rapidly for agent-based approaches to keep up. However, the value of true Infrastructure Response Time information will rise, as this will be the only way for IT managers to measure and assure the performance of the dynamic IT environments that comprise IT as a Service initiatives. The games played by storage vendors, discussed in “NetApp to Acquire Akorri”, may become limiting factors to the growth of IT as a Service, and storage vendors may finally be forced to be more publicly transparent with their performance data.
- For Applications Performance Management tools, the key will be for the components that collect the data (agents or virtual appliances) to be deployable by the IT as a Service management tools as a part of deploying the services themselves. The data collection agents must be able to monitor the services with the information provided to them by the ITaaS stack, and then “phone home” to their management systems, which may reside in some other organization’s network (imagine an agent in an application at Amazon.com, and the management system in the IT data center or in another cloud). Leading-edge APM vendors like New Relic, AppDynamics, Quest, Optier, and OPNET have all built cloud-friendly communications and discovery into their products in anticipation of these needs.
- The question of how to measure end user experience for IT as a Service and public cloud based environments will become very important as business-critical applications migrate into these environments. Due to the rapid rate of change and dynamic nature of these environments, it will likely not be very effective to infer end user experience from inside of these systems. Rather, an “outside/in” approach that looks at transactions coming into the system and measures how long it takes the system to respond will likely be most effective.
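To make the multi-tenancy requirement above concrete, here is a minimal sketch (all class and field names are hypothetical, not from any vendor’s product) of tenant-aware metric collection — every measurement carries the identity of the customer it was taken on behalf of, so per-customer latency questions can be answered directly:

```python
from dataclasses import dataclass, field
from collections import defaultdict
import time

@dataclass
class Metric:
    tenant: str      # which customer/business constituent this measurement serves
    resource: str    # e.g. a datastore or an application tier
    name: str        # e.g. "io_latency_ms"
    value: float
    timestamp: float = field(default_factory=time.time)

class MultiTenantCollector:
    """Stores every measurement keyed by tenant, so questions like
    'what latency is customer X seeing?' can be answered directly."""
    def __init__(self):
        self._by_tenant = defaultdict(list)

    def record(self, metric: Metric):
        self._by_tenant[metric.tenant].append(metric)

    def avg(self, tenant: str, name: str) -> float:
        vals = [m.value for m in self._by_tenant[tenant] if m.name == name]
        return sum(vals) / len(vals) if vals else 0.0
```

The point is simply that tenancy is attached at measurement time, not inferred later — which is exactly what the vCloud Director integration makes possible.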
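The self-learning, automatic baselining capability mentioned above can be illustrated with a deliberately simple sketch — a sliding-window mean and standard deviation, with values more than k standard deviations away flagged as anomalous. Products like Netuitive’s use far more sophisticated statistics; this only shows the shape of the idea, and all names are illustrative:

```python
import math
from collections import deque

class Baseline:
    """Learns a per-metric baseline from a sliding window of recent values
    and flags observations more than `k` standard deviations away.
    No human ever configures a threshold."""
    def __init__(self, window=100, k=3.0):
        self.window = deque(maxlen=window)
        self.k = k

    def observe(self, value: float) -> bool:
        """Record `value`; return True if it is anomalous vs. the baseline."""
        anomalous = False
        if len(self.window) >= 10:  # wait for a minimal learning period
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var)
            anomalous = std > 0 and abs(value - mean) > self.k * std
        self.window.append(value)
        return anomalous
```

Because the baseline is learned rather than configured, a newly instantiated service starts being monitored meaningfully without any manual setup — the property ITaaS demands.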
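The “phone home” pattern for data collection agents can be sketched as follows — the agent buffers measurements and pushes them outbound to its management system, so no inbound firewall rules in the hosting cloud ever need to change. The transport is pluggable here (in practice it would be an outbound HTTPS POST); all class and parameter names are hypothetical:

```python
import json

class PhoneHomeAgent:
    """An agent deployed alongside a service by the ITaaS stack. It only
    ever makes outbound calls to its management system, which may live
    in another organization's network entirely."""
    def __init__(self, service_id: str, transport):
        # `transport` is any callable that accepts a JSON payload;
        # in production this would be an outbound HTTPS POST.
        self.service_id = service_id
        self.transport = transport
        self.buffer = []

    def record(self, name: str, value: float):
        self.buffer.append({"metric": name, "value": value})

    def flush(self):
        payload = json.dumps({"service": self.service_id,
                              "samples": self.buffer})
        self.transport(payload)
        self.buffer = []
```

The design choice worth noting is the direction of the connection: because the agent initiates everything, the management system can sit in the IT data center or another cloud without any firewall coordination.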
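The “outside/in” approach described above can likewise be sketched simply — measure at the edge, from the moment a request arrives to the moment the response leaves, with no instrumentation inside the (rapidly changing) system itself. Names below are illustrative:

```python
import time

class OutsideInMonitor:
    """Measures end user experience from outside the system: wall-clock
    time between a request arriving and its response leaving."""
    def __init__(self):
        self.samples = []

    def measure(self, handler, request):
        # `handler` is the opaque system under observation; we never
        # look inside it, only time its responses.
        start = time.perf_counter()
        response = handler(request)
        self.samples.append(time.perf_counter() - start)
        return response

    def percentile(self, p: float) -> float:
        s = sorted(self.samples)
        return s[min(len(s) - 1, int(p / 100 * len(s)))] if s else 0.0
```

Because nothing inside the system is touched, the measurement survives arbitrary reconfiguration of the services being observed — which is precisely why outside/in is attractive for these environments.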
What Should Enterprises Do?
If you are a user of IBM Tivoli, HP OpenView, BMC Patrol, or CA Unicenter, or of any other Business Service Management tool built and marketed in the era of enterprise management frameworks, the first thing you should do is uninstall these tools as servers and applications move from a physical environment to a virtual environment, and then into ITaaS or cloud infrastructures. Of the “big four”, CA is the only vendor that has (through acquisition) refreshed its technology portfolio to the point that it can offer products that meet these new requirements. At the infrastructure layer, the investigation of solutions should start with Akorri, CA, vKernel, Quest vFoglight, Virtual Instruments, VMTurbo, Xangati and Zenoss. At the applications layer, the process should start with AppDynamics, BlueStripe, New Relic, Opnet, Optier and the Quest Foglight products. The most important part of ensuring proper service levels for IT as a Service environments will be ensuring that end users are having a proper experience, which argues for end-user-aware solutions like those from Knoa and Aternity.
Monitoring the performance of the infrastructure, applications and services in IT as a Service environments will require that monitoring solutions become multi-tenant, that they can be instantiated by ITaaS management tools without any further configuration, and that they automatically “find” their back-end management systems through whatever firewalls may be in place. These requirements will probably be the straw that breaks the camel’s back for the heavyweight, complex legacy tools that were in place prior to the onset of virtualization, the public cloud, and now IT as a Service. ITaaS is the tipping point that should cause most enterprises to ignore every monitoring tool that they have bought in the past and to start over with a clean sheet of paper.