Since the dawn of TCP/IP networks and distributed networks of Intel PCs and servers, there have been large numbers of point tools designed to monitor and manage specific sets of infrastructure, along with management frameworks from major vendors like CA Technologies, IBM (Tivoli), HP (OpenView and its follow-on products), and BMC that were designed to manage the entire network. The frameworks focused first and foremost on the availability of the hardware that comprises these networks, and have grown over time to cover network utilization and performance as well as server resource utilization.
Existing Issues with Frameworks
As waves of change have rolled through the enterprise IT industry, these products have lost much of their allure. This has occurred for the following reasons:
- Frameworks have proven complicated and expensive to implement, and expensive and time-consuming to manage. They require dedicated, well-staffed teams of experts to keep them properly tuned so that they actually work and add value.
- Enterprise customers do not like being locked into one product that is so difficult to displace. Once a framework is installed, the enterprise customer has little negotiating leverage when the maintenance contract comes due. The customer has to pony up the cash or face a hugely expensive and time-consuming project to evaluate a successor product and get it installed and working properly.
- Frameworks have had trouble keeping up with the rate of change in the industry. Since each framework is developed by one vendor, it is up to that vendor to decide which of the changes in the IT environment that have occurred in the last year will get supported in next year’s release. This means that when something new appears on the network, like VoIP devices, or something really disruptive arrives, like Cisco UCS, it can take 12 to 24 months for reasonable support for the new element to show up in the framework.
- The slow rate of support for new innovations in the frameworks leads enterprises to purchase point tools to address the monitoring needs of the new items. This is how J2EE application management by vendors like Wily became a category of its own. It is also how tools to manage virtualized infrastructures have come into a category of their own. This contributes to “tool sprawl,” which works against cost-effective management of the environment.
Requirements for Next Generation Solutions
Now that it seems clear that we stand upon the precipice of virtualizing more than the low-hanging fruit and are starting to virtualize the business-critical systems that comprise the core of IT and production application support, we need to ask an important question. Will frameworks change rapidly enough to morph into handling the new dynamic and virtualized data center, or will the transition to the virtualized data center make the framework irrelevant in the same manner that the CMDB has become irrelevant? To start to understand this, let’s look at the new requirements that virtualized and dynamic data centers will place upon an enterprise monitoring solution:
- Something needs to manage the basic availability and resource utilization of the virtualization environment (VMware vSphere) and its supporting physical infrastructure. Virtualization of servers is all about hard-dollar ROI on the CAPEX side and agility and cost savings on the OPEX side. The resulting solution therefore cannot be expensive to purchase, implement, and maintain, as the current frameworks are; this function needs to be done as inexpensively as possible. The bottom line is that it needs to be commoditized.
- As with all credible solutions in the virtualization space, the solution needs to support continuous discovery of a rapidly changing environment. The topology of the physical and virtual infrastructure needs to be continuously updated, through efficient polling of the underlying physical infrastructure (which is not capable of communicating configuration changes in the form of events) and effective listening to the configuration event stream from the virtualized infrastructure. This ties back to the question of what is going to replace the CMDB, which was the topic of a previous post.
- An effective replacement for an existing framework must be able to keep up with the tremendous pace of innovation that characterizes the virtualization and cloud computing markets. This solution therefore cannot rely upon any single organization to keep it up to date with these changes. An effective solution must instead be either very open or open source at the edges, so that anyone who has a device that needs to be monitored has their destiny under their own control.
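A minimal sketch of what the continuous discovery requirement implies, assuming stubbed inventory and event sources (all names here are illustrative, not any vendor's API): the physical layer is reconciled by polling and diffing snapshots, while the virtual layer pushes configuration events into the same topology model.

```python
# Hypothetical inventory source: in a real deployment this would be an
# SNMP walk of switches and hosts or a hypervisor API call; stubbed here.
def poll_physical_inventory():
    return {"switch-01": {"ports": 48}, "esx-host-01": {"cpus": 16}}

class TopologyModel:
    """Maintains a continuously updated view of the environment."""

    def __init__(self):
        self.inventory = {}

    def reconcile(self, snapshot):
        """Diff a polled snapshot against the model (physical layer)."""
        added = snapshot.keys() - self.inventory.keys()
        removed = self.inventory.keys() - snapshot.keys()
        self.inventory = dict(snapshot)
        return added, removed

    def apply_event(self, event):
        """Apply a pushed configuration event (virtual layer)."""
        if event["type"] == "VmCreated":
            self.inventory[event["name"]] = {"kind": "vm"}
        elif event["type"] == "VmRemoved":
            self.inventory.pop(event["name"], None)

model = TopologyModel()
added, removed = model.reconcile(poll_physical_inventory())
model.apply_event({"type": "VmCreated", "name": "vm-web-01"})
print(sorted(model.inventory))  # physical and virtual elements in one model
```

The design point is that both discovery styles feed one model: polling handles devices that cannot announce changes, while event listening keeps the virtual layer current without a polling lag.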
The bottom line is that in order for something to replace legacy management frameworks in the dynamic and virtualized data center, it needs to be less expensive than the legacy frameworks, easily extensible, and able to support functionality (like continuous discovery and a real-time CMDB) that is not present in existing legacy frameworks. While there is no obvious slam-dunk product to either download or buy that meets all of these needs, there are some very interesting candidate solutions that bear evaluation and further study:
- Forget the idea of a framework, and use a collection of point tools. This is the default direction in which we have been heading for some time. The network team has a network tool, the storage team has a storage tool, the server team has a server tool, and teams with important applications have Applications Performance Management (APM) tools.
- The above approach plus new virtualization and cloud specific tools. Due to the significant new requirements that dynamic data centers and data centers distributed across organizational boundaries (clouds) create, a new ecosystem of tools has emerged to meet this need. Leading vendors include Akorri, AppDynamics, BlueStripe, Hyper9, New Relic, Platform Computing, Xangati, and Zenoss. Even CA (the vendor of a framework) seems to be moving in this direction by acquiring Nimsoft (a monitoring vendor with many cloud vendors as customers), and launching virtualization focused solutions like CA Virtual Assurance, CA Virtual Automation and CA Virtual Configuration.
- Layer automatic and self-learning analytics on top of the collection of point products in order to create a multi-vendor framework. Both Netuitive and Integrien offer connectors to a wide range of products whose data is consumed by these analytics vendors in self-learning models that can take an indication of a problem in one product (for example a response time issue found by an APM product) and via correlation find the likely cause (perhaps a congestion issue in the network or storage layers). This approach provides the benefit of a framework (a unified approach to managing the entire environment) with the benefits of multi-vendor innovation to support the ever changing set of measurement points in the environment.
- Wait for the virtualization platform vendors to build out their management stacks. VMware has formed a management software unit and has a number of building blocks to work with, including the assets acquired from Ionix and internally developed assets like AppSpeed, CapacityIQ, Lifecycle Manager, and Lab Manager. Microsoft has SCOM to build upon, which is quite competent at managing Windows, and since Hyper-V is built on top of Windows, the stretch for Microsoft is not as large as it might be for other platform vendors. However, it is also the case that not since the mainframe has the vendor of a market-leading platform also been the leading vendor of the management tool suite for that platform. This is particularly an issue for virtualization, since virtualization is supported by a very diverse physical infrastructure that must be managed in a cohesive manner with the virtualization layer, which makes this an even more challenging problem for the platform vendors.
- Reinvent the framework around an open source model. Solutions that support open source innovation at the edge (the data collection pieces are open sourced, so that anyone can build a collector for any device, service, or application) hold the potential to combine structured innovation at the core (someone is in charge) with open and distributed creation of data collection agents and scripts. Zenoss is the best example of a solution that combines enterprise scale and a dynamic model of the system with a true open source approach to the support of monitored devices.
Systems management frameworks have provided an indispensable function to enterprises with large and business-critical networks and data centers. However, frameworks have become a category of expensive and slow-to-innovate legacy software, leading many enterprises to conclude that they must move beyond these products in order to properly monitor their newest environments, including those based on virtualization and public clouds. New virtualization- and cloud-focused tools are providing support for these environments that is not present in legacy management frameworks. Self-learning analytics may replace the frameworks as the “manager of managers,” or new frameworks may emerge out of the open source movement.