Big changes are afoot in the management software business. First, Quest Software agreed to be acquired by a private equity firm, usually a sign that cuts need to be made that would trash a company’s stock if it were publicly held. Then rumors cropped up that Dell was going to acquire Quest, which would certainly transform the virtualization management business, as discussed in this post. Now comes news from Bloomberg that the Dell/Quest deal is off, at least for the time being. So what is really going on here?

The History of Innovation in Platforms and Management

Since the modern (post-mainframe) IT industry got started with mini-computers in the late 1970s, there have been multiple waves of innovation:

  • Mini-computers, which made computing less expensive and easier to access than mainframes
  • The first Apple and IBM personal computers
  • The growth of the PC-compatible market led by Dell and Compaq (now part of HP)
  • The “year of the LAN” led by Novell
  • The first generation of client/server computing, led by Windows 98 on the desktop and Windows NT servers
  • The arrival of TCP/IP as the protocol to rule them all, leading to the Internet and the World Wide Web
  • Microsoft Visual Basic, the first-generation attempt to make application programming accessible to a wider set of developers
  • The Java programming language and Java application servers, the next-generation advance of client/server
  • The embrace by both Microsoft and the Java community of web services and service-oriented architectures
  • The rise of laptops as the preferred vehicle for personal computing, which was, of course, just a precursor to today’s tablets and smartphones

In all of the above cases, new management companies came into being to manage the new elements of the computing architecture. Many of these new management companies were eventually acquired by the incumbent leaders in management: CA, IBM, BMC, HP (which became a management company by acquiring Mercury Interactive), Compuware, and Quest. These leaders then integrated the newly acquired offerings into their frameworks, and managing the new thing became just a feature of the overall management suite.

Virtualization, Cloud Computing and Agile Development are Disruptive Forces

However, this time something very different is afoot. Virtualization and Cloud Computing are combining with Agile Development and new ways of delivering and selling software to simultaneously produce a set of changes that are unlike any we have ever seen before. The factors that make these changes unique are:

  1. Virtualization breaks the old method of inferring system and application performance from resource utilization; that method simply no longer works. This means that the products from the legacy vendors that rely upon it are fundamentally broken and cannot be fixed simply by having the legacy vendor buy a startup and glue its products to the side of the existing framework. The problem is evident when enterprises virtualize a server and uninstall the legacy vendors’ management agents as part of the virtualization process. (A brief sketch of why the old inference breaks appears after this list.)
  2. Cloud Computing introduces two entirely new problems. Private clouds mix workloads from different internal constituents in one virtual data center, which introduces both a multi-tenant performance management problem and a multi-tenant security problem. Public clouds have the same problems as private clouds, but also introduce one more issue: a separation of the infrastructure and the application into two separate companies, where the company that owns the infrastructure is simply not transparent or forthcoming about what is really going on under the covers.
  3. The demand for business functionality coded up in software is infinite. This is leading to innovations in tools (Java, Ruby, .NET, PHP, Python), and development processes (Agile and DevOps).
  4. New vendors like SolarWinds, Splunk, Veeam, New Relic, AppDynamics, Confio Software, AppFirst, ManageEngine, VKernel (now part of Quest), VMTurbo, Xangati and Zenoss have cropped up and employed a new business model focused upon the dual concepts of easy to try and easy to buy.
  5. The combination of virtualization, private clouds, and public clouds with rapidly arriving and rapidly changing applications, plus a new way of selling and buying software, creates a perfect storm for the management software industry.
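
As a minimal illustration of point 1: on a virtualized host, the metric that actually signals CPU contention (vSphere calls it “CPU ready” time, the time a vCPU spends waiting for a physical core) is invisible from inside the guest. The numbers below are made up for illustration, and the 10% threshold is a common rule of thumb, not a vendor specification.

```python
# Why in-guest utilization misleads on a virtualized host: two VMs can report
# the same in-guest CPU utilization while only one is starved for physical CPU.
samples = [
    # (in_guest_cpu_util_pct, hypervisor_cpu_ready_pct) -- illustrative values
    (45.0, 22.0),  # guest looks half idle, but its vCPUs queue for physical cores
    (45.0, 1.0),   # identical in-guest reading on a genuinely healthy host
]

READY_THRESHOLD_PCT = 10.0  # rule-of-thumb contention threshold; tune per environment

for util, ready in samples:
    if ready > READY_THRESHOLD_PCT:
        verdict = "contended: a real problem the in-guest agent cannot see"
    else:
        verdict = "healthy: in-guest utilization is a fair proxy here"
    print(f"in-guest util={util:.0f}%  cpu-ready={ready:.0f}%  ->  {verdict}")
```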

Quest Software was the first large vendor to react intelligently to this new reality, turning the VKernel acquisition into a standalone business unit instead of doing the usual thing, which would have been to incorporate the VKernel features into vFoglight. In fact, Quest Software did exactly the opposite and incorporated the vFoglight assets into the VKernel product line. CA, by contrast, completely bungled the Nimsoft acquisition by running off all of the people who made Nimsoft successful and by tightly integrating Nimsoft into the CA “borg”.

What Should Enterprises Do?

The first thing that every enterprise should do is realize that the management products you have purchased and deployed to manage your existing physical assets are not going to be terribly helpful in your virtualized and cloud-based future. You do not need to rip these products out, but you should certainly not buy any more of them, and you should certainly start to put tremendous pressure on the legacy vendors that are demanding large maintenance fees for products that have ceased to be strategic to you.

The next step is to design a new architecture of management solutions around what will likely be your future operating state. Some good assumptions here are:

  • A substantial part of your environment will be virtualized
  • You will likely end up with more than one hypervisor, perhaps one like vSphere for the demanding workloads and a less expensive one like Hyper-V for the low-hanging fruit
  • You will likely have some workloads running in public clouds, and might even have some workloads moving around between clouds or between clouds and your data centers
  • Having multiple infrastructures is fine. Having multiple hypervisors is fine. Having multiple hypervisors and multiple public clouds is fine. Having a separate management stack for each of these is not fine. Solutions like nworks from Veeam, which lets you use SCOM to manage vSphere, and HotLink, which lets you use vCenter to manage Hyper-V, XenServer, and KVM, are the first steps toward a sane solution to this problem.
  • So the first requirement for your new management stack is that it must work across N hypervisors, your virtualized environment, your private cloud, and whatever public cloud providers you contract with
  • Your infrastructure management strategy must be built around the reality that there will be infrastructure that you own and control and infrastructure that is completely out of your control
  • Your application management strategy must be built around the reality that your applications will be moving around between your virtual data center, your private clouds and the public clouds that you choose to use
  • Keeping your legacy tools for your legacy environment is fine, but do not assume that they can manage your new environment. Rather, consider starting over with a brand new management framework like Zenoss, and then complement it with other infrastructure and application performance management solutions.
  • Everything needs to get much easier to buy, easier to deploy, easier to maintain, more self-sufficient and self-configuring, and, most importantly, less expensive to procure and own. On this front, legacy management tools (and home-grown scripts) need to be replaced with next-generation systems like Puppet, Chef, ScaleXtreme, or rPath (a sketch of the declarative model behind such tools follows this list).
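
What separates tools like Puppet and Chef from home-grown scripts is the declarative, idempotent model: you describe the desired state, and the tool converges to it only when reality differs, so runs are safe to repeat. The sketch below imitates that model in plain Python; the file path and resource shape are illustrative, not any tool’s actual API.

```python
# A toy "desired state" engine in the spirit of Puppet/Chef: declare what a
# resource should look like, then converge only if the live state has drifted.
import os

desired_state = [
    {"path": "/tmp/demo-app.conf", "content": "port=8080\n"},  # hypothetical config
]

def converge(resources):
    for res in resources:
        path, content = res["path"], res["content"]
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current == content:
            print(f"{path}: already in desired state, nothing to do")  # idempotent
        else:
            with open(path, "w") as f:
                f.write(content)
            print(f"{path}: converged to desired state")

converge(desired_state)  # safe to run repeatedly, unlike most ad hoc scripts
```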

Virtualization and Cloud Computing Management Software Reference Architecture

When you start over, the first thing you should create for yourself is an architecture that guides you in terms of what categories of products to evaluate and purchase. An example of such a reference architecture is below.

Virtualization Management Reference Architecture

We will be writing extensively about each of the vertical and horizontal layers in this architecture, and about which vendors to consider in each layer, in upcoming posts.

The End Goal in Virtualization and Cloud Management

VMware has gotten the vision in this regard 100% correct. The goal is to replace “monitor, alert, have the human fix it” with “monitor, fix it, and notify the human”; a minimal sketch of that loop follows the list below. To achieve this goal, the following must occur:

  1. We must come to agreement on a new definition of infrastructure and application performance. Resource utilization is not it. At the infrastructure level it will need to be based upon end-to-end latency. At the application level it will need to be based upon end-to-end transaction response time. Vendors like Virtual Instruments and Xangati that focus upon infrastructure latency should be your starting points for understanding infrastructure performance. New APM vendors like New Relic, AppDynamics, ExtraHop Networks, BlueStripe, Correlsense, and AppFirst should be your starting point for understanding application performance.
  2. We have to end (or at least minimize) the “blamestorming” meetings: the meetings where the application owner is screaming that the application is slow or broken, and everybody comes in with the reports from their point tools and says that all of their lights are green. Per the point above, telling an application owner with a slow application that CPU utilization is normal is pointless.
  3. We are going to have to get a lot better at automated root cause analysis. We are either going to have to be able to deterministically link transaction performance to infrastructure latency (which cannot be done today), or we are going to have to deploy automated performance analytics solutions from vendors like VMware (which put the Integrien technology into vCenter Operations), Netuitive, or Prelert. Relying upon humans to connect the dots is just not going to work in the new world, because the dots are going to be changing too frequently, and will be too distributed, for humans to keep up with the relationships.
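
To make the “monitor, fix it, notify the human” goal concrete, here is a minimal sketch under stated assumptions: performance is defined as end-to-end response time (per point 1, not resource utilization), the baseline numbers are invented, and `remediate` and `notify_human` are placeholders for whatever automation and alerting you actually run.

```python
# Sketch of "monitor, fix it, notify the human": detect a deviation from a
# learned response-time baseline, remediate automatically, then tell a person.
import statistics

baseline_ms = [210, 198, 225, 205, 215, 220, 200]  # illustrative history
mean = statistics.mean(baseline_ms)
stdev = statistics.stdev(baseline_ms)

def remediate():
    # Placeholder: e.g., scale out the application tier or restart a worker pool.
    print("remediation: scaling out the application tier (placeholder)")

def notify_human(message):
    print(f"notify: {message}")  # placeholder for email/pager integration

def check_and_remediate(current_response_ms):
    # Flag when end-to-end response time drifts far outside its learned band.
    if current_response_ms > mean + 3 * stdev:
        remediate()  # note the order: fix first...
        notify_human(f"response time {current_response_ms}ms breached baseline "
                     f"({mean:.0f}±{stdev:.0f}ms); remediation was applied")  # ...then notify

check_and_remediate(480)  # well above three sigma over the baseline, so this triggers
```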

Conclusion

Virtualization and Cloud Computing introduce challenges to how systems and applications must be managed, challenges that are not met by legacy solutions and that, furthermore, break those legacy solutions. New business models for buying and selling software make the “golf course sale” approach employed by the legacy vendors an economic relic. These factors combine to create a perfect storm in virtualization management, creating challenges and opportunities for customers and vendors alike.

