One of the most interesting things about a virtualization project is what happens to the management agents and products used to monitor and manage the physical servers as those servers become virtual. Because VMware does an excellent job of making available, via its vCenter APIs, all of the data that traditional management products collect either through agents or over the wire via protocols like WMI, most virtualization project leaders and administrators choose to uninstall legacy management solutions as part of the process of virtualizing servers.
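
To make that concrete, here is a minimal sketch of agentless data collection against the vCenter API, assuming the open-source pyVmomi Python SDK and a hypothetical vCenter hostname and credentials. Real monitoring products do far more, but the point is that none of this requires an agent inside the guest OS:

```python
# Minimal sketch: agentless collection of VM statistics via the vCenter API.
# Assumes the pyVmomi SDK (pip install pyvmomi) and hypothetical
# vCenter hostname and credentials -- adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="monitor@vsphere.local",
                  pwd="secret",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    # Walk every VM in the inventory without touching a guest OS.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        stats = vm.summary.quickStats
        print(f"{vm.name}: CPU {stats.overallCpuUsage} MHz, "
              f"guest memory {stats.guestMemoryUsage} MB")
    view.Destroy()
finally:
    Disconnect(si)
```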

This gives rise to the question of how to monitor and manage a virtual environment, or, as most enterprises phrase it, "How do we monitor VMware?" That question has spawned an entirely new infrastructure monitoring industry, focused upon the dynamic and abstracted virtualized environment characterized by VMware vSphere. It has also given rise to a new buyer for virtualization monitoring solutions: the team that owns the virtualized environment, often personified by the person who owns the VMware environments in the enterprise.

What is most interesting when talking to enterprises that are buying monitoring and management solutions for VMware is how infrequently (as in never) products from traditional management vendors like CA, IBM, HP, and BMC even come up. The bottom line is that when enterprises look for solutions in this space, they look to Quest Software (vFoglight), Veeam (with Veeam ONE and nworks), VMTurbo, vKernel, Xangati, SolarWinds, ScaleXtreme, Confio Software, and Zenoss. Equally interesting is that when these vendors compete for monitoring and management business in vSphere enterprises, they compete with VMware vCenter Operations, but they never see CA, IBM, HP, or BMC.

Therefore it is fair to conclude that to date the "big four" have completely missed the virtualization monitoring and management market. Not only have they missed it, and missed it badly, but they have failed to address both the technical changes and the business model changes that virtualization and cloud computing demand in the management space.

On the technical side, the issue is that virtualization and cloud computing create new requirements (dealing with dynamic systems and abstracted resources) and break products that relied upon agents inside of Windows and Linux operating systems, and upon static, pre-configured systems, to understand relationships and dependencies in the infrastructure. This starts with invalidating the CMDB that most legacy vendors rely upon as the foundation of their understanding of the environment, and it affects every other aspect of those products as well. What this means for the big four at a technical level is that buying a startup and gluing its product onto an outdated and irrelevant legacy management solution neither improves the viability of the legacy solution nor improves the marketability of the startup's product. In other words, to address the virtualization and cloud computing markets, the big four are going to have to do what a startup does: start over with a clean sheet of paper (and assume that most, if not all, of the legacy assets are worthless).
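
One illustration of why a static dependency map breaks: vMotion moves VMs between hosts at runtime, so host-to-VM relationships change continuously. Here is a minimal sketch, again assuming pyVmomi and reusing the hypothetical connection `si` from the earlier example, that surfaces those relationship changes from the vCenter event stream:

```python
# Sketch: why a static CMDB breaks under virtualization. VMs move between
# hosts at runtime (vMotion), so relationships must be rediscovered
# continuously. Reuses the hypothetical pyVmomi connection `si` from above.
from pyVmomi import vim

content = si.RetrieveContent()
filter_spec = vim.event.EventFilterSpec(eventTypeId=["VmMigratedEvent"])
for event in content.eventManager.QueryEvents(filter_spec):
    # Each event records a host-to-VM relationship change that a
    # pre-configured dependency map would silently miss.
    print(event.createdTime, event.fullFormattedMessage)
```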

The problem for the big four is not just a technology problem; it is also a business model problem. The big four have survived for decades on the idea of selling enormously expensive and complicated software to enterprises: high prices, high software maintenance costs, very long implementation timeframes (with lots of consulting to get the product implemented), and a high cost of ownership driven by the need for ongoing consulting to keep the product working.

Vendors focused on the virtualization and cloud computing spaces succeed with products that are easy to try, inexpensive to buy, inexpensive to own, and that provide near-instant time to value. This is another reason why acquiring their way out of this mess is not going to work for the big four. Gluing easy-to-try, easy-to-buy, easy-to-own software onto legacy software that is expensive and hard to make work will not result in a competitive product.

Of the big four, CA is the only vendor that has at least tried to refresh its technology portfolio. CA bought NetQoS (which had the potential to get CA into the virtual infrastructure performance business, but CA blew it) and more recently Nimsoft (which had a viable business model built around selling integrated monitoring and service-level management at a reasonable price to service providers and mid-sized enterprises).

Now on to why and how CA is the first of the big four to address this issue, in what will be for all of the big four a self-destructive exercise. At CA World this week, the CEO of Nimsoft, Chris O'Malley, argued that IT organizations need to become "service brokers" and should use the Nimsoft software to handle the resulting IT monitoring, IT service desk, and cloud user experience issues. CA is obviously now suggesting that Nimsoft is the product around which enterprises should manage their virtualized and cloud-based systems.

Notwithstanding the fact that Nimsoft is far from a technical and market leader for solutions that address these use cases (see our Resource and Availability Management white paper and our Managing Applications Performance for Virtualized and Cloud Hosted Applications white paper to see who is), CA has now set in motion an economic scenario that may well lead to its destruction and the destruction of the other three of the big four.

The economic scenario is simple. If you are an enterprise customer of CA, you may well be paying $1M or $2M a year in ongoing costs to keep the maintenance and support of your CA monitoring products current and functional. The same is true if you are a customer of an enterprise management solution from HP, BMC, or IBM. Customers of modern products like Nimsoft, Zenoss, VMTurbo, Xangati, Quest, Veeam, Confio Software, or SolarWinds pay one-tenth of what it costs to manage an environment with legacy solutions from the big four.

So CA has now started the process of devaluing its own customer base by a factor of 10. A customer adopting Nimsoft will pay CA one-tenth of what that customer would pay if it kept using and paying for CA's traditional legacy enterprise solutions. At least CA has a strategy for keeping 10% of its revenue; IBM, HP, and BMC seem completely clueless in this regard.

So if you are an enterprise customer of VMware and you are looking for a monitoring and management solution for your virtualized environment, what should you do? Here are some suggestions:

  1. If you are using any products from CA, IBM, BMC, or HP to manage physical servers, do not carry those products forward into your virtualization environment. That means uninstalling them as you virtualize those servers.
  2. Read our white papers to create a requirements list that your new monitoring and management solutions must meet. Start from scratch. You may end up throwing out everything (all 150 products) that you own to manage your physical environment.
  3. Insist that your infrastructure management software costs fall in proportion to the cost of owning physical and virtual servers. If you spend $X to manage N physical servers and, due to consolidation, end up with one-tenth the number of physical servers, your cost to own infrastructure management software should also fall by a factor of 10. In other words, as your physical server count goes down, your cost of buying and maintaining management software should go down proportionately (a quick back-of-the-envelope sketch follows this list).
  4. Insist that you be allowed to use any management solution in production for at least 30 days, with no limitations on functionality or the scale of the environment, before you buy it.
  5. Insist that the product work out of the box. It should be told about your vCenters or other virtualization platform consoles and then self-configure. If a consultant comes in the box with the management solution, send the box back to the vendor.
  6. If you buy an APM solution for your virtual environment, insist that it be affordable, that it discover your applications and their topology, and that it work for all of your applications. This knocks all of the legacy APM solutions out of contention. See our APM white paper on how to address these needs.
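
To put numbers on suggestion 3, here is a trivial back-of-the-envelope sketch; all figures are hypothetical:

```python
# Back-of-the-envelope sketch of suggestion 3 (all numbers hypothetical).
# If management software spend tracks physical server count, a 10:1
# consolidation ratio should cut the management bill by 10x.
def expected_mgmt_cost(current_cost, servers_before, servers_after):
    """Scale the management-software spend with the physical server count."""
    return current_cost * (servers_after / servers_before)

# 500 physical servers consolidated 10:1 onto 50 virtualization hosts:
print(expected_mgmt_cost(1_000_000, 500, 50))  # -> 100000.0
```
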
Summary
Virtualization and cloud computing have changed the requirements for management solutions in a way that no previous innovation in the history of our industry ever has. Previous innovations created new requirements but did not break existing management approaches or business models. Virtualization breaks both: the legacy approaches to managing applications and systems, and the business models under which those legacy solutions are sold. The revolution has only just started.

Bernd Harzog

Bernd Harzog is the Analyst at The Virtualization Practice for Performance and Capacity Management and IT as a Service (Private Cloud).

Bernd is also the CEO and founder of APM Experts, a company that provides strategic marketing services to vendors in the virtualization performance management and application performance management markets.

Prior to these two companies, Bernd was the CEO of RTO Software, the VP of Products at Netuitive, a General Manager at XcelleNet, and Research Director for Systems Software at Gartner Group. Bernd has an MBA in Marketing from the University of Chicago.

9 comments for "CA Starts the Race To Self-Destruction Among the 'Big Four' in Virtualization Management"

  1. November 15, 2011 at 8:21 AM

    One of the reasons that the big 4 may be doing better in this space than you suggest is that there is a class of customer where the deal is done based on a long-standing relationship between executives rather than on capabilities. The players you list as being good in the space often don't even know that the opportunity exists, let alone get invited to participate. I don't like it, but I recognize that it happens.

  2. Jan Klincewicz
    November 15, 2011 at 9:59 AM

    A very bold and thought-provoking article.

  3. Toshan
    November 15, 2011 at 10:50 AM

    Good perspective, but it sounds like the writer might be biased towards the sponsors. I think large enterprises have years to go before they can make the leap the way small startups can, and these products will keep providing value for years to come.

  4. Bharzog
    November 15, 2011 at 6:06 PM

    Hi Mike,

    In the enterprises that I talk to, it is IBM, HP, CA and BMC who are not invited to the table by the virtualization team. I agree that the smaller vendors are often not invited to the enterprise monitoring table – but that is not where the virtualization management decisions in many accounts are being made these days.

    Bernd

  5. Bharzog
    November 15, 2011 at 9:35 PM

    Hi Toshan,

    Here is what we do at TVP. We go out of our way to cultivate analyst relationships with the vendors that have the best solutions in the virtualization and cloud computing spaces. Some of those relationships lead to sponsorship (because the vendor sees the value in being associated with our community) and some do not.

    Bernd

  6. Owen Cole
    November 16, 2011 at 6:16 AM

    Virtualisation is a vehicle, not a destination. If you had an expensive and complex vehicle, would you want to know the oil pressure, the revs, the gear position, and the amount of fuel... or would you want something that simply tells you that you don't appear to be moving forward? The old approaches of agents, embedded code, stack sampling, and synthetic transactions all have their place... but the fact that cannot be denied is that all of the information that shows the application performance is on the wire...

  7. rick parker
    November 18, 2011 at 2:34 PM

    There is also uptime software, but this is a fairly complete list. Monitoring a virtualized/cloud infrastructure becomes much more critical because more applications are running on, and dependent on, the same hardware. There are also hypervisor-specific critical performance limits, like VMware CPU ready, that I think can only be monitored with a hypervisor monitoring system. My current preference is vFoglight, due to its email alerting and reporting functionality, but obviously there are quite a number of criteria that need to be evaluated. I am working on a formal Enterprise Monitoring Architecture to act as a checklist to verify everything is being monitored as it needs to be. @parkercloud @fluidcomputing
