Articles Tagged with AppDynamics

AppDynamics has just raised $50m and New Relic has just raised $80m, both in preparation for going public. The legacy APM vendors are about to have a serious problem. These funding rounds prove that some of the smartest investors in the world now believe that virtualization, cloud computing, new languages, and dynamic runtime environments combine to create both a brand new set of requirements for a relevant management stack and the opportunity for a brand new set of vendors to build the platforms and foundations of that stack.
Legacy management software vendors like IBM, HP, BMC, and CA are in deep trouble. They are in trouble across their entire portfolio of management solutions for two simple reasons: their products are not suited to the new dynamic and distributed IT environment, and the way in which they sell and market those products is inconsistent with how the new buyers of management software want to buy them. A great example of the trouble that legacy vendors are in is how CA and its APM solution (Introscope) stack up against modern solutions like those from New Relic, AppDynamics, and Compuware/dynatrace.
We all pretty much know that we can buy Infrastructure as a Service (IaaS), development/runtime Platforms as a Service (PaaS), Software as a Service (SaaS), Security as a Service, and Cloud Storage as a Service, among other things – but we can also buy monitoring as a service, at both the infrastructure level and the application level. This is an intriguing idea, and one that is rapidly gaining traction. However, Monitoring as a Service (MaaS) carries with it some unique benefits along with some trade-offs, especially when evaluated against on-premise solutions.
One of the important questions that we should all frequently ask ourselves is, "How will virtualization and cloud computing be different this year and next year than they have been in the past?" One of the answers to that question involves the kinds of applications that you are virtualizing and/or putting in clouds (public or private). The short version of the answer is that the applications that are left to virtualize are, for the most part, very different from the applications that have been virtualized to date.
When VMware announced its new management strategy (monitor – fix automatically – notify the humans) at VMworld Las Vegas, that strategy was incomplete. It was incomplete because what needs monitoring to ensure service quality is the applications that deliver those services. At VMworld Europe, VMware completed the strategy by announcing vFabric Application Performance Manager (APM), and clearly tying issues with applications to automated remediation in the infrastructure.
For quite some time we have taken the position that, in order for the next 60% of workloads and applications to get virtualized, the staff operating the virtual environment will have to take responsibility for the performance and availability of the applications running on that infrastructure. The logic behind this is simple. If you want someone who owns a performance-critical application to give up their dedicated hardware and move into a shared service environment, you are going to have to guarantee the performance of their application in order to be allowed to virtualize it.