In our series of posts on the reference architecture for the software-defined data center (SDDC) and the cloud, we make the case that the requirements for managing an SDDC and the cloud differ so much from the requirements for managing dedicated physical hardware that they will be met by new vendors rather than by legacy management vendors.
What is So Different about Managing the SDDC and the Cloud?
To recap, here is what is so different about these new environments:
- Agile development and DevOps are creating an unprecedented rate of change at the application layer. New applications are being put into production, and existing ones enhanced, at a pace never seen before.
- The explosion at the application layer is being enabled not just by new processes (Agile and DevOps), but also by new languages and platforms such as PHP, Ruby, Python, and Cloud Foundry.
- Previously monolithic applications are being broken into components and deployed in a scaled-out manner to meet demand.
- The environment is becoming highly distributed across mixtures of private clouds, hybrid clouds, and public clouds.
- Workloads are no longer dedicated to physical servers, and you often have no guarantee as to which other workloads share a server with yours.
- CPU and memory are already virtualized. Networking and storage are headed that way.
New Requirements for Managing the SDDC and the Cloud
If you are an enterprise shopping for management solutions for your virtualized data center, your private cloud, your hybrid cloud, or your public cloud, you should focus on solutions that meet the following requirements:
- You should be able to try the product for free, in production, in your own environment, for long enough to convince yourself that it meets your needs.
- The product should not require professional services to implement or maintain. Modern environments change too quickly to allow for a solution that requires manual intervention to reconfigure it as the environment changes.
- The product should work across the virtualization and cloud environments that you intend to support in your enterprise. That means that if you intend to run VMware and Hyper-V in house, along with a hybrid cloud from a vendor and Amazon, each product in your management stack should support all of these environments. Having more than one virtualization platform or cloud is fine; having a separate management stack for each will cause you to repeat the mistakes (siloed management tools) that made management such a mess in the physical world.
- Your management tools should promote IT agility, business agility, and cost efficiency. They should expand the server-to-admin ratio and automate tasks that previously had to be performed manually, not require teams of expensive consultants to implement and maintain.
- The product should automatically discover whatever it needs in order to do its job. If it is an operations management solution, it should discover the physical and virtual resources in the environment and configure itself accordingly. If it is an application-level solution, it should automatically discover the applications it is supposed to manage.
- In general, the product should require close to zero configuration. Again, in a rapidly changing environment, no one has the time to manually reconfigure a management solution at any layer of the stack.
- If the product does monitoring of any kind, look for products that collect unique and valuable data about the aspect of the stack they monitor. A product that merely collects commodity data from the vSphere APIs and WMI will produce commodity results.
- Monitoring products should collect their data in as near a real-time, comprehensive, and deterministic manner as possible. The rate of change in the SDDC and the cloud requires a completely different approach to the collection of management data than was needed in the static physical world.
- The vendor should have a strategy to put its data into a common big data back end shared with other management vendors. Adoption of this concept is still very early, so few vendors actually have a big data back end. But the vendor should understand the value of having one, and of a data store shared among multiple vendors.
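To make the discovery and zero-configuration requirements above concrete, here is a minimal sketch of a collector that re-discovers its targets on every polling cycle and keeps a short rolling window of near-real-time samples. The names `MetricCollector`, `discover`, and `sample` are hypothetical, not any vendor's API; in a real product, `discover` might wrap the vSphere inventory APIs, while here it is simply a callable supplied by the caller.

```python
from collections import deque

class MetricCollector:
    """Illustrative sketch (not from any specific product): a collector
    that re-discovers its targets on every polling cycle instead of
    relying on a manually maintained inventory."""

    def __init__(self, discover, sample, window=60):
        self.discover = discover   # callable returning current target names
        self.sample = sample       # callable returning a metric for a target
        self.window = window       # samples retained per target
        self.history = {}          # target -> rolling window of samples

    def poll(self):
        targets = set(self.discover())              # rediscover every cycle
        for gone in set(self.history) - targets:    # drop torn-down targets
            del self.history[gone]
        for target in targets:                      # sample surviving/new ones
            buf = self.history.setdefault(target, deque(maxlen=self.window))
            buf.append(self.sample(target))
        return {t: list(b) for t, b in self.history.items()}
```

Because discovery runs inside the polling loop, a VM that is created or torn down between two cycles is picked up or dropped automatically; no one has to reconfigure the tool as the environment changes.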
The writing is on the wall. Managing the SDDC and the cloud will upend the management software industry. BMC has been taken private because it must go through changes to adapt to the new world, changes that cannot be made as a public company. IBM's revenues have declined for five straight quarters. CA has resorted to lawsuits against vendors (New Relic and AppDynamics) that have out-innovated it and that are walking away with the new APM market. To its credit, HP is the only one of the big four to have launched a brand new line of management solutions designed to meet the needs above. Even so, a new ecosystem of management software leaders will soon emerge, and the old order in the management software industry will be completely disrupted.
Bernd Harzog