Data Center Virtualization has spawned several entirely new categories and variants of management software. This is largely because data center virtualization alone was a large enough change to create new requirements that legacy management products could not meet. It also created a new constituency – the virtualization team – which proceeded to purchase management solutions that met its needs. This trend was facilitated by the “easy to try and easy to buy” business model that many of the new vendors of virtualization management solutions adopted. Out of this a new management software industry arose.
A New Management Stack for your Software Defined Data Center (SDDC)
If virtualizing CPU and memory was enough of a change to create an entirely new ecosystem of vendors (see our Solutions Showcase for a pretty good list), consider what the Software Defined Data Center (SDDC) is likely to do to data center management. Let’s start with why we would even want to virtualize networking and storage in the same manner that we have virtualized CPU and memory. The answer lies in the potential benefits and ramifications of the SDDC:
- Management of all of the resources (CPU, memory, networking, and storage) will be abstracted from their underlying hardware.
- Management of these resources will now be done in the data center virtualization platform (vSphere), and not in the respective underlying hardware.
- Unifying the management of these resources will make it much easier to collect all of the configuration that is required to support a workload into one place. If a specific workload needs a specific combination of CPU, memory, networking, and storage, all of that can now be set up and maintained in one place.
- The configuration of the virtual resources that support a workload can then follow that workload around. When you vMotion a VM (or an entire application system), the configuration of the resources required to support that workload in its new destination will follow the workload.
- It will therefore become much easier to change configurations as needs dictate.
- The ability to change configurations on the fly will make the entire data center more dynamic, resulting in an even faster pace of configuration changes to support the changes in workloads and how they run.
- Networking hardware and storage hardware will become progressively more commoditized, just as servers (compute and memory) have become commoditized. This will occur because all of the value associated with configuration and management will get sucked out of the hardware and into the layers of software that comprise the SDDC and the layers of software that manage the SDDC.
These factors will combine to create a set of requirements for SDDC management software that legacy products will not be able to meet. The scale of the SDDC and its rate of change will require that management software be completely redesigned to meet these requirements.
A SDDC Management Stack Reference Architecture
The diagram below proposes a way to think about how to build the management stack for your SDDC. Data Protection, Security, Operations Management, Infrastructure Performance Management, Application Performance Management, and Cloud Management will all need to be implemented across the entire SDDC. IT Automation and Self-Learning Analytics will need to be applied across each of the management layers. Critically, the entire new management stack will need to be supported by an underlying Big Data Architecture that can cope with the arrival rate and quantity of management data generated by the SDDC and the products that manage various layers in the SDDC.
We will go through how the SDDC drives the requirements for new products at each of the horizontal and vertical layers in the above diagram in subsequent posts. For now, let’s just consider some examples of why the SDDC will drive new requirements at each layer:
- Data Protection – If we are going to virtualize storage, we are going to abstract the management of storage from the underlying storage itself. But at the end of the day the data is still going to reside on a physical storage device of some kind. Therefore, the physical data protection policies are going to have to match up with how the storage is presented virtually to the workloads.
- Security – Today, many organizations implement a form of security by simply controlling which VLANs exist between various hosts and virtual machines, the idea being that host A simply cannot talk to host B; then, if something infects host A, that infection cannot spread to host B. But in the SDDC the network is virtualized, and VLANs are no longer the mechanism by which virtual connectivity is established. The same will be true for storage configuration. Therefore security software will have to be significantly improved to be able to monitor what is talking to what and to detect when inappropriate access is occurring. The frequency with which this will need to be monitored will all by itself create the need for a big data datastore to cope with the volume and arrival rate of the security logging data.
- Operations Management – In a system as dynamic as an SDDC, operations management solutions are going to have to collect resource utilization and configuration data with fine granularity and high frequency. Five minutes will be an eternity in the SDDC, and near one second granularity of management data collection will likely become necessary.
- Infrastructure Performance Management – In an SDDC, it will be impossible to infer the performance of the infrastructure by looking at its resource utilization. The only way to understand the performance of an SDDC will be to measure how long it is taking to do what is being asked of it. End-to-end infrastructure latency will become the metric by which data center performance is measured and understood.
- Application Performance Management – In an SDDC, it will also be impossible to infer the performance of applications from their resource utilization. In order for application owners to be willing to tolerate the operation of their application in a dynamic and shared environment like the SDDC, every production application (custom developed and purchased) will have to be instrumented for response time and throughput. A whole new generation of APM solutions that automatically discover applications, discover and map their topology, and measure end-to-end response time with zero manual configuration will be needed.
- Cloud Management – Cloud management solutions to date have done a great job of automating the deployment of tactical and transient workloads. In the SDDC, cloud management will be the layer of software that manages the deployment of every application in the environment. Think SAP in your private cloud.
- IT Automation – The only way for human beings to be able to keep up with a system as dynamic as the SDDC is for the humans to have a system that allows them to declare how they want the SDDC to be configured and provisioned and to have that system create that desired state and maintain it. This requires a declarative model for the automated configuration of every layer of the SDDC and a declarative model for the installation of every software component in the SDDC.
- Self-Learning Analytics – Just as humans are not going to be able to keep up with configuration and provisioning on a manual basis, there is no way that humans are going to be able to do manual root cause analysis when things go wrong. Self-learning analytics are going to have to replace the manual and time-consuming “blamestorming” meeting. This will likely be a welcome relief for the participants in those meetings and their management.
- Big Data Repository – In the SDDC, every layer of hardware underlying the SDDC, the SDDC software itself, and every layer of management software for the SDDC will be generating data at a rate that will demand a big data datastore just to keep up with and index the data so that bits from the various sources can in fact be compared to each other at the right time. The SDDC itself and each of the management software layers are going to have to feed this datastore. The self-learning analytics are going to have to consume this data and automatically and preemptively find problems long before the humans realize that they exist.
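To make the Data Protection point above concrete, here is a minimal sketch of checking that every virtual volume ultimately lands on a physical device covered by a protection policy. The volume and array names, and the flat mapping itself, are hypothetical illustrations, not any particular platform’s schema:

```python
# Hypothetical mapping: virtual datastores still land on physical devices,
# so protection policies must be checked against that mapping.
virtual_to_physical = {"vol-a": "array-1", "vol-b": "array-2"}

# Physical devices currently covered by a data protection policy.
protected_devices = {"array-1"}

def unprotected_volumes(mapping, protected):
    """Return virtual volumes whose backing device has no protection policy."""
    return [vol for vol, device in mapping.items() if device not in protected]

print(unprotected_volumes(virtual_to_physical, protected_devices))  # ['vol-b']
```

A real implementation would pull this mapping from the virtualization platform’s API rather than a static dictionary, but the reconciliation check is the same.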
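The kind of “what is talking to what” monitoring described under Security above can be sketched as a policy check over observed flow records. The host names and record format here are illustrative assumptions:

```python
# Allowed (source, destination) pairs declared by the operator.
ALLOWED_FLOWS = {
    ("web-01", "app-01"),
    ("app-01", "db-01"),
}

def audit_flows(flow_records):
    """Return every observed flow that the declared policy does not permit."""
    return [(src, dst) for src, dst in flow_records
            if (src, dst) not in ALLOWED_FLOWS]

observed = [("web-01", "app-01"), ("web-01", "db-01"), ("app-01", "db-01")]
print(audit_flows(observed))  # [('web-01', 'db-01')] – web tier reaching the database directly
```

At SDDC scale, the `observed` list becomes a continuous, high-volume stream, which is exactly why the bullet above argues for a big data datastore behind this check.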
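The Infrastructure Performance Management point above – measure how long the infrastructure takes to do what is asked of it, rather than inferring from utilization – reduces to wrapping each operation with a timer. A minimal sketch, using a stand-in operation in place of a real storage or network request:

```python
import time

def measure(op, *args):
    """Return (result, end-to-end latency in seconds) for one operation."""
    start = time.perf_counter()
    result = op(*args)
    return result, time.perf_counter() - start

# Stand-in for an infrastructure request (e.g. a storage write).
result, latency = measure(sum, [1, 2, 3])
print(f"completed in {latency:.6f}s")
```

The same pattern, applied at every layer and aggregated, yields the end-to-end infrastructure latency metric the bullet describes.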
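The declarative model described under IT Automation above can be sketched as a reconciliation loop: the operator declares the desired state, and the system computes the actions that move the actual state toward it. The tier names and counts are hypothetical:

```python
desired = {"web": 3, "app": 2, "db": 1}     # VMs per tier, as declared
actual  = {"web": 2, "app": 2, "cache": 1}  # what is currently running

def reconcile(desired, actual):
    """Return the actions needed to move actual state to desired state."""
    actions = []
    for name, count in desired.items():
        delta = count - actual.get(name, 0)
        if delta > 0:
            actions.append(("provision", name, delta))
        elif delta < 0:
            actions.append(("decommission", name, -delta))
    for name in actual:
        if name not in desired:
            actions.append(("decommission", name, actual[name]))
    return actions

print(reconcile(desired, actual))
# [('provision', 'web', 1), ('provision', 'db', 1), ('decommission', 'cache', 1)]
```

Maintaining the desired state is then a matter of running this loop continuously, which is what lets humans keep up with a system that reconfigures itself this often.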
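As a toy illustration of the Self-Learning Analytics point above, here is a rolling-baseline anomaly detector that flags metric samples far outside their recent norm. Real products use far more sophisticated models; the window, threshold, and CPU series below are illustrative assumptions:

```python
import statistics

def anomalies(series, window=20, threshold=3.0):
    """Flag indexes whose value is more than `threshold` standard deviations
    from the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev and abs(series[i] - mean) > threshold * stdev:
            flagged.append(i)
    return flagged

# A CPU-utilization series that oscillates normally, then spikes.
cpu = [49.0, 51.0] * 10 + [51.0, 90.0, 50.0]
print(anomalies(cpu))  # [21] – only the spike is flagged
```

The point of “self-learning” is that the baseline is derived from the data itself, so nobody has to hand-set a static threshold for every metric in the SDDC.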
Who is Going to Provide this SDDC Management Stack?
You are going to have at least three options for how to build your SDDC management stack:
- The single-vendor option – certainly VMware has every intention of being your one-stop management software vendor for your SDDC.
- The ecosystem option – One or more vendors may decide to become a “management platform” and then build an ecosystem of third party vendors around their platform. Microsoft has this model with SCOM, but SCOM itself is light years from being ready to be an SDDC management platform. Splunk may well decide to go down this road.
- The best-of-breed option – you will certainly have the option of assembling your own management stack for your SDDC. In subsequent posts we will provide you with short lists of vendors that you should consider for each of the vertical and horizontal layers in the diagram above.
The Software Defined Data Center (SDDC) will require a completely new management stack – one suited to the highly dynamic nature of an environment in which every key resource (CPU, memory, networking, and storage) is abstracted from its underlying hardware.