The Software Defined Data Center (SDDC) will be a highly dynamic environment, with configuration and resource allocation settings constantly changed by automation: the provisioning of workloads from service catalogs, the scaling of workloads in response to demand, and the migration of workloads across hosts for balancing and prioritization. At the same time, agile development means that applications are changing more quickly than ever before. Rapidly changing applications running on a rapidly changing software infrastructure will drive the need for SDDC Application Performance Management.
While VMware is still the undisputed leader in enterprise data center virtualization, it is also obvious that Microsoft has made, and continues to make, significant inroads into both the broader data center virtualization market and VMware’s own enterprise customer base. The general perception is that Microsoft Hyper-V is now “good enough” to run most production workloads, that it is close enough (or at parity) to vSphere in functionality and performance for customers to move workloads from vSphere to Hyper-V, and that vSphere is “expensive” while Hyper-V is “free.” So how will VMware win against Microsoft?
The OpenStack Summit this week continued to fan the flames of the software-defined data center. The software-defined data center is simply a term for replacing traditional data center hardware functionality with the same features implemented in software, running on commodity x86 servers. While software-defined approaches to data center features are at least nominally less expensive than their hardware counterparts, the real promise of the approach is flexibility and ease of management, with high levels of integration. Reconfiguring a network to support the security requirements of a new application becomes a matter of software and APIs. Expanding storage is simply a matter of adding another node with more storage attached, and the cluster compensates automatically. Even firewall rules and load balancer configurations can now be stored as templates alongside the applications, to be provisioned in minutes.
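To make the templating idea concrete, here is a minimal sketch of what "firewall rules and load balancer configurations stored as templates along with the application" could look like. All names here (`APP_TEMPLATE`, `provision`, the plan strings) are illustrative assumptions, not any real controller's API; a real SDDC platform would translate such a template into actual API calls.

```python
# Hypothetical application template that carries its network policy
# (firewall rules, load balancer settings) alongside the workload
# definition, so provisioning becomes a single software operation.
APP_TEMPLATE = {
    "name": "web-tier",
    "instances": 3,
    "firewall_rules": [
        {"port": 443, "protocol": "tcp", "source": "0.0.0.0/0", "action": "allow"},
        {"port": 22, "protocol": "tcp", "source": "10.0.0.0/8", "action": "allow"},
    ],
    "load_balancer": {"algorithm": "round-robin", "health_check": "/healthz"},
}

def provision(template):
    """Render the template into the ordered list of operations an SDDC
    controller would perform; returned here as strings for inspection."""
    plan = [f"create-vm {template['name']}-{i}" for i in range(template["instances"])]
    plan += [
        f"fw-allow {r['protocol']}/{r['port']} from {r['source']}"
        for r in template["firewall_rules"]
        if r["action"] == "allow"
    ]
    plan.append(f"lb-configure {template['load_balancer']['algorithm']}")
    return plan

plan = provision(APP_TEMPLATE)
```

The point of the sketch is that the entire network posture of the application lives in one versionable artifact: redeploying it elsewhere, or for a new tenant, is just re-rendering the same template.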
In “Building a Management Stack for Your Software Defined Data Center”, we presented a reference architecture for how one could build a new management stack for the Software Defined Data Center. In “SDDC Operations Management”, we discussed why the SDDC will require a new and different class of Operations Management solutions and laid out a process for selecting vendors to fill the Operations Management role.
One aspect of the SDDC that does not get a lot of attention is Data Protection; the focus instead tends to fall on SDN and automation. Yet there is a clear marriage between Data Protection and the SDDC that needs to be reflected in any architecture. As with all things, we start with the architecture: our SDDC architecture should include data protection, but what data are we really protecting? Within the SDDC there are three forms of data: tenant, configuration, and automation. Without all three, we may not be able to reload our SDDC after a disaster. What really are these three types of data, what is required to capture each of them, and how can we add data protection into the SDDC cleanly?
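The three-way split of SDDC data can be sketched as a simple readiness check: a disaster recovery plan is only complete when every one of the three data types has a protection job covering it. This is a hypothetical sketch under assumed names (`dr_ready`, the job names), not a real backup product's API.

```python
# The three forms of data the article identifies: tenant data (the
# workloads themselves), configuration (SDN/controller state), and
# automation (blueprints, workflows, service catalog definitions).
REQUIRED_TYPES = {"tenant", "configuration", "automation"}

def dr_ready(backup_jobs):
    """backup_jobs: iterable of (job_name, data_type) pairs.
    Returns (ready, missing_types): ready only if all three types
    of SDDC data are covered by at least one protection job."""
    covered = {dtype for _, dtype in backup_jobs}
    missing = REQUIRED_TYPES - covered
    return (not missing, missing)

jobs = [
    ("vm-image-backup", "tenant"),           # tenant workloads and their data
    ("controller-export", "configuration"),  # SDN/controller configuration
]
ready, missing = dr_ready(jobs)
# Not yet ready: the automation data has no protection job,
# so the SDDC could not be fully reloaded after a disaster.
```

Adding a job such as `("blueprint-export", "automation")` would flip the check to ready, which is exactly the architectural point: protecting tenant data alone is not enough to rebuild an SDDC.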
Data Center Virtualization has spawned several entirely new categories and variants of management software. This is largely because data center virtualization alone was a large enough change to create new requirements that legacy management products could not meet. This created a new constituency for management solutions—the virtualization team—which proceeded to purchase management solutions that met its needs. The trend was facilitated by the “easy to try and easy to buy” business model that many of the new vendors of virtualization management solutions adopted. Out of this, a new management software industry arose.