Is the Software Defined Data Center the Future?

VMware purchased Nicira, backed the OpenFlow community, and is now touting software defined data centers (SDDC). But what is a software defined data center? Is it just virtualization, or cloud with a software defined network? Or is it something more than that? Given the heavy automation and scripting of most clouds, do we not already have SDDC? If not, where are we going with this concept? What does SDN add to the mix?

Let us look at cloud as it is today: in essence, we can define a set of workloads, networking, and security constructs all within a simple interface. We can pre-define these workloads and allow a user to select from a menu and deploy them. All of this is automatically performed and orchestrated, including the setup of networking, security controls, management registrations, etc. Granted, defining the workloads and automation tasks is a manual (but still software-driven) task performed by IT. Yet IT presents this to the user populace as a service (ITaaS). Some of these services go so far as to let you select the various applications to deploy and will build the underlying VMs for you. Yet it is still ITaaS, and the IT department must pre-define each workload or application.

To me this is a software defined data center. I can literally spin up a new cloud presence within a cloud provider within moments. Is that not an extension of my data center, or a hybrid cloud construction? I can also do the same within my enterprise data center. But here is the rub: what happens if I suddenly use up all my capacity in one location? Can I automatically spin up the necessary workloads for peak utilization and spin them back down when the peaks are over? With the proper automation you can today, but defining what automation is required remains a very manual process.
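The spin-up/spin-down behavior described above can be sketched as a simple threshold-based rebalancer. Everything here is illustrative: the `Site` class, the utilization thresholds, and the "spin up" calls stand in for whatever real cloud API the automation would drive; no actual provider interface is assumed.

```python
# Hypothetical sketch: burst workloads into a cloud provider at peak
# utilization and spin them back down afterward. Names are invented.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    capacity: int        # maximum workloads this site can host
    workloads: int = 0   # workloads currently running

    def utilization(self) -> float:
        return self.workloads / self.capacity

def rebalance(primary: Site, burst: Site,
              high: float = 0.8, low: float = 0.3) -> None:
    """Spin a workload up in the burst site at peak, down when the peak passes."""
    if primary.utilization() > high and burst.workloads < burst.capacity:
        burst.workloads += 1   # would call the provider's provisioning API
    elif primary.utilization() < low and burst.workloads > 0:
        burst.workloads -= 1   # would tear the burst workload back down

enterprise = Site("enterprise-dc", capacity=10, workloads=9)
provider = Site("cloud-provider", capacity=100)
rebalance(enterprise, provider)
print(provider.workloads)  # 1: one burst workload spun up at 90% utilization
```

The point of the sketch is the second half of the paragraph: the *mechanism* is easy, but deciding the thresholds, the sites, and what counts as a workload is exactly the manual definition work that still falls to IT.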

Usually that automation is specific to an application or cloud and does not work well across clouds or data centers, so perhaps we need something more. All administrators have scripts, and via tools like Puppet those scripts can be generalized for non-specific use cases. However, is that what we need to make SDDC work: a library of generalized or even specific automation scripts and tools? I think not. I think we are headed in a very different direction.

Software Defined Data Center

I believe the software defined data center of the future will be based solely on business logic, codified in a form that a tool can pick up and convert from the natural language of business logic into a Backus–Naur Form (BNF) that can then be parsed and compiled by the automation suite in use. That compiled form would then be the definition for a data center. In fact, the term data center will change to imply any containerized collection of systems, networks, security, etc., and not the physical form of a data center.

Virtualization has moved us along so that the concept of virtual data centers is now commonplace. A Software Defined Data Center is a virtual data center plus automation in an easy-to-use form. It may not be easy to understand, but using it is as simple as clicking a button.

Think of it this way: the business decides it is launching a new product, so what does it need? Some form of advertisement, inventory and order control, and the all-important web and social media presence. But perhaps there are some specifics that matter, such as anything related to shipping cross-country and customs inspections. Tracking an order to its final destination would also be important. So how does the business achieve this today?

For the compute side, there is quite a bit of tweaking required, as we need to select shipping agents, check on tracking, handle customs, place orders, and finally take orders from the consumer. There are two parts to this type of SDDC. The first is to parse the business logic and translate it into something computer systems can understand; i.e., take the natural language and translate it into a BNF. The second is to take the BNF and run it through its paces to create the appropriate secure networks and workloads, and perhaps even start up manufacturing.
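Those two steps can be sketched with a toy example: a constrained "business logic" sentence is matched against a tiny grammar, then "compiled" into a list of workloads to deploy. The grammar, service names, and workload names below are all invented for this sketch; a real suite would handle far richer language and emit real provisioning calls.

```python
# Toy two-step pipeline: parse constrained business language against a tiny
# BNF, then compile the result into a deployment spec. All names invented.
import re

# <request> ::= "launch" <product> "with" <service> { "and" <service> }
# <service> ::= "web presence" | "order tracking" | "customs handling"
SERVICES = {
    "web presence": "web-frontend",
    "order tracking": "tracking-api",
    "customs handling": "customs-worker",
}

def parse(request: str) -> dict:
    """Step 1: parse the request; step 2: compile services into workloads."""
    m = re.fullmatch(r"launch (\w+) with (.+)", request)
    if not m:
        raise ValueError("request does not match the <request> grammar")
    product, tail = m.groups()
    services = [s.strip() for s in tail.split(" and ")]
    unknown = [s for s in services if s not in SERVICES]
    if unknown:
        raise ValueError(f"unknown <service>: {unknown}")
    # "Compile": map each parsed service to the workload implementing it.
    return {"product": product,
            "workloads": [SERVICES[s] for s in services]}

spec = parse("launch widget with web presence and order tracking")
print(spec["workloads"])  # ['web-frontend', 'tracking-api']
```

Even this toy shows where the hard work lives: the grammar must anticipate everything the business might ask for, which is why a standardized, vendor-neutral BNF matters so much in the next section.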

To get from where we are today to something like what I have described, there is still an awful lot of work to do. We first need to convert all our point automation solutions into generalized tools, and someone needs to step up and combine everything into one easy-to-use package that can also plug in new workloads, tools, languages, etc. as needed. In addition, the BNF should be standardized across all vendors so that definitions are easier to write, transportable between clouds, and repeatable, and contain all the compliance and security precautions required by the data the business uses.

ITaaS will change over time, but getting there will not leave IT out of the equation, nor will a fully implemented SDDC. Someone has to write the plugins, security controls, customer-facing frameworks, and templates, as well as create new workloads; most importantly, someone is needed to fix things when they break.
