Someone suggested “the next-generation data center” to me as a topic for an upcoming panel discussion. Here are my thoughts on the subject.
A few weeks ago, Hany Michael published a blog post on his NSX lab network. Embedded within it is one of the most brilliantly clear diagrams of a very complex system I've ever seen; it takes real skill to achieve that level of clarity. What struck me, though, is the sheer complexity Hany conveys in this document, and how that complexity is inherent to the SDDC. One could argue that the diagram shows the smallest possible instance of an SDDC (although it skims over storage), which is not surprising, as it's an SDDC lab. The diagram is inherently VMware focused, but it could easily be applied to Hyper-V or OpenStack: each function in it would still be necessary, although some would swap out or merge. For that reason, this article will be quite VMware focused as well.
For the last eighteen months, VMware has been pushing NSX as the third pillar of its software-defined data center (SDDC). VMware promotes three big selling points for NSX: taking control of the network, automation and orchestration, and microsegmentation. The first two are standard SDDC fare: pull the function into software, abstract where necessary, and orchestrate to gain operational advantage; then break down silos to allow a more agile approach. But the last, microsegmentation, is worth pausing on for a moment.
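Microsegmentation moves the firewall boundary from the network perimeter to the individual workload: every east-west (VM-to-VM) flow is checked against a default-deny policy. The following is a minimal conceptual sketch of that model, not NSX's actual API; the group names, rule format, and function are invented for illustration.

```python
# Conceptual sketch of microsegmentation: a default-deny policy evaluated
# per workload rather than at the network perimeter. Group names and rules
# are hypothetical; NSX expresses this through its distributed firewall.

# Each workload belongs to a security group, regardless of its IP or VLAN.
WORKLOAD_GROUPS = {
    "web-01": "web",
    "app-01": "app",
    "db-01": "db",
}

# Whitelist of permitted east-west flows: (source group, dest group, port).
ALLOW_RULES = {
    ("web", "app", 8080),  # web tier may call the app tier
    ("app", "db", 5432),   # app tier may query the database
}

def is_allowed(src_vm: str, dst_vm: str, port: int) -> bool:
    """Default deny: a flow passes only if an explicit rule permits it."""
    src = WORKLOAD_GROUPS.get(src_vm)
    dst = WORKLOAD_GROUPS.get(dst_vm)
    return (src, dst, port) in ALLOW_RULES

print(is_allowed("web-01", "app-01", 8080))  # True: explicitly allowed
print(is_allowed("web-01", "db-01", 5432))   # False: web cannot reach the db directly
```

The point of the model is that compromising one VM grants no lateral movement: even two VMs on the same segment cannot talk unless a rule explicitly says so.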
There are three pillars to the software-defined data centre (SDDC): software-defined compute, software-defined storage, and software-defined networking. Without any one of these three, the whole edifice of the data centre falls down. We build all three to be resilient, “designed for failure,” and robust. Each can be built and rebuilt from scripts that are stored in distributed version control systems. But at the bottom of every application stack in our SDDC, there is a database or file store that cannot—by definition—be re-created from scripts. This is the core data that we mine and make profit from. What happens if (or when) the edifice collapses? How is that core data protected, and is traditional backup up to the task?
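The distinction this paragraph draws can be made concrete: everything except the core data comes back by re-running versioned scripts, while the data itself can only be restored from a backup. Here is a minimal sketch of that recovery split; the asset names and fields are hypothetical.

```python
# Sketch of the recovery distinction above: infrastructure is rebuilt from
# scripts in version control; core data can only come back from a backup.
# Asset names and fields are hypothetical.

ASSETS = [
    {"name": "compute-cluster", "rebuildable": True},   # from provisioning scripts
    {"name": "virtual-network", "rebuildable": True},   # from SDN config in git
    {"name": "orders-db",       "rebuildable": False},  # core data: no script recreates it
]

def recovery_plan(assets):
    """Map each asset to how it comes back if the edifice collapses."""
    return {
        asset["name"]: ("re-run scripts from version control"
                        if asset["rebuildable"]
                        else "restore from backup")
        for asset in assets
    }

plan = recovery_plan(ASSETS)
print(plan["orders-db"])  # restore from backup
```

However small the non-rebuildable set is, it is the part the whole business depends on, which is why the backup question above matters.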
Welcome to The Virtualization Practice’s week-long coverage of VMworld US 2015. Tune in all week for our daily recap of the major announcements and highlights from the world’s premier virtualization and cloud conference.
VMworld US 2015 continued in force yesterday, beginning with a long but powerful general session/keynote talk. Carl Eschenbach, VMware's president and COO, set the stage for a slew of announcements around VMware's "One Cloud, Any Application, Any Device" approach to computing and a seamless federation of all types of clouds, supporting both traditional and new cloud-native applications. A variety of VMware leaders joined him on stage to talk about the various aspects of these announcements and how they mesh with the company's overall strategy. While each of these areas could give rise to a whole series of posts by itself, I'll summarize the major points.
As technologists and analysts for the virtualization and cloud spaces, we are always talking about various places within the IT stack. As we discussed in the article Technical Arc of Virtualization, we have noticed that many people are moving up the IT stack, forming new and more interesting substrates of IT. These substrates simplify the actions one takes to deploy new and more interesting applications, while at the same time abstracting away the physical and virtual layers of the stack—in essence, forming new substrates on top of which to build.