Data Center Virtualization

Data Center Virtualization covers virtualizing servers, networks, and storage, delivering server consolidation, CAPEX savings, IT agility, and improved management. Major areas of focus include the tradeoffs between various virtualization platforms (VMware vSphere, Microsoft Hyper-V, and Red Hat KVM), the evolution of hypervisors into data center management platforms, VMware’s Software Defined Data Center strategy, and how the SDDC is spurring innovation in storage, networking, and server hardware. Covered vendors include VMware, Microsoft, Red Hat, CloudPhysics, Hotlink, Tintri, and VMTurbo.

The Software Defined Data Center

There are almost certainly significant strategic reasons why management moves are being made at VMware and EMC. One reason might be that significant public clouds are being built with non-VMware software, an issue explored in this post about Microsoft’s Azure strategy. If it is true that succeeding as a public cloud platform requires meeting different requirements than succeeding with enterprise data center virtualization, then those two things may be diverging, as we explored in this post. VMware and EMC have also both separately given us a significant hint as to another driving force behind these changes: the Software Defined Data Center.

Let’s start with Steve Herrod, the CTO of VMware. In this GigaOm article, he explained that software-defined data centers are “generation-proof”: they collapse disparate systems into a singularity built atop commodity x86 processors and other gear. Software provides everything that is needed to adapt the data center to new situations and new applications, and to manage everything from storage to switches to security. Although VMware will always work with hardware partners, Herrod said, “If you’re a company building very specialized hardware … you’re probably not going to love this message.”

A Singularity – Smarter Than You Are

Let’s move on to a statement that Pat Gelsinger made in the press release from VMware announcing the management changes: “The next generation of software-defined datacenters will be built by combining software with standardized hardware building blocks. VMware is uniquely positioned to be the leader in this endeavor and deliver a whole new level of value to customers and its existing important ecosystem partners. For more than ten years I have interacted with the team at VMware and have developed a deep appreciation for the people and technology of this remarkable company. I am really excited about the mission and becoming part of the team.”

If we carefully parse these words, some things pop out:

  1. Both Herrod and Gelsinger said “software-defined data center.” They did not say “software-defined cloud.” This may seem like splitting hairs, and since all public clouds run in data centers, maybe it is. But the focus upon data centers is notable.
  2. It is clear that both Herrod and Gelsinger are talking about moving the configuration management, policy management and provisioning of hardware out of the hardware itself and into the software – software written by and sold by VMware.
  3. VMware has already done this in vSphere for CPU and memory. It has done so partially for networking with the vSwitch, and Cisco has participated in this trend with the 1000V.
  4. There were tantalizing hints at VMworld last year about VMware doing this with storage as well. Think about all of the features that differentiate enterprise storage from commodity storage, and move the management of all of those features into vSphere so that it can be done consistently across all vendors of storage. A great idea if you are VMware – maybe not so great an idea if you are EMC.
  5. Specialized hardware is out. Commodity hardware configured for the needs of the workloads by vSphere is in. Taken to its extreme, this means that VMware is going to suck all of the value that differentiates enterprise networking (Cisco) and enterprise storage (EMC) out of those hardware platforms and put it in their software.
  6. With the software-defined data center, the economics, flexibility, agility, and scalability of public cloud data centers are brought to enterprise data centers, but in an enterprise-credible and enterprise-viable manner.
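Points 3 and 4 above describe moving feature management out of vendor hardware and into the virtualization layer. A minimal sketch of the idea follows; every name in it (`StoragePolicy`, `provision`, the vendor strings) is hypothetical and illustrative, not any real vSphere API. The policy is declared once in software and applied uniformly across heterogeneous storage backends:

```python
# Hypothetical sketch: policy definitions live in software, not in array firmware.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    replicas: int       # number of copies to keep
    iops_limit: int     # per-workload throttle
    encrypted: bool     # encrypt at rest

def provision(policy: StoragePolicy, backends: list) -> dict:
    """Apply one policy uniformly across heterogeneous backends."""
    return {
        backend: {
            "replicas": policy.replicas,
            "iops_limit": policy.iops_limit,
            "encrypted": policy.encrypted,
        }
        for backend in backends
    }

# The same declaration lands on an enterprise array and on commodity disk alike.
gold = StoragePolicy(replicas=3, iops_limit=5000, encrypted=True)
plan = provision(gold, ["vendor_a_array", "vendor_b_jbod"])
```

The point of the sketch is that the policy, not the hardware, becomes the unit of management: the differentiating features are surfaced and enforced in the virtualization platform, which is exactly why this is a better deal for VMware than for EMC.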

VMware’s new Data Center Virtualization Strategy?

So is the “Software-Defined Data Center” VMware’s new data center virtualization strategy? I guess we will all have to attend VMworld to find out. But this seems like an entirely plausible extension of what VMware has been up to for years anyway. This really just amounts to taking what VMware has already done with CPU and memory, adding “software-defined networking” to it, and then addressing storage (the really hard part).

Is there anything new here? In terms of actual technical emphasis, probably not much. In terms of customer and market focus, probably a great deal. For the last couple of years, when all of the enterprise data center folks came to VMworld and VMware talked about “clouds,” many of them rolled their eyes. They really did not buy vSphere to build a “cloud.” They bought vSphere because it helped them run their data centers more effectively and efficiently, and provide better and more flexible service to their business constituents. Just having VMware talk about “data centers” instead of “clouds” will probably be a welcome relief to VMware’s core customer base.

At the CIO level, this is even more important. Most CIOs do manage IT as a business and are keenly aware of the tradeoffs between the cost of doing something and the value of doing it. Many CIOs have been quoted as having “Google envy,” meaning they would love to have the cost of operations, flexibility, admin-to-server ratios, elasticity, and scalability in their data centers that Google has. But those very same CIOs know that their workloads have unique requirements – so they need those public cloud economics in a way that actually works for their workloads, which means those economics for their data centers, not putting their workloads in a public cloud.

Public Cloud Economics for the Enterprise Data Center

So at the end of the day, what the “software-defined data center” promises is that you, the CIO of an enterprise data center, can ultimately have the same operational flexibility and economics as Google, Amazon, and the Windows Azure public cloud. This is a great promise, but let’s step back and think about how hard this is to do.

As an experiment, imagine that you built your data centers the way that Google and Amazon build theirs (scale-out commodity hardware), and then used (for example) Amazon AWS as the software layer in that data center. Would that work? Probably not, and here is why:

  • You have extensive infrastructure monitoring in place to make sure that your infrastructure is both available and performing well for your business-critical workloads. Existing public cloud computing platforms have extremely lightweight (in some cases non-existent) facilities to ensure infrastructure availability and performance.
  • You bought some very specialized hardware for some very good reasons. Those Fibre Channel SANs attached to those expensive enterprise storage arrays are there for a reason. All of the “cloud software” in the world is not going to turn commodity hardware into something that meets the needs of the applications that rely upon this infrastructure.
  • Things are extensively separated and segregated for a reason. Workload A cannot be allowed to interfere with workload B. The people using the data for workload A cannot be allowed to have any kind of access to the data for workload B. This comes down to multi-tenant security, and it is a problem that public cloud vendors have not solved – so using their software in your data center is not going to automatically solve it for you.

The above experiment illustrates why the people in charge of enterprise IT workloads are not enthusiastic about moving those workloads to public clouds. They know that it will not work no matter how much hype there is behind the concept.

The Software Defined Data Center – An Enterprise IT Problem

What the above example should make clear is that delivering the economics of public cloud computing to enterprises via a software defined data center is a unique problem. It cannot be solved by simply taking what has worked to date in public clouds and applying it to enterprise data centers. It is going to require a tremendous software engineering effort focused upon surfacing the unique features that enterprise workloads need in the virtualization platform so that they can be consistently used across hardware that is more commodity and more homogenized than is the case with enterprise hardware today.

This then goes back to our post about the divergence of Data Center Virtualization from Public Cloud Computing. These things may share some attributes, but they are going to continue to diverge for the following reasons:

  1. Public cloud computing is all about economics, flexibility, scalability, and elasticity. It is not about backwards compatibility with enterprise workloads.
  2. Public cloud computing is all about the “new”. New applications written with new tools, running on new frameworks, using new public cloud computing services.
  3. Public cloud infrastructures are horrible about performance and availability assurance and this is not likely to change soon. It is the job of the application developer to code around the vagaries of the public cloud. Go read the Netflix blog about the Chaos Monkey.
  4. No one in the enterprise is going to rewrite all of their applications to be able to survive the Chaos Monkey, or its real-world equivalents.
  5. The SLAs regarding performance and availability in place in the enterprise are not going to go away. In fact, they will need to be significantly strengthened as workloads move from static to dynamic systems.
  6. The security and compliance requirements for data, personally identifiable information, and who has access to what are significant and are not going to abate. Public clouds are not going to catch up with what is the accepted norm here for quite some time if ever (see all about the new in #2 above).
  7. There is no way of getting away from the “old” in enterprise IT. Those legacy applications are there, and they are going to continue to need to be supported, and no one is going to rewrite them because either the source code has been lost, or the people who wrote the code are no longer with the company.
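Points 3 and 4 can be illustrated with a toy sketch; this is purely hypothetical code, not Netflix’s actual Chaos Monkey tooling. Cloud-native applications treat instance failure as routine and simply fail over to another replica, which is precisely the rewrite that most legacy enterprise applications are never going to receive:

```python
import random

def call_with_failover(replicas, request, chaos_rate=0.5, rng=None):
    """Try each replica in turn; a simulated 'chaos' failure moves on to the next."""
    rng = rng or random.Random(42)  # fixed seed keeps the sketch deterministic
    for replica in replicas:
        if rng.random() < chaos_rate:  # simulated Chaos Monkey killing this replica
            continue                   # failure is treated as normal, not fatal
        return f"{replica} handled {request}"
    raise RuntimeError("all replicas failed")

result = call_with_failover(["node-a", "node-b", "node-c"], "GET /orders")
```

The application, not the infrastructure, absorbs the failure here. That is the public cloud bargain, and it is the opposite of the enterprise model, where the infrastructure (and its SLAs) is expected to keep a static, unmodified workload running.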

The new VMware Vision?

VMware is a great company. Great companies need great missions to embark upon and great problems to solve. The software-defined data center is a great vision. Making it work for enterprise IT is a really hard problem – one worthy of all of the smart engineers and managers at VMware. If this strategy and this new enterprise data center focus are what VMworld is going to be about, and this is at least in part what is behind these management moves, then this is a very good thing for VMware. It is also going to set up a battle royal of conflicting visions, as Microsoft, Google, and Amazon are all starting at the public cloud and moving toward the data center, while VMware appears to be doing the exact opposite.

Update from 7/23/2012 VMware Earnings Call

The big news is the acquisition of Nicira, a network virtualization vendor. Nicira was acquired for $1.05B, VMware’s most expensive acquisition to date, and was positioned as “key to the software defined data center.” Details include the ability to significantly automate the management of networking with a network virtualization layer that spans multiple physical switches and multiple virtualization platforms. Nicira is the second cross-platform solution that VMware has acquired in the last two weeks, with DynamicOps taking VMware into clouds that span both multiple virtualization platforms and virtual/physical infrastructures.

Maritz/Gelsinger – Virtualization and the Cloud Diverging?

So far we know this much for sure: Pat Gelsinger from EMC is going to replace Paul Maritz as CEO of VMware, and Paul Maritz is going to become Chief Strategist at EMC. It is also fairly likely that there is much that we do not know, and in fact it is likely that we cannot even firmly list the things that we do not know (we do not know what we do not know (and you thought only the government had that problem)).

VMware’s Executive Shuffle! Maritz Out and Gelsinger In

VMworld 2012 is right around the corner, and the time leading up to the conference is usually when major announcements are made about new technology and/or new products to strike interest in the technologies that will be presented at the conference. Today a different kind of announcement has been made: Paul Maritz is no longer VMware CEO, and he is being replaced by Pat Gelsinger, President and COO of EMC’s Information Infrastructure Product division. Mr. Maritz will become a Vice Chairman at EMC, although I am still up in the air on whether that means Paul is on his way out of EMC or looking to lead somewhere else. Maybe one could speculate Cloud Foundry?

Storage Hypervisors: Worth the Hype

Just what are storage hypervisors? Several companies claim to have storage hypervisors. Wikipedia states that a hypervisor is “conceptually one level higher than a supervisory program.” We also know from our normal use of hypervisors that they manage the underlying resources that a guest uses. Do these definitions work for a storage hypervisor?

Virtualization Management – VMware and Dell, Big 2 or Big 6?

With Dell buying Quest, and VMware buying DynamicOps, the virtualization management landscape has been forever changed. Dell is now a full-fledged systems management vendor, and VMware has crossed the line into managing both its own and other hypervisors, and into constructing clouds that even include non-virtualized resources. This gives rise to a very interesting question: are Dell and VMware turning into traditional systems management vendors like CA, IBM, HP, and BMC, or are they preparing to disrupt the existing systems management business just as VKernel (part of Quest, and now Dell) and DynamicOps did when they were startups?

VMware Sponsors the Open Networking Research Center

In my humble opinion, 2011 was the year for storage inside the virtualization space, with a lot of new storage-related technologies presented at VMworld 2011. The technologies ranged from Tier 1 SSD storage in a box that can plug right into VMware vSphere as its own datastore, to all the software storage vendors that are now virtualizing their storage processors. Yes, for me, 2011 was the year for storage in virtualization.