For years, VMware has made it clear that it views virtualization not only as a technology that provides significant benefits to data centers, but also as one that disrupts existing virtualization management solutions and opens an opportunity for new management solutions to be offered and adopted by enterprises. VMware has also made it clear that it intends to capitalize on this opportunity by fielding a family of strong products in the virtualization management area.
There are almost certainly significant strategic reasons why these management moves are being made at VMware and EMC. One reason might be that significant public clouds are being built with non-VMware software, an issue explored in this post about Microsoft’s Azure strategy. If succeeding as a public cloud platform requires meeting different requirements than succeeding with enterprise data center virtualization, then those two markets may be diverging, as we explored in this post. VMware and EMC have also both separately given us a significant hint as to another driving force behind these changes: the Software Defined Data Center.
The Software Defined Data Center
Let’s start with Steve Herrod, the CTO of VMware. In this GigaOm article, he explained that software-defined data centers are “generation-proof”: they collapse disparate systems into a singularity built atop commodity x86 processors and other gear. Software provides everything that is needed to adapt the data center to new situations and new applications, and to manage everything from storage to switches to security. Although VMware will always work with hardware partners, Herrod said, “If you’re a company building very specialized hardware … you’re probably not going to love this message.”
A Singularity – Smarter Than You Are
Let’s move on to a statement Pat Gelsinger made in the VMware press release announcing the management changes: “The next generation of software-defined data centers will be built by combining software with standardized hardware building blocks. VMware is uniquely positioned to be the leader in this endeavor and deliver a whole new level of value to customers and its existing important ecosystem partners. For more than ten years I have interacted with the team at VMware and have developed a deep appreciation for the people and technology of this remarkable company. I am really excited about the mission and becoming part of the team.”
If we carefully parse these words, some things pop out:
- Both Herrod and Gelsinger said “software-defined data center”. Neither said “software-defined cloud”. This may seem like splitting hairs, and since all public clouds run in data centers, maybe it is. But the focus on data centers is notable.
- It is clear that both Herrod and Gelsinger are talking about moving the configuration management, policy management and provisioning of hardware out of the hardware itself and into the software – software written by and sold by VMware.
- VMware has already done this in vSphere for CPU and memory. It has done so partially for networking with the vSwitch, and Cisco has participated in this trend with the 1000V.
- There were tantalizing hints at VMworld last year about VMware doing this with storage as well. Think about all of the features that differentiate enterprise storage from commodity storage, and then move the management of all of those features into vSphere so that it can be done consistently across all storage vendors. A great idea if you are VMware – maybe not so great an idea if you are EMC.
- Specialized hardware is out. Commodity hardware configured for the needs of the workloads by vSphere is in. Taken to its extreme, this means that VMware is going to suck all of the value that differentiates enterprise networking (Cisco) and enterprise storage (EMC) out of those hardware platforms and put it in its software.
- With the software-defined data center, the economics, flexibility, agility, and scalability of public cloud data centers are brought to enterprise data centers, but in a manner that is credible and viable for the enterprise.
VMware’s new Data Center Virtualization Strategy?
So is the “Software-Defined Data Center” VMware’s new data center virtualization strategy? I guess we will all have to attend VMworld to find out. But this seems like an entirely plausible extension of what VMware has been up to for years anyway. This really just amounts to taking what VMware has already done with CPU and memory, adding “software-defined networking” to it, and then addressing storage (the really hard part).
Is there anything new here? In terms of actual technical emphasis, probably not much. In terms of customer and market focus, probably a great deal. For the last couple of years, when the enterprise data center folks came to VMworld and VMware talked about “clouds”, many of them rolled their eyes. They did not buy vSphere to build a “cloud”. They bought vSphere because it helped them run their data centers more effectively and efficiently, and provide better and more flexible service to their business constituents. Just having VMware talk about “data centers” instead of “clouds” will probably be a welcome relief to VMware’s core customer base.
At the CIO level, this is even more important. Most CIOs do manage IT as a business and are keenly aware of the tradeoffs between the cost of doing something and the value of doing it. Many CIOs have been quoted as having “Google envy”, meaning that they would love to have the cost of operations, flexibility, admin-to-server ratios, elasticity, and scalability that Google has in its data centers. But those very same CIOs know that their workloads have unique requirements – so they need those public cloud economics in a way that actually works for their workloads, which means those economics in their own data centers, not putting their workloads in a public cloud.
Public Cloud Economics for the Enterprise Data Center
So at the end of the day, what the “software-defined data center” is promising is that you, the CIO of an enterprise data center can ultimately have the same operational flexibility and economics as Google, Amazon, and the Windows Azure public cloud. This is a great promise, but let’s step back and think about how hard this is to do.
As an experiment, imagine that you built your data centers the way Google and Amazon build theirs (scale-out commodity hardware), and then used (for example) Amazon AWS as the software layer in that data center. Would that work? Probably not, and here is why:
- You have extensive infrastructure monitoring in place to make sure that your infrastructure is both available and well performing for your business critical workloads. Existing public cloud computing platforms have extremely lightweight (in some cases non-existent) facilities to ensure infrastructure availability and performance.
- You bought some very specialized hardware for some very good reasons. Those Fibre Channel SANs attached to those expensive enterprise storage arrays are there for a reason. All of the “cloud software” in the world is not going to turn commodity hardware into something that meets the needs of the applications that rely upon this infrastructure.
- Things are extensively separated and segregated for a reason. Workload A cannot be allowed to interfere with workload B. The people using the data for workload A cannot be allowed any kind of access to the data for workload B. This comes down to multi-tenant security, and it is a problem that public cloud vendors have not solved – so using their software in your data center is not going to automatically solve it for you.
The above experiment illustrates why the people in charge of enterprise IT workloads are not enthusiastic about moving those workloads to public clouds. They know that it will not work no matter how much hype there is behind the concept.
The Software Defined Data Center – An Enterprise IT Problem
What the above example should make clear is that delivering the economics of public cloud computing to enterprises via a software-defined data center is a unique problem. It cannot be solved by simply taking what has worked in public clouds to date and applying it to enterprise data centers. It is going to require a tremendous software engineering effort focused upon surfacing, in the virtualization platform, the unique features that enterprise workloads need, so that those features can be used consistently across hardware that is more commoditized and homogenized than enterprise hardware is today.
This then goes back to our post about the divergence of Data Center Virtualization from Public Cloud Computing. These things may share some attributes, but they are going to continue to diverge for the following reasons:
- Public cloud computing is all about economics, flexibility, scalability, and elasticity. It is not about backwards compatibility with enterprise workloads.
- Public cloud computing is all about the “new”. New applications written with new tools, running on new frameworks, using new public cloud computing services.
- Public cloud infrastructures are horrible at performance and availability assurance, and this is not likely to change soon. It is the job of the application developer to code around the vagaries of the public cloud. Go read the Netflix blog about the Chaos Monkey.
- No one in the enterprise is going to rewrite all of their applications to be able to survive the Chaos Monkey, or its real-world equivalents.
- The SLAs regarding performance and availability in place in the enterprise are not going to go away. In fact, they will need to be significantly strengthened as workloads move from static to dynamic systems.
- The security and compliance requirements for data, personally identifiable information, and who has access to what are significant and are not going to abate. Public clouds are not going to catch up with the accepted norms here for quite some time, if ever (see “all about the new” in the second point above).
- There is no way of getting away from the “old” in enterprise IT. Those legacy applications are there, and they are going to continue to need to be supported, and no one is going to rewrite them because either the source code has been lost, or the people who wrote the code are no longer with the company.
The new VMware Vision?
VMware is a great company. Great companies need great missions to embark upon and great problems to solve. The software-defined data center is a great vision. Making it work for enterprise IT is a really hard problem – one worthy of all of the smart engineers and managers at VMware. If this strategy and this new enterprise data center focus are what VMworld is going to be about, and are at least in part behind these management moves, then this is a very good thing for VMware. It also sets up a battle royal of conflicting visions: Microsoft, Google, and Amazon are all starting at the public cloud and moving toward the data center, while VMware appears to be doing the exact opposite.
Update from 7/23/2012 VMware Earnings Call
The big news is the acquisition of Nicira, a network virtualization vendor. Nicira was acquired for $1.05B, VMware’s most expensive acquisition to date, and was positioned as “key to the software defined data center”. Nicira’s technology significantly automates the management of networking with a network virtualization layer that spans multiple physical switches and multiple virtualization platforms. Nicira is the second cross-platform solution that VMware has acquired in the last two weeks, with DynamicOps taking VMware into clouds that span both multiple virtualization platforms and virtual/physical infrastructures.
With Dell buying Quest, and VMware buying DynamicOps, the virtualization management landscape has been forever changed. Dell is now a full-fledged systems management vendor, and VMware has crossed the line into managing both its own and other hypervisors, and into being able to construct clouds that even include non-virtualized resources. This gives rise to a very interesting question: are Dell and VMware turning into traditional systems management vendors like CA, IBM, HP, and BMC, or are they preparing to disrupt the existing systems management business just as VKernel (part of Quest, and now Dell) and DynamicOps did when they were startups?
VMware, the global leader in virtualization and cloud infrastructure, today announced that it has signed a definitive agreement to acquire DynamicOps, Inc., a provider of cloud automation solutions that enable provisioning and management of IT services across heterogeneous environments – VMware-based private and public clouds, physical infrastructures, multiple hypervisors, and Amazon Web Services. Terms of the acquisition were not announced. The acquisition is scheduled to close in Q3 2012, subject to customary closing conditions.
On July 2nd, 2012, Dell announced that it has entered into a definitive agreement to buy Quest Software. Quest will become part of the Dell Software Group, which is being run by John Swainson, formerly the CEO of CA. In “Dell a Virtualization Management Leader?”, posted almost a year ago, we explored how Dell might combine the product assets that it has licensed from DynamicOps (sold by Dell as VIS Creator – see the product review here). The basic idea was that monitoring of the virtualized environment would be combined with the ability of VIS Creator to dynamically provision services, so that dynamically provisioned services could be offered with performance and availability assurances. The idea that Dell could bring the entire portfolio of Quest assets to bear fundamentally transforms both the notion of automated service assurance for dynamically provisioned services and the entire systems management business.
Microsoft threw down the gauntlet today, right at the feet of Amazon’s AWS – launching a revamped PaaS offering, a brand new IaaS offering (run whatever you want in an Azure-hosted image), and significant partnerships with ecosystem vendors that will add value to Azure and round out its value for Microsoft Azure customers.