All posts by Bernd Harzog

Bernd Harzog is the Analyst at The Virtualization Practice for Performance and Capacity Management and IT as a Service (Private Cloud). Bernd is also the CEO and founder of APM Experts, a company that provides strategic marketing services to vendors in the virtualization performance management and application performance management markets. Prior to these two companies, Bernd was the CEO of RTO Software, the VP of Products at Netuitive, a General Manager at Xcellenet, and Research Director for Systems Software at Gartner Group. Bernd has an MBA in Marketing from the University of Chicago.

VMware’s Heterogeneous Virtualization Management Strategy

For quite a number of years, VMware has made it very clear that it views virtualization not only as a technology that provides significant benefits to data centers, but also as a technology that disrupts the existing virtualization management solutions and opens an opportunity for new management solutions to be offered and adopted by enterprises. VMware has also made it clear that it intends to capitalize upon this opportunity by fielding a family of strong products in the virtualization management area.

Here Comes the Heterogeneous Distributed Enterprise Cloud

VMware’s purchase of DynamicOps signaled a major shift both in VMware’s strategy and in the market for cloud management solutions. Previously, VMware’s strategy (with vCloud Director) was focused mainly upon addressing development, test, pilot, and training use cases on its own vSphere platform. This relegated clouds to tactical and transient use cases which, while important for many enterprise organizations, are not the bread-and-butter use cases that drive IT operations day in and day out. Now here comes the enterprise cloud.

News: Oracle Buys Xsigo Systems

Coming on the heels of VMware’s acquisition of Nicira, Oracle announced today that it is acquiring network virtualization vendor Xsigo Systems for an undisclosed amount. So now two shoes have dropped in the question of how networks will be designed and operated in the future (perhaps the entity in question is an octopus, and we have six shoes to go). Clearly the notion of software-defined networks has legs, and clearly VMware is not the only company that sees this.

The Oracle Announcement

Oracle Buys Xsigo

Extends Oracle’s Virtualization Capabilities with Leading Software-Defined Networking Technology for Cloud Environments

  • Oracle today announced that it has entered into an agreement to acquire Xsigo Systems, a leading provider of network virtualization technology.
  • Xsigo’s software-defined networking technology simplifies cloud infrastructure and operations by allowing customers to dynamically and flexibly connect any server to any network and storage, resulting in increased asset utilization and application performance while reducing cost.
  • The company’s products have been deployed at hundreds of enterprise customers including British Telecom, eBay, Softbank and Verizon.
  • The combination of Xsigo for network virtualization and Oracle VM for server virtualization is expected to deliver a complete set of virtualization capabilities for cloud environments.

Terms of the agreement were not disclosed. More information on this announcement can be found at oracle.com/xsigo.

Supporting Quotes

  • “The proliferation of virtualized servers in the last few years has made the virtualization of the supporting network connections essential,” said John Fowler, Oracle Executive Vice President of Systems. “With Xsigo, customers can reduce the complexity and simplify management of their clouds by delivering compute, storage and network resources that can be dynamically reallocated on-demand.”
  • “Customers are focused on reducing costs and improving utilization of their network,” said Lloyd Carney, Xsigo CEO. “Virtualization of these resources allows customers to scale compute and storage for their public and private clouds while matching network capacity as demand dictates.”

What Does This Mean?

The most disconcerting statement in the release is the part about the “combination of Xsigo and Oracle VM.” This means that Oracle is continuing to play its “vertically integrated solution stack” game, which is in direct contrast to the horizontally layered strategies that VMware, Microsoft, Red Hat, Citrix, the CloudStack community, and the OpenStack community are all pursuing. While this might be very appealing to a customer that is 100% (or nearly 100%) Oracle, the notion of jamming Oracle VM down a customer’s throat in order for them to get Xsigo is just another example of the foolishness of Oracle’s closed, proprietary, and arrogant approach. This could not be more at odds with VMware’s notion of the Software Defined Data Center, which is completely open with respect to the hardware layers underneath it and the workloads that run on it.

The Software Defined Data Center

There are almost certainly significant strategic reasons why management moves are being made at VMware and EMC. One reason might be that significant public clouds are being built with non-VMware software, an issue explored in this post about Microsoft’s Azure strategy. If it is true that succeeding as a public cloud platform requires meeting different requirements than succeeding at enterprise data center virtualization, then those two things may be diverging, as we explored in this post. VMware and EMC have also both separately given us a significant hint as to another driving force behind these changes: the Software Defined Data Center.


Let’s first start with Steve Herrod, the CTO of VMware. In this GigaOm article, he explained that software-defined data centers are “generation-proof”: they collapse disparate systems into a singularity built atop commodity x86 processors and other gear, with software providing everything that is needed to adapt the data center to new situations and new applications, and to manage everything from storage to switches to security. Although VMware will always work with hardware partners, Herrod said, “If you’re a company building very specialized hardware … you’re probably not going to love this message.”

A Singularity – Smarter Than You Are

Let’s move on to a statement that Pat Gelsinger made in the press release from VMware announcing the management changes: “The next generation of software-defined datacenters will be built by combining software with standardized hardware building blocks. VMware is uniquely positioned to be the leader in this endeavor and deliver a whole new level of value to customers and its existing important ecosystem partners. For more than ten years I have interacted with the team at VMware and have developed a deep appreciation for the people and technology of this remarkable company. I am really excited about the mission and becoming part of the team.”

If we carefully parse these words, some things pop out:

  1. Both Herrod and Gelsinger said “software-defined data center.” Neither said “software-defined cloud.” This may seem like splitting hairs, and since all public clouds run in data centers, maybe it is. But the focus upon data centers is notable.
  2. It is clear that both Herrod and Gelsinger are talking about moving the configuration management, policy management, and provisioning of hardware out of the hardware itself and into software – software written by and sold by VMware (a toy sketch of this idea follows the list).
  3. VMware has already done this in vSphere for CPU and memory. It has done so partially for networking with the vSwitch, and Cisco has participated in this trend with the Nexus 1000V.
  4. There were tantalizing hints at VMworld last year about VMware doing this with storage as well. Think about taking all of the features that differentiate enterprise storage from commodity storage and moving their management into vSphere, so that they can be applied consistently across all storage vendors. Great idea if you are VMware – maybe not so great an idea if you are EMC.
  5. Specialized hardware is out. Commodity hardware, configured for the needs of the workloads by vSphere, is in. Taken to its extreme, this means that VMware is going to suck all of the value that differentiates enterprise networking (Cisco) and enterprise storage (EMC) out of those hardware platforms and put it in its software.
  6. With the software-defined data center, the economics, flexibility, agility, and scalability of public cloud data centers get brought to enterprise data centers, but in an enterprise-credible and enterprise-viable manner.
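
To make item 2 concrete, here is a minimal Python sketch of the “configuration moves out of the hardware and into software” idea. Everything in it is invented for illustration (it models no VMware API): workload requirements are declared once as data, and placement policy lives in code rather than in per-device configuration.

```python
from dataclasses import dataclass

@dataclass
class WorkloadSpec:
    """Declarative requirements; nothing here names a physical device."""
    name: str
    vcpus: int
    memory_gb: int
    network_tier: str   # e.g. "dmz" vs. "internal"
    storage_tier: str   # e.g. "gold" (replicated) vs. "bronze" (bulk)

@dataclass
class Host:
    """A commodity host with remaining capacity."""
    name: str
    free_vcpus: int
    free_memory_gb: int

def place(spec: WorkloadSpec, hosts: list[Host]) -> Host:
    """The 'software-defined' step: policy in code, not in device configs.
    The policy here is trivially first-fit; a real layer would also map
    the network and storage tiers onto commodity gear."""
    for host in hosts:
        if host.free_vcpus >= spec.vcpus and host.free_memory_gb >= spec.memory_gb:
            host.free_vcpus -= spec.vcpus
            host.free_memory_gb -= spec.memory_gb
            return host
    raise RuntimeError(f"no capacity for {spec.name}")

if __name__ == "__main__":
    hosts = [Host("host-01", 8, 64), Host("host-02", 16, 128)]
    web = WorkloadSpec("web-frontend", vcpus=12, memory_gb=96,
                       network_tier="dmz", storage_tier="bronze")
    print(place(web, hosts).name)   # -> host-02
```

The point of the sketch is the shape of the approach: changing where a workload runs, or what its tiers mean, is an edit to data and policy code, not a change to any individual device.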

VMware’s new Data Center Virtualization Strategy?

So is the “Software-Defined Data Center” VMware’s new data center virtualization strategy? I guess we will all have to attend VMworld to find out. But it seems like an entirely plausible extension of what VMware has been doing for years anyway. It really just amounts to taking what VMware has already done with CPU and memory, adding software-defined networking, and then addressing storage (the really hard part).

Is there anything new here? In terms of actual technical emphasis, probably not much. In terms of customer and market focus, probably a great deal. For the last couple of years, when all of the enterprise data center folks came to VMworld and VMware talked about “clouds,” many of them rolled their eyes. They did not buy vSphere to build a “cloud.” They bought vSphere because it helped them run their data centers more effectively and efficiently, and provide better and more flexible service to their business constituents. Just having VMware talk about “data centers” instead of “clouds” will probably be a welcome relief to VMware’s core customer base.

At the CIO level, this is even more important. Most CIOs do manage IT as a business and are keenly aware of the tradeoffs between the cost of doing something and the value of doing it. Many CIOs have been quoted as having “Google envy,” meaning that they would love to have the cost of operations, flexibility, admin-to-server ratios, elasticity, and scalability in their data centers that Google has. But those very same CIOs know that their workloads have unique requirements – so they need those public cloud economics in a way that actually works for their workloads, which means those economics in their own data centers, not putting their workloads in a public cloud.

Public Cloud Economics for the Enterprise Data Center

So at the end of the day, what the “software-defined data center” promises is that you, the CIO of an enterprise data center, can ultimately have the same operational flexibility and economics as Google, Amazon, and the Windows Azure public cloud. This is a great promise, but let’s step back and think about how hard this is to do.

As an experiment, imagine that you built your data centers the way that Google and Amazon build theirs (scale-out commodity hardware), and then used (for example) Amazon AWS as the software layer in that data center. Would that work? Probably not, and here is why:

  • You have extensive infrastructure monitoring in place to make sure that your infrastructure is both available and performing well for your business-critical workloads. Existing public cloud computing platforms have extremely lightweight (in some cases nonexistent) facilities for ensuring infrastructure availability and performance.
  • You bought some very specialized hardware for some very good reasons. Those Fibre Channel SANs attached to those expensive enterprise storage arrays are there for a reason. All of the “cloud software” in the world is not going to turn commodity hardware into something that meets the needs of the applications that rely upon this infrastructure.
  • Things are extensively separated and segregated for a reason. Workload A cannot be allowed to interfere with workload B. The people using the data for workload A cannot be allowed to have any kind of access to the data for workload B. This reduces to multi-tenant security, and it is a problem that public cloud vendors have not solved – so using their software in your data center is not going to automatically solve it for you.

The above experiment illustrates why the people in charge of enterprise IT workloads are not enthusiastic about moving those workloads to public clouds. They know that it will not work no matter how much hype there is behind the concept.

The Software Defined Data Center – An Enterprise IT Problem

What the above example should make clear is that delivering the economics of public cloud computing to enterprises via a software-defined data center is a unique problem. It cannot be solved by simply taking what has worked to date in public clouds and applying it to enterprise data centers. It is going to require a tremendous software engineering effort focused upon surfacing, in the virtualization platform, the unique features that enterprise workloads need, so that they can be used consistently across hardware that is more commoditized and more homogeneous than enterprise hardware is today.

This then goes back to our post about the divergence of Data Center Virtualization from Public Cloud Computing. These things may share some attributes, but they are going to continue to diverge for the following reasons:

  1. Public cloud computing is all about economics, flexibility, scalability, and elasticity. It is not about backwards compatibility with enterprise workloads.
  2. Public cloud computing is all about the “new”. New applications written with new tools, running on new frameworks, using new public cloud computing services.
  3. Public cloud infrastructures are terrible at performance and availability assurance, and this is not likely to change soon. It is the job of the application developer to code around the vagaries of the public cloud. Go read the Netflix blog about the Chaos Monkey (a toy sketch of the idea follows this list).
  4. No one in the enterprise is going to rewrite all of their applications to be able to survive the Chaos Monkey or its real-world equivalents.
  5. The SLAs regarding performance and availability that are in place in the enterprise are not going to go away. In fact, they will need to be significantly strengthened as workloads move from static to dynamic systems.
  6. The security and compliance requirements for data, personally identifiable information, and who has access to what are significant and are not going to abate. Public clouds are not going to catch up with the accepted enterprise norm here for quite some time, if ever (see “all about the new” in #2 above).
  7. There is no way of getting away from the “old” in enterprise IT. Those legacy applications are there, they are going to continue to need to be supported, and no one is going to rewrite them, because either the source code has been lost or the people who wrote it are no longer with the company.
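
For readers who have not encountered it: Chaos Monkey randomly terminates instances in production, forcing applications to be written to tolerate failure. Here is a toy Python sketch of the idea (the instance names and the terminate() stub are invented for illustration; this is not Netflix’s code, nor any real cloud API):

```python
import random

# Invented inventory; in a real cloud this would come from the provider's API.
instances = ["web-001", "web-002", "api-001", "api-002", "db-001"]

def terminate(instance_id: str) -> None:
    """Stub standing in for a real termination call."""
    print(f"terminated {instance_id}")

def chaos_monkey(fleet: list[str], kill_probability: float = 0.2) -> None:
    """On each run, maybe kill one randomly chosen instance. Surviving
    this, continuously, is what 'coding around the vagaries of the
    public cloud' means in practice."""
    if fleet and random.random() < kill_probability:
        victim = random.choice(fleet)
        fleet.remove(victim)
        terminate(victim)

if __name__ == "__main__":
    chaos_monkey(instances)
```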

The new VMware Vision?

VMware is a great company. Great companies need great missions to embark upon and great problems to solve. The software-defined data center is a great vision. Making it work for enterprise IT is a really hard problem – one worthy of all of the smart engineers and managers at VMware. If this strategy and this new enterprise data center focus are what VMworld is going to be about, and if this is what is at least in part behind these management moves, then this is a very good thing for VMware. It is also going to set up a battle royal of conflicting visions, as Microsoft, Google, and Amazon are all starting at the public cloud and moving towards the data center, while VMware appears to be doing the exact opposite.

Update from 7/23/2012 VMware Earnings Call

The big news is the acquisition of Nicira, a network virtualization vendor. Nicira was acquired for $1.05B, VMware’s most expensive acquisition to date, and was positioned as “key to the software defined data center.” Details include the ability to significantly automate the management of networking with a network virtualization layer that spans multiple physical switches and multiple virtualization platforms. Nicira is now the second cross-platform solution that VMware has acquired in the last two weeks, with DynamicOps taking VMware into clouds that span both multiple virtualization platforms and virtual/physical infrastructures.
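
As a rough illustration of what a network virtualization layer that “spans multiple physical switches and multiple virtualization platforms” means (a conceptual Python sketch under invented names, not Nicira’s design): connectivity is decided by membership in a logical network held in software, while the mapping of VMs to hypervisors and physical switch ports can change underneath without touching any switch configuration.

```python
# Conceptual sketch only: logical networks are software state, decoupled
# from the physical ports that happen to carry them.
logical_networks = {
    "tenant-a-web": {"vm-101", "vm-102"},
    "tenant-b-db":  {"vm-201"},
}

# Where each VM currently sits: (hypervisor, physical switch port).
# Entries may point at different switches and different platforms.
placement = {
    "vm-101": ("esx-host-1", "switch-1/port-7"),
    "vm-102": ("kvm-host-9", "switch-3/port-2"),
    "vm-201": ("esx-host-4", "switch-2/port-5"),
}

def same_logical_network(vm_a: str, vm_b: str) -> bool:
    """Connectivity is a property of logical membership, not location."""
    return any(vm_a in members and vm_b in members
               for members in logical_networks.values())

# True even though the two VMs sit on different hypervisor platforms
# and different physical switches.
print(same_logical_network("vm-101", "vm-102"))
```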

Maritz/Gelsinger – Virtualization and the Cloud Diverging?

So far we know this much for sure: Pat Gelsinger from EMC is going to replace Paul Maritz as CEO of VMware, and Paul Maritz is going to become Chief Strategist at EMC. It is also fairly likely that there is much that we do not know; in fact, we likely cannot even firmly list the things that we do not know (we do not know what we do not know – and you thought only the government had that problem).

Microsoft’s Three Pronged Windows Azure Strategy

Back in June, Microsoft announced major enhancements to its Azure public cloud – adding a robust set of Infrastructure as a Service (IaaS) capabilities to the existing .NET-based Platform as a Service (PaaS) capabilities. Now comes word that Microsoft will put the Azure Service Management Portal into the Service Provider Edition of Windows Server 2012, allowing service providers to offer their own branded Azure-based clouds. This leads to one question about Microsoft’s Windows Azure strategy: how much of Azure will be in the version of Windows Server 2012 destined for on-premise enterprise use?

Microsoft’s Biggest Asset

Microsoft’s biggest asset is the installed base of its Windows Server operating systems, the associated Windows applications that run on those servers, and the corresponding Windows desktop operating systems and applications. If you look at Microsoft as a cloud vendor with this asset in its pocket and compare Microsoft to Amazon and VMware, the differences are stark. Amazon has effectively zero on-premise footprint, unless you include the on-premise installations of AWS-compatible clouds from vendors like Eucalyptus. VMware claims over 250,000 customers, but the customer base for Windows Server is at least one order of magnitude (10x) larger.

It is also the case that Microsoft has a massive presence at the application layer that neither VMware nor Amazon has. If you look at the installed bases of SQL Server, Exchange Server, IIS, and the multitude of other server-based applications, Microsoft is far ahead of any of its rivals on this front.

The New Windows Azure Strategy

Let’s combine the two things that we already know with some informed speculation on the third:

  1. We already know that Microsoft is positioning Azure as a robust, flexible, and open IaaS public cloud offering, fully capable of competing head-to-head with Amazon AWS and anyone else (Google?) who may enter the public IaaS market in the future.
  2. We have just seen the announcement that hosting companies will be able to private-label Azure and offer Azure IaaS and PaaS services to their customers through the integration of the Azure Service Management Portal and API into Windows Server 2012.
  3. There is only one missing piece – one that Microsoft has hinted at on many previous occasions: how much of Azure is going to be built into the version of Windows Server 2012 that will be sold for on-premise use by enterprise customers worldwide?

The Seamless On-Premise, Service Provider, Azure Public Cloud Strategy

Microsoft has already stated on numerous occasions that it is going to let customers configure, in their on-premise installations of Active Directory, where applications run, and at the same time point each set of users to the appropriate instance (on-premise, service provider, or Azure cloud) of those applications. Microsoft has taken steps to make migrating SQL Server databases between instances easier. If Microsoft follows through on all of this with something that makes delivering applications on Windows Server 2012 identical to ordering up applications through a service provider or the Azure cloud, then Microsoft will be effectively leveraging its biggest asset (the installed base of Windows and Windows applications) to feed its Azure cloud strategy.
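
A toy Python sketch of that routing idea (all names are invented; in Microsoft’s model the placement record would live in Active Directory, not in code like this): the directory holds one record per user group saying which instance of an application that group should use, and a single lookup resolves the endpoint.

```python
# Invented directory data: which deployment each (group, application)
# pair is pointed at. In Microsoft's model this lives in Active Directory.
APP_PLACEMENT = {
    ("finance", "erp"): "on-premise",        # stays in the data center
    ("sales",   "crm"): "service-provider",  # hoster-run Azure cloud
    ("devtest", "crm"): "azure-public",      # Microsoft's Azure cloud
}

ENDPOINTS = {
    "on-premise":       "https://erp.corp.example.internal",
    "service-provider": "https://crm.hoster.example.com",
    "azure-public":     "https://crm.example.cloudapp.net",
}

def resolve(group: str, app: str) -> str:
    """One lookup decides which of the three 'prongs' serves this user."""
    return ENDPOINTS[APP_PLACEMENT[(group, app)]]

print(resolve("sales", "crm"))  # -> https://crm.hoster.example.com
```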

Impact on VMware vCloud

Many service providers have already stood up vCloud-based clouds in an attempt to steal workloads running on on-premise instances of VMware vSphere. So right now VMware is ahead of Microsoft in terms of the size of the third-party cloud ecosystem that has been built. But Microsoft has a substantial existing ecosystem of hosting providers that it fully intends to convert into Windows Azure cloud partners. So this is an area where Microsoft, if it executes well, can catch up with VMware pretty quickly.

The card that Microsoft has to play here, and that VMware completely lacks, is the installed base of operating systems and applications. The three key pieces are Active Directory (whose stuff runs where), .NET (the application-level APIs), and SQL Server (the database). In contrast, VMware has no directory service, only the start of an application platform in the form of Cloud Foundry (with a long way to go), and no in-house database server. If Microsoft can prove that dealing with Windows applications and services at the Windows level (instead of just encapsulating them the way VMware does) has advantages for the customer, then this war will tip towards Microsoft.

Microsoft also has the obvious advantage of having its own cloud, something VMware does not have or offer.

Impact upon Amazon AWS

While Azure now supports Linux workloads, and AWS certainly supports Windows workloads, the trend here is clear. If Microsoft succeeds at nothing else, it will make Azure the preferred Windows cloud. Microsoft will succeed at this at the on-premise level for truly private clouds, at the service provider level, and at the public cloud level. The most likely scenario is that if you are building or buying an application that is going to run on Windows, you are going to (with Microsoft’s help) gravitate towards Azure-based clouds.

This puts Amazon in a difficult spot. Amazon has no on-premise installations of software to leverage. Amazon could obviously focus upon being the Linux cloud company, but here it will have to contend with Red Hat, which is certainly going to be looking to steal a page or two from the Microsoft playbook.

Comparison of Vendor Cloud Offerings

Capability | Amazon AWS | Microsoft Azure | VMware vSphere & vCloud | Google Compute Cloud
On-premise private cloud | No | Yes | Yes | No
Cloud Partner (Service Provider) Program | No | Yes | Yes | No
Vendor-Offered Public Cloud | Yes | Yes | No | Yes

Late Breaking Updates

  1. According to this Wired article, Google is readying its own IaaS cloud offering – allowing users to run “anything,” not just applications restricted to the currently offered Google App Engine service. A column has been added for Google in the table above.
  2. According to GigaOm in this post, VMware and EMC are considering spinning out Cloud Foundry, some big data assets, and an IaaS offering into a completely independent company. If this is true, then the “No” in VMware’s Vendor-Offered Public Cloud cell above becomes a “Yes.”

Conclusion

The combination of Microsoft’s own Azure cloud, service-provider-offered Azure clouds, and Azure services in Windows Server 2012 will prove to be a formidable offering in the next generation of the systems software wars. Someone once said that it takes Microsoft three tries to get something right. Well, here come Hyper-V 3, Windows Server 2012, and a whole bunch of new Azure services.
