VMware will be the first vendor to deliver a Software Defined Data Center and a management stack for that data center. But you can also choose to build your own management stack for your SDDC out of products from vendors that each specialize in one of the areas in our reference architecture. The result would be a Best of Breed SDDC Management Stack.

The Best of Breed SDDC Management Stack

VMware is likely to announce its SDDC at VMworld. VMware has already announced NSX, its network virtualization product, but has not yet announced how that product will be priced and packaged. VMware has been hinting at a storage virtualization offering for quite some time, but again, technical, packaging, and pricing details are not available.

What we do know is that we should expect VMware to announce a full Software Defined Data Center offering at VMworld this year and deliver it by the start of 2014. VMware, being a company run by very smart and very experienced people, would not announce a platform without a strategy to manage that platform. We reviewed VMware’s strategy to manage its own SDDC platform in “VMware’s SDDC Management Stack”.

However, you are free to construct whatever management stack you want for your SDDC. The key issues when you make this decision are:

  • Should you trust the platform vendor (VMware or Microsoft) to do the best job of managing their own platform?
  • Can you trust any platform vendor to manage another vendor’s platform effectively? In other words, can you trust VMware to manage Hyper-V, or can you trust Microsoft to manage vSphere?
  • Can you trust the fox to watch the henhouse? In other words, if you are trying to make sure that the platform does not affect the applications, can you trust the vendor of the platform to tell you the truth about how their platform is affecting the applications?
  • What is the tradeoff between having “one throat to choke” across the management stack and the platform vs. having multiple different points of view that you can reconcile at your discretion?

Figure: The Best of Breed SDDC Management Stack

*Best of Breed Solutions for SDDC Security and Data Protection will be covered in separate posts.

First, let’s reiterate why a new management stack is necessary and desirable for the Software Defined Data Center:

  • In the SDDC, the configuration for compute, networking, and storage will be done in the SDDC software (vSphere, NSX, and however VMware decides to package storage virtualization).
  • Since all configuration will be centralized in one place, it will be easy for configurations to follow workloads around as they migrate between hosts, clusters, and clouds.
  • But as these migrations occur, configuration changes will occur along with them – creating a constant stream of configuration events that must be processed by the management stack for the SDDC.
  • As workloads move around, resources will get reallocated, creating the need to manage performance and capacity on a continuous, real-time, and deterministic basis.
  • Automation through cloud management and orchestration will create further streams of information as new workloads get provisioned and old ones get retired.
  • The blizzard of configuration events and automation events plus the stream of performance information will create the need for a real-time back end data store for all of this data.
  • Any product that manages any layer or function in the SDDC will need access to the data created by other products. This again creates the need for a big data back end for the SDDC management stack (a minimal sketch of this shared event pipeline follows this list).
  • The management stacks in use in enterprises today do not use common big data back ends across products. Each product tends to have its own database and searching across them is nearly impossible.
  • Therefore the need to have a common back end that can cope with the high arrival rate of large quantities of data creates the need for an entirely new management stack.
  • The need for this new stack is further bolstered by the fact that the SDDC will be more automated, more dynamic, more distributed, and more shared than even existing virtualized data centers. Existing management solutions were not built for highly automated, dynamic, distributed, and shared environments; therefore, the existing management stack will need to be removed and replaced by a new one that is built for the SDDC.
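
To make the shared back end idea concrete, here is a minimal sketch in Python of the pattern described above: every configuration, automation, and performance event is normalized, timestamped, and written to one common store instead of a per-product database. The names (publish, ingest, shared_store) are hypothetical and illustrative only, not any vendor’s API.

```python
import json
import time
from queue import Queue

event_bus = Queue()   # stands in for the SDDC's stream of configuration/automation events
shared_store = []     # stands in for a common big data back end shared by all tools

def publish(event_type, payload):
    """A vMotion, NSX rule change, or provisioning action lands here as an event."""
    event_bus.put({
        "ts": time.time(),      # every record is timestamped on arrival
        "type": event_type,     # e.g. "vm.migrated", "network.rule.updated"
        "payload": payload,
    })

def ingest():
    """Drain the event stream and write normalized records to the shared store."""
    while not event_bus.empty():
        shared_store.append(json.dumps(event_bus.get()))

publish("vm.migrated", {"vm": "app01", "from": "host-a", "to": "host-b"})
publish("network.rule.updated", {"rule": "web-tier-allow-443"})
ingest()
print(len(shared_store), "events indexed in the common back end")
```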

The Components of a Best of Breed SDDC Management Stack

Assuming that VMware will announce its SDDC at VMworld this year and deliver it by the start of 2014, it is time to start planning how to manage the SDDC. If you are of the best of breed point of view, here is the approach that we would recommend (we will cover Security and Data Protection in separate posts):

  • Operations Management – Operations management covers a broad waterfront. At a minimum, you need resource-based management of performance and capacity, along with cross-correlation of how changes in configuration are affecting that performance and capacity. Cirba, CloudPhysics, Dell (vFoglight), HP (VPV), VMTurbo, and Zenoss can all do a fine job of this for you. If you combine Splunk and the Splunk App for VMware, you cover both the big data repository and a good chunk of your operations management needs. Hotlink uniquely allows you to manage Hyper-V, KVM, XenServer, and Amazon from within vCenter and is therefore an absolutely indispensable tool for managing the complexity of multi-hypervisor and multi-cloud environments. Intigua virtualizes all of the management agents in your environment, allowing you to automate and control that layer of your software infrastructure.
  • Infrastructure Performance Management – In our post on SDDC Infrastructure Performance Management, we quoted Bruce Davie, one of the architects of VMware NSX, on the need for real-time instrumentation. Bruce’s original blog post is here – “Open Source, Open Interfaces, and Open Networking”. The point is that the rate of change in the SDDC will require fine-grained instrumentation of the performance and capacity of the SDDC. This will require collecting new metrics and collecting them much more frequently than every 20 seconds (a sketch of high-frequency sampling follows this list). ExtraHop Networks decodes the popular NAS protocols so that you get granular, real-time visibility into how network latency for NAS storage is affecting your applications. Virtual Instruments is the only vendor that can provide you with a real-time, deterministic, and comprehensive view of the latency of your SAN-attached storage. Gigamon and Xangati both give you detailed views into how the network is affecting the performance of your environment.
  • Application Performance Management – If you are going to run applications you care about in an SDDC or a cloud, then you had better instrument every one of those applications for response time and throughput. You will not be able to infer the performance of your applications from resource utilization statistics – you will need to measure end-to-end and hop-by-hop response time directly (a sketch of direct response-time instrumentation follows this list). There are two broad classes of APM tools. DevOps-focused tools like AppDynamics, AppNeta (TraceView), Compuware, New Relic, and Riverbed focus on helping developers who support custom-developed applications quickly find code issues in production. AppOps tools like AppEnsure, AppFirst, BlueStripe, Boundary, Correlsense, ExtraHop, and INETCO help operations teams support every application (purchased and custom-developed) in production by finding what in the infrastructure is impacting the application.
  • Cloud Management – Cloud management is the layer of management software that allows you to put services in a service catalog and then automate the provisioning of those services (a sketch of the catalog pattern follows this list). CloudBolt Software, Embotics, FluidOps, ServiceMesh, and Virtustream all allow you to provision infrastructure as a service. If you want to stand up your own PaaS cloud or place actual applications in the service catalog, you need to carefully evaluate your cloud management candidates for this capability.
  • Big Data Repository – This is where Splunk is making its mark as a leader in the new management software industry. By solving the log management problem, addressing many of the needs in the Operations Management space with the Splunk App for VMware, and partnering with many other vendors who can put their data into the Splunk back end, Splunk is currently the first and only credible vendor of an open, multi-vendor big data back end data store for the entire management stack. If you are going to make a decision in this area, the most important criterion should be the depth and breadth of the third-party partner products that use that back end data store. Splunk has a huge lead over everyone else on this front. The wild card here is CloudPhysics, which has a big data back end in the cloud.
  • Self-Learning Analytics – Netuitive has been the functionality and market leader in the business of self-learning performance analytics for quite some time. Netuitive has an impressive list of very large customers, but has not been able to translate that success with a relatively small number of very large customers into a broader-scale business. Prelert is trying to accomplish precisely what Netuitive has not by partnering with Splunk and working aggressively with the Splunk VAR channel. CloudPhysics is also investing quite heavily in its analytics platform and may emerge as a SaaS offering that addresses the big data back end, operations management, and analytics requirements for SDDC management (a sketch of the self-learning baseline idea follows this list).
  • Orchestration and Automation – Cloud management vendors can provision virtual machines. But if you want to populate a VM with a web server, a Java server, a database server, or an actual application, you should automate the installation of those bits and put a process in place to ensure consistency over time. This is where vendors like Puppet, Chef, and Cloud Sidekick come into play. VMware’s partnership with, and investment in, Puppet is extremely significant on this front.
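
To illustrate the Infrastructure Performance Management point about sampling far more often than every 20 seconds, here is a minimal sketch in Python. The collect_latency function is a hypothetical stand-in for a real instrumentation call; the point is the tight sampling loop and the percentile summary, not any vendor’s API.

```python
import random
import statistics
import time

def collect_latency(datastore):
    """Hypothetical stand-in for a real collector call (SAN/NAS latency in ms)."""
    return random.uniform(0.5, 5.0)   # simulated measurement

def sample(datastore, interval=1.0, window=60):
    """Sample every `interval` seconds so short-lived spikes are not averaged away."""
    samples = []
    for _ in range(window):
        samples.append(collect_latency(datastore))
        time.sleep(interval)
    return {
        "datastore": datastore,
        "p95_ms": statistics.quantiles(samples, n=20)[18],  # ~95th percentile
        "max_ms": max(samples),
    }

print(sample("datastore-01", interval=0.1, window=20))
```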
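
For the Application Performance Management point about measuring response time directly rather than inferring it from resource utilization, here is a minimal sketch. The @timed decorator and the record() sink are hypothetical illustrations, not any APM vendor’s agent.

```python
import time
from functools import wraps

def record(transaction, elapsed_ms):
    """Hypothetical stand-in for shipping the measurement to an APM back end."""
    print(f"{transaction}: {elapsed_ms:.1f} ms")

def timed(transaction):
    """Decorator that measures end-to-end response time for one transaction type."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                record(transaction, (time.perf_counter() - start) * 1000)
        return wrapper
    return decorator

@timed("checkout")
def handle_checkout(order_id):
    """Stands in for real application work (e.g. a web request handler)."""
    time.sleep(0.05)
    return f"order {order_id} accepted"

handle_checkout(42)
```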
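
For the Cloud Management point, here is a minimal sketch of the service catalog pattern: the catalog entry is the single definition of a service, and provisioning simply turns that entry into infrastructure. The catalog format and the provision_vm() call are hypothetical, not any product’s API.

```python
# A catalog entry describes a service in one place; provisioning reads it.
CATALOG = {
    "small-web-server": {"cpus": 2, "memory_gb": 4, "network": "web-tier", "image": "ubuntu-12.04"},
    "large-db-server":  {"cpus": 8, "memory_gb": 32, "network": "db-tier", "image": "rhel-6"},
}

def provision_vm(name, spec):
    """Hypothetical stand-in for the orchestration call that actually creates the VM."""
    print(f"provisioning {name}: {spec}")

def provision(service, name):
    spec = CATALOG[service]   # the catalog is the single source of truth
    provision_vm(name, spec)

provision("small-web-server", "web01")
```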
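
Finally, for the Self-Learning Analytics point, here is a minimal sketch of the underlying idea: learn a baseline for each metric from its own history and flag sharp deviations, instead of relying on static thresholds. This is an illustration of the concept, not how Netuitive, Prelert, or CloudPhysics actually implement it.

```python
from collections import deque
import statistics

class Baseline:
    def __init__(self, history=120, tolerance=3.0):
        self.samples = deque(maxlen=history)   # rolling learning window
        self.tolerance = tolerance             # allowed deviation in standard deviations

    def observe(self, value):
        """Return True if the value is anomalous relative to learned behavior."""
        anomalous = False
        if len(self.samples) >= 30:            # only judge once enough history is learned
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9
            anomalous = abs(value - mean) / stdev > self.tolerance
        self.samples.append(value)
        return anomalous

cpu = Baseline()
for v in [20, 22, 19, 21, 20] * 10 + [85]:     # steady load, then a spike
    if cpu.observe(v):
        print("anomaly detected:", v)
```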

Summary

VMware will be the first vendor to deliver an SDDC. VMware’s SDDC Management Stack will be the first cohesive and complete management stack for that SDDC. But you can also construct your own management stack for your SDDC out of best of breed components, as outlined above.

If you are attending VMworld and you would like to learn more about this topic, then consider attending VCM4869 – Building the Management Stack for your Software Defined Data Center.

Bernd Harzog

Bernd Harzog is the Analyst at The Virtualization Practice for Performance and Capacity Management and IT as a Service (Private Cloud).

Bernd is also the CEO and founder of APM Experts, a company that provides strategic marketing services to vendors in the virtualization performance management and application performance management markets.

Prior to these two companies, Bernd was the CEO of RTO Software, the VP Products at Netuitive, a General Manager at Xcellenet, and Research Director for Systems Software at Gartner Group. Bernd has an MBA in Marketing from the University of Chicago.

2 comments for “Best of Breed SDDC Management Stack”

  1. Disgruntled
    July 9, 2013 at 1:25 AM

    Guys! Seriously?? Splunk as a Big Data repo? As long as their model for attaining results requires all nodes to be up all the time and their third tier of store (the ‘cold store’) has no meta cache in front of it (therefore requiring continual FS-level operations to run on the files therein), they are not a credible store.

    The operational overhead is just too much. Keep a few weeks of data for near term analysis, yes. But ‘Big Data’?? No. Go and use some form of MapReduce like the rest of the world.

  2. Bharzog
    July 9, 2013 at 9:44 AM

    Hello Disgruntled,

    Hopefully it was just this post that made you disgruntled, and you are not disgruntled all of the time!

    I agree with and understand the limitations that you mentioned with Splunk’s current approach. By the way, I think Splunk understands these issues as well, which is why they have announced Hunk, which layers their query engine, analytics, and U/I on top of Hadoop.

    Best Regards,

    Bernd Harzog
