One aspect of SDDC that does not get a lot of attention is Data Protection; instead, we concentrate on SDN and automation. Yet there is a clear marriage between Data Protection and SDDC that needs to be added to any architecture. As with all things, we start with the architecture. Our SDDC architecture should also include data protection, but what data are we really protecting? Within SDDC there are three forms of data: tenant, configuration, and automation. Without any one of them, we may not be able to reload our SDDC during a disaster. What really are these three types of data, what is required to capture them, and how can we add data protection into SDDC cleanly? Continue reading SDDC Data Protection
In Building a Management Stack for Your Software Defined Data Center, we proposed a reference architecture for how one might assemble the suite of management components that will be needed to manage a Software Defined Data Center (SDDC). In this post we take a look at the Operations Management portion of the reference architecture and the vendors that can provide this functionality.
The Need for a New Operations Management Vendor in Your SDDC Management Stack
So why, you ask, will an SDDC require a new approach to Operations Management, and therefore more than likely a new vendor for Operations Management? The reasons are driven by the fact that managing the operations of an SDDC will be dramatically different from managing a static and physical data center in the following respects:
- Legacy Operations Management products were built to the assumptions of servers dedicated to single applications, networks implemented solely in hardware, and usually a dedicated path from the database server to the storage array. An SDDC is based upon shared servers, networks implemented in both hardware and software, and potentially a shared and multiplexed path to the storage array.
- Legacy Operations Management solutions were built to assume systems that changed relatively infrequently. The SDDC is built to support private clouds and IT as a Service. The whole point of both private clouds and IT as a Service is to fully automate the process by which IT services are provisioned for end users. This means that the configuration and resource allocation in an SDDC will change whenever users want it to, since users will be provisioning workloads whenever they need to.
- For the above two reasons, you cannot just add VMware vSphere as a data source to a legacy Operations Management solution and expect to have something useful. Operations Management for a SDDC means getting different data, getting more of it, getting it more frequently, and doing different things with it than were done in the legacy physical case.
- For example, the whole notion of resource contention caused by N workloads running on one server simply does not exist in the physical world. Neither does the notion that new workloads will show up on a server in an automated manner at the discretion of a business constituent of the IT department.
- The SDDC is going to be concerned with the configuration and operation of all of the CPU, memory, networking, and storage resources underlying it. In the legacy world there were completely separate products for managing servers, switches, and storage arrays. The management of all four of these key resources will need to be combined into one Operations Management solution for the SDDC.
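Combining all four resources in one solution implies one normalized data model that spans domains. A minimal sketch of what that might look like (the field names and grouping function are illustrative, not any vendor's actual schema):

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ResourceSample:
    host: str       # physical or virtual host the sample came from
    domain: str     # "cpu", "memory", "network", or "storage"
    metric: str     # e.g. "utilization_pct"
    value: float
    ts: float       # collection time, epoch seconds

def by_host(samples):
    """Group cross-domain samples per host: the unified view that
    separate server, switch, and array tools cannot provide."""
    grouped = defaultdict(list)
    for s in samples:
        grouped[s.host].append(s)
    return grouped
```

With samples from all four domains flowing into one structure, cross-domain questions (is this host's CPU contention caused by storage latency?) become queries against one datastore rather than a manual correlation across three products.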
The Software Defined Data Center Management Stack Reference Architecture
Key Criteria for an SDDC Operations Management Solution
Since there are many vendors selling Operations Management products into the VMware and Hyper-V virtualization markets today, the most important thing to do is to evaluate these vendors on their future ability to expand their product scopes to include support for the SDDC. That would include the following key capabilities:
- Just about every Operations Management vendor supports more than one hypervisor today. At this point, support for at least VMware vSphere and Microsoft Hyper-V ought to be assumed as table stakes. Even if you are a 100% VMware vSphere shop today, you should at least get a statement of commitment for support of Microsoft Hyper-V and Red Hat KVM from your Operations Management vendor. There is nothing wrong with having more than one hypervisor. However, building a management stack as depicted above that differs for each of two or three hypervisors would re-create the management mess that characterizes Operations Management in the physical world for most enterprises.
- The ability to handle the scale and scope of your environment. This requirement produces drastically different results depending upon the size of your environment, the diversity of the hardware in your environment, and the nature of the workloads in your environment. At the low end (100 physical hosts), the idea is to end up with one simple-to-implement product that collects data from the standard management interfaces available at each layer of the SDDC and does appropriate analysis and presentation of that data. At the high end (5,000 to 10,000 hosts), commodity data is going to equal commodity results. You will want to invest in an Operations Management solution from a vendor that understands and has the ability to collect unique and valuable data with its own R&D efforts.
- Today’s Operations Management solutions focus primarily upon the management of physical and virtual servers. Little attention is paid to the virtual network that exists today in the form of the vSwitch, and the only attention that most vendors pay to storage is to consume the storage metrics that VMware makes available in the vSphere API. This will have to change dramatically. Managing the virtual network layer and the virtual storage layer will be much more demanding for Operations Management vendors than managing CPU and memory contention.
- Today, relatively few of VMware’s customers have fully implemented private cloud or IT as a Service environments. The point of the SDDC is to support the creation of these environments. So Operations Management solutions are going to have to significantly change to provide the level of management needed for large scale and dynamic systems.
- The combination of having to manage CPU, memory, networking, and storage, having to manage a large-scale environment, and having to cope with the constant changes driven by the automation in private clouds supporting IT as a Service will require different Operations Management solutions than those that we have today.
Who Could Provide Operations Management for the SDDC?
First let’s make a very important point. Since the SDDC does not exist yet, no one has an Operations Management product for an SDDC today. We have to wait for VMware to deliver upon the recently announced NSX network virtualization components, and upon the rumored but not yet announced storage virtualization projects. Given how things are unfolding, and have unfolded in the past, there are good reasons to hope that further announcements and delivery dates will be provided at VMworld this fall.
Given that no Operations Management product for an SDDC exists today, what we are left with is the ability to engage in informed speculation as to who might deliver such an Operations Management solution. Note that this is 100% speculation based upon an analysis of each vendor’s strategy in the Operations Management space today.
VMware is a leader in the Operations Management business for virtualized data centers today with its vCenter Operations product. Since VMware is the only vendor on the planet who has announced the intention to build and deliver an SDDC, it is a reasonable assumption that VMware will evolve vCenter Operations to be able to manage its own SDDC. Despite the fact that it seems obvious that VMware would go down this path, there are tremendous challenges for VMware as it expands the scope of vCenter Operations in this manner. Some of these challenges were outlined in our Big Data for the SDDC post. Basically, VMware has to start by ripping the existing data store out of vCenter Operations and replacing it with something most likely built by the Log Insight team from Pattern Insight. Next, VMware has to add the relevant metrics at the relevant level of frequency for the virtual networking and virtual storage layers. This is going to require the new big data back end, since there will be so many new metrics arriving at such a rate that the existing data store would have no chance of keeping up. Finally, the analytics in vCenter Operations will have to go through a significant evolution to deal with this new torrent of data and to be able to provide effective cross-domain root cause analysis. VMware likely understands each of these challenges very well. However, VMware is unlikely to address all of them across the diversity of its own customer base, leaving plenty of room for third-party vendors.
If you are looking to throw out your legacy physical Operations Management solution and replace it with something that is built from the ground up for the SDDC and for its private cloud and IT as a Service use cases, then Zenoss would be a good place to start. Operations Management starts with the ability to manage events and to manage the impact of events upon the availability of the physical and virtual environment. Zenoss has a completely modern event management system, and if your environment is of the scale and diversity that demands event management, then Zenoss is a great place to start.
One of the key points behind building and using an SDDC will be that it will be possible to automate many things that are not or cannot be automated today. VMTurbo uniquely solves the problem of fully automating the process by which the important workloads in your environment are assured that they get the resources that they need to meet their SLAs. VMTurbo does this by allowing you to prioritize your workloads, and then by using the virtual CPU, virtual memory, Network I/O Control, and Storage I/O Control interfaces in vSphere to ensure that the highest-priority workloads get the resources that they need. This is precisely the kind of approach that will be essential to the smooth operation of the SDDC, as there will be no way for humans to keep up with resource allocation decisions as private clouds and IT as a Service get deployed in your SDDC.
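VMTurbo's actual algorithms are proprietary, but the core idea of priority-driven allocation can be illustrated with a rough sketch: divide a fixed pool of vSphere-style shares among workloads in proportion to an administrator-assigned priority (the share pool size and priority scheme here are assumptions for illustration):

```python
def allocate_shares(workloads, total_shares=10000):
    """Distribute a fixed pool of shares in proportion to workload priority.

    Each workload is a dict with "name" and an integer "priority";
    higher priority means a larger slice of the share pool. A real tool
    would then apply these values via the hypervisor's resource controls.
    """
    total_priority = sum(w["priority"] for w in workloads)
    return {
        w["name"]: total_shares * w["priority"] // total_priority
        for w in workloads
    }
```

For example, a priority-3 database and a priority-1 web tier splitting a 10,000-share pool would receive 7,500 and 2,500 shares respectively. The point is not the arithmetic but the automation: once priorities are declared, the allocation decisions need no human in the loop.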
Like VMTurbo, Cirba comes at the Operations Management problem for the SDDC with a heavy dose of analytics. However, the focus of Cirba is more upon making sure that the physical capacity of the infrastructure underlying the environment is properly utilized and allocated. This will prove to be an essential capability for the management of the SDDC, as all of the automation in the world will end up being useless if the underlying physical capacity across the four key resource areas does not exist or is not properly allocated. Conversely, the tendency to over-provision in the name of reducing risk is likely to be just as strong for the SDDC as it has been historically for physical environments, making Cirba an essential cost management tool.
The Quest Software Division of Dell
When Quest Software bought vKernel, two market leading products were brought together under one roof. One was the vFoglight product from Quest. The other was the vOperations product from vKernel. These products have now been combined into the Quest vOPS product line. This product line is unique in that it retains the two key aspects of the parent products. On the low end the product is extremely easy to try, implement and purchase (a legacy of vKernel). At the high end (a legacy of vFoglight and the rest of the Foglight product line), the product is a fully enterprise capable solution that can be combined with numerous other Quest offerings to solve complex end-to-end and cross stack Operations Management issues.
Reflex Systems is unique in the Operations Management space in that the company long ago decided to architect its solution for very large environments and for large amounts of rapidly arriving data. Reflex Systems is one of the few Operations Management vendors that can collect the operations and configuration data in a VMware environment directly from each vSphere host every 15 seconds, as opposed to waiting for the 5-minute roll-up of that data from the vSphere API. The ability to do this for the largest of VMware’s customers, supported by the analytics required to analyze this data and a user interface capable of making sense of the quantity of the data and the scale of its source, makes Reflex Systems a unique Operations Management vendor today. The foundations upon which the Reflex Systems product is built position the company extremely well for Operations Management of the forthcoming SDDC.
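The collection loop itself is conceptually simple; the hard part is the volume that a 15-second interval produces. A sketch of such a collector, where `fetch_host_stats` is a stand-in for a direct per-host API call (not Reflex's actual implementation):

```python
import time
from collections import deque

def fetch_host_stats(host):
    # Stand-in for a direct per-host query of real-time counters;
    # the metric names and values here are placeholders.
    return {"host": host, "ts": time.time(), "cpu_mhz": 1200, "mem_mb": 4096}

def collect(hosts, interval=15, cycles=4, buffer_size=100_000):
    """Poll every host once per interval, keeping a bounded in-memory
    buffer that a downstream analytics tier would drain."""
    buf = deque(maxlen=buffer_size)
    for _ in range(cycles):
        for h in hosts:
            buf.append(fetch_host_stats(h))
        # time.sleep(interval)  # omitted so the sketch runs instantly
    return buf

samples = collect(["esx01", "esx02"], cycles=4)
```

At 15-second resolution each host yields 20 samples per metric in the time a single 5-minute roll-up delivers one, which is exactly why the back end and analytics, not the polling, are the differentiator.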
What if the right way to approach the problem of collecting the Operations Management data for an SDDC and then analyzing that data is to use the approach that Google took to collecting and analyzing data for its own data centers? If you are willing to consider that possibility, then consider CloudPhysics, a vendor with a cloud-hosted (delivered as a service) operations management solution. One of the key reasons that CloudPhysics may be able to provide something of extraordinary value is that the company has a strategy of applying Google-quality analytics to Google-size data sets. The analytics come from a world-class team of people, some of whom previously worked at Google. The data today is collected by virtual appliances installed at CloudPhysics’ customer sites (in their respective VMware environments). This puts CloudPhysics in the unique position of being able to do analytics across the operations management data from many customers, which will likely result in features and benefits simply not possible from on-premise solutions.
Splunk is in fact the only vendor on the planet from whom you can purchase an on-premise big data datastore, one that today is being populated not just by various logs, but also, by virtue of the Splunk App for VMware and the Splunk App for Citrix, by true Operations Management data for these environments. In fact, if you go to SplunkBase and do a search on “virtual,” you will find 11 different operations management applications feeding data into Splunk. Splunk has a strategy of being the management data platform across operations management, application performance management, and security, and it certainly bears watching as it evolves its strategy and product offerings in the direction of the SDDC.
If your current virtualization environment or your future SDDC spans more than one hypervisor, but your primary environment is VMware, then you really need to consider Hotlink. Hotlink offers something different from any other Operations Management solution profiled here. Hotlink lets your VMware administrators administer Hyper-V, KVM, and Amazon EC2 environments from within the vCenter management console in the exact same manner as they manage a vSphere environment. This gives rise to a new meaning for cross-platform. In Hotlink’s world, cross-platform is not just that an Operations Management or Cloud Management solution works across two or more hypervisors. In Hotlink’s world, cross-platform means that you can use one management console (vCenter) to manage all of these environments, migrate workloads across these environments, and leverage your vSphere management conventions (like snapshots) across all of these environments.
Count how many management agents of various types (operations management, application performance management, security, backup, etc.) you have deployed in each virtual machine in your environment. Now multiply that by the number of VMs in your environment. If the thought of having to manage and update all of those agents (and of preventing their misbehavior from affecting your environment) gives you a headache, then Intigua is for you. Intigua applies application virtualization techniques to the management agents in your virtualized server environment (think App-V for management agents on servers). This makes it much easier to manage the agents in your environment and allows you to set policies that prevent those agents from harming your environment.
ManageEngine and SolarWinds
If all of the above sounds too complex and too expensive for you because your environment is just not that large and just not that complex, then you need to focus upon solutions that rely upon the management data available from standard management APIs (the vSphere API, WMI, SNMP, SMI-S, etc.) and that are easy to evaluate, easy to implement, and easy and affordable to purchase. If you consider yourself to be an SMB or an SME, then the products from ManageEngine and SolarWinds are for you. The questions here are how quickly these products deliver value to you and how little manual configuration work you have to do to get that value. Most importantly, how little ongoing maintenance work will you have to do to keep your environment up and running?
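The appeal of the standard-API approach is that one lightweight product can normalize whatever each layer exposes. A sketch of that pattern, where each adapter function is a hypothetical stand-in for one standard interface (vSphere API, SNMP, etc.) returning one normalized shape:

```python
# Hypothetical adapters: each wraps one standard management interface
# and returns the same normalized record, so a single small product
# can cover hosts and network devices without custom collectors.
def poll_vsphere(host):
    # Stand-in for a vSphere API query of host utilization.
    return {"source": "vsphere", "host": host, "cpu_pct": 42.0}

def poll_snmp(device):
    # Stand-in for an SNMP GET of a switch CPU utilization OID.
    return {"source": "snmp", "host": device, "cpu_pct": 17.5}

def poll_all(targets):
    """Fan out across whichever standard interface each target speaks.

    targets is a list of (interface_kind, target_name) pairs.
    """
    pollers = {"vsphere": poll_vsphere, "snmp": poll_snmp}
    return [pollers[kind](name) for kind, name in targets]
```

Because everything comes back in one shape, the analysis and presentation layer stays simple, which is exactly the trade-off an SMB or SME wants: commodity data, minimal configuration, fast time to value.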
The SDDC is going to require a new approach to Operations Management. Vendors with effective Operations Management solutions for today’s virtualized data centers are in the best position to expand their offerings for the SDDC. Legacy vendors face a complete rewrite of their products and the adoption of a new business model (easy to try and easy to buy) that would destroy them financially; they will therefore be unable to react to the SDDC either technically or financially.
Which cloud service will be king of the cloud? Cloud computing has taken off in functionality and practicality over the last few years, so that now we have three fully defined service models of cloud computing:
- Infrastructure as a service (IaaS)
- Platform as a service (PaaS)
- Software as a service (SaaS) Continue reading King of the Cloud
In 2011, we asked whether Client Hypervisors would drive the Next Generation Desktop. Yet other desktop virtualization industry experts, such as Ron Oglesby, decided the technology was a dead man walking, writing off Type 1 Client Hypervisors.
Fight? Fight? Fight?
While VMware moved away from client hypervisors, they had to agree that an end user compute device strategy must encompass non-VDI. Their Mirage technology can be considered desktop virtualization, but it is not a client hypervisor. Client hypervisor vendors such as Citrix (which subsumed Virtual Computer’s NxTop), MokaFive, Parallels, and Virtual Bridges have been joined by Zirtu. Organisations like WorldView look to innovate on desktop virtualization through containers rather than full virtualization.
Tablets. Touch-screen-capable laptops. Hybrid devices with detachable screens. Netbooks might be dead, or they could just be resting. The presence of tablets has undeniably shaken the netbook market, but businesses still need powerful, capable laptops.
Bring Your Own Pencil aside, there is still a need to manage “stuff”: there are still large and small organisations that need to manage the delivery of IT, including the end device. The question remains: how are devices, and the all-important data and applications on them, managed? Hosted and session-based desktops have their place, but offline-capable device requirements will remain. Is Intelligent Desktop Virtualization the same as client hypervisors?
In Building a Management Stack for Your Software Defined Data Center, we proposed a reference architecture for how one might assemble the suite of management components that will be needed to manage a Software Defined Data Center (SDDC). In this post we take a look at the need for that management suite to be supported by a multi-vendor big data datastore, and take a look at who might provide such a data store.
The Need for a Multi-Vendor Management Data Store in Your SDDC Management Stack
So why, you ask, will an SDDC require a set of management products that will in turn require a multi-vendor big data back end? The reasons are as follows:
- The whole point of moving the configuration and management of compute, memory, networking and storage out of the respective hardware and into a software layer abstracted from the hardware is to allow for configuration and management changes to happen both more quickly and more automatically (which means very quickly). Each configuration change or policy change is going to create a blizzard of management data.
- If you look at all of the horizontal boxes in our Reference Architecture (below), each one of them, along with the vertical IT Automation box, will be generating data.
- The rate of change in the SDDC will be high enough so as to require fine grained and very frequent monitoring at every layer of the infrastructure.
- Combining the number of layers with the rate of change and the need for fine-grained, high-frequency monitoring (5 minutes is an eternity) creates a big data problem.
- Finally, the need to be able to do cross-layer root cause analytics (where in the software or hardware infrastructure is the cause of that application response time problem?) means that the root cause analysis process has to cross domains and layers. This in and of itself calls for a common data store across management products.
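A back-of-envelope calculation makes the scale concrete. The host count comes from the high end discussed earlier in this series; the per-layer metric count is an assumption for illustration, as real counts vary widely by product:

```python
hosts = 5000             # high-end environment from earlier in this series
layers = 4               # compute, memory, networking, storage
metrics_per_layer = 50   # assumed per-host figure; real counts vary widely
interval_s = 15          # fine-grained polling, vs. the 300s "eternity"

polls_per_day = 86400 // interval_s          # 5,760 collection cycles a day
points_per_day = hosts * layers * metrics_per_layer * polls_per_day
# points_per_day == 5,760,000,000 data points per day
```

Even with these conservative assumptions, the result is billions of data points per day for a single environment, which is squarely a big data problem, not something a conventional relational metrics store was built to absorb.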
The Software Defined Data Center Management Stack Reference Architecture
Who Could Provide the Multi-Vendor Big Data Repository?
There are two basic criteria for being able to provide such a repository. The first is that you have to have one, or have the intention to build one. The second is that since it is multi-vendor, you have to have the technical capability to, and the business model to partner with the vendors whose products will feed this datastore. The rest of this post is entirely speculative in nature as it is based upon who could do what, not upon who is doing what. To be clear, no vendor listed below has told us anything about what they intend to do in this regard. The rest of this post is based entirely upon what people are shipping today and the author’s speculation as to what might be possible.
If there is one vendor who has an early lead in filling this role, it would be Splunk. Splunk is in fact the only vendor on the planet from whom you can purchase an on-premise big data datastore that is today, based upon shipping and available products, being populated by data from management products from other vendors. In fact, if you go to SplunkBase and do searches on things like APM, monitoring, security, and operations, you will find a wide variety of Splunk-written and vendor-written applications that feed data into Splunk. Now it is important to point out that today, when a vendor like ExtraHop Networks or AppDynamics feeds data into Splunk, they are not making Splunk THE back-end datastore for their products. They are just feeding a subset and a copy of their data into Splunk. But this is a start, and it puts Splunk further down this road than anyone else. Needless to say, if the vision of the multi-vendor datastore is correct, and Splunk is to become the vendor, or one of the vendors, who provides it, then Splunk is going to have to entice a considerable number of software vendors to trust Splunk to perform a role that no vendor today trusts any other vendor to perform.
In VMware Fleshes Out SDN Strategy with NSX, we went through how VMware is combining Nicira into its Network Virtualization offering, NSX. The VMware announcement included a link to a blog post, “Open Source, Open Interfaces, and Open Networking,” which contained the following fascinating statement:
“Another area of focus for an open networking ecosystem should be defining a framework for common storage and query of real time and historical performance data and statistics gathered from all devices and functional blocks participating in the network. This is an area that doesn’t exist today. Similar to Quantum, the framework should provide for vendor specific extensions and plug-ins. For example, a fabric vendor might be able to provide telemetry for fabric link utilization, failure events and the hosts affected, and supply a plug-in for a Tool vendor to query that data and subscribe to network events”.
Needless to say, it is highly unlikely that VMware would choose to make the current datastore for vCenter Operations into the “framework for common storage and query of real time performance data.” Rather, it is much more likely that VMware would build its own big data datastore with the people and the assets that VMware acquired when it acquired the Log Insight technology and team from Pattern Insight. VMware therefore clearly has the technology building blocks and the people to pull this off. You could also argue that they would not have made this acquisition if there were no intention to go at least somewhat in this direction. The key challenge for VMware will then be the multi-vendor part. VMware has no relationship of technical cooperation with any management software company other than Puppet Labs, so this is clearly an area where VMware has a long way to go.
New Relic is the hands down market leader for monitoring Java, .NET, Ruby, Python, and PHP applications in the cloud. New Relic offers cloud hosted APM as a Service and has gone in four years from a brand new company to now having more organizations using its product than the rest of the APM industry combined. New Relic recently raised $75M from top tier investors and is rumored to be positioning itself for an IPO in the 2014-2015 timeframe. New Relic already makes its data available to third parties in its partner program via a REST API. It is not much of a stretch for New Relic to consider becoming the management platform in the cloud, partnering with adjacent vendors and becoming a vendor of the multi-vendor cloud datastore. Again all of this is pure speculation at this point.
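Making data available to partners via REST is what makes a multi-vendor platform plausible. As a hedged sketch of what a partner-side pull might look like, here is a function that builds (but does not send) a request against a New Relic-style endpoint; the URL path, parameters, and API-key header are illustrative placeholders, not the documented API:

```python
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; a real integration would use a partner key

def build_metrics_request(account_id, app_id):
    """Build, but do not send, a request against a hypothetical
    New Relic-style REST endpoint for application metrics.

    The path and header name here are illustrative assumptions, not
    the vendor's documented interface.
    """
    url = (
        "https://api.newrelic.com/api/v1/accounts/%s/applications/%s/data.json"
        % (account_id, app_id)
    )
    return urllib.request.Request(url, headers={"X-Api-Key": API_KEY})
```

The design point is that any adjacent vendor can consume the same feed with a few lines of HTTP client code, which is what would let New Relic's cloud datastore become a shared platform rather than a single-product back end.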
The Pivotal Initiative
The Pivotal Initiative is a new company formed with assets and people from EMC and VMware, led by former VMware CEO Paul Maritz. These assets consist of the application platform PaaS products from VMware (Gemfire, Spring, vFabric, and CloudFoundry) and the big data assets from EMC (Greenplum). The stated ambition is to deliver a way to build and deploy big data applications that is radically better than the incumbent methods, tackling giants like IBM, Microsoft, and Oracle in the process. This means that the focus of both the application development assets and the big data assets is most likely to be upon solving business problems for customers, not IT management problems for customers. However, it would not be inconceivable for a third-party company to license these technologies from Pivotal and build an offering targeting the multi-vendor management stack use case.
Consider the possibility that this multi-vendor big data datastore is in fact not on-premise, but in the cloud. If you are willing to consider that possibility, then it is not much of a stretch to consider that CloudPhysics, a vendor with cloud-hosted (delivered as a service) operations management solutions, might step into this fray. One of the key reasons that CloudPhysics may be able to provide something of extraordinary value is that the company has a strategy of applying Google-quality analytics to Google-size data sets. The analytics come from a world-class team of people, some of whom previously worked at Google. The data today is collected by virtual appliances installed at CloudPhysics’ customer sites (in their respective VMware environments). If CloudPhysics is already collecting data across customers and putting it in its cloud, it is not too huge a stretch to consider the possibility that other vendors who also deliver their value as a service could partner up with CloudPhysics, combine their respective sets of data, and produce a 1+1=3 scenario for joint customers.
AppNeta is today a market-leading vendor of a cloud-hosted service, PathView Cloud, that measures the performance of the wide area network between the users and branch offices of an enterprise and the enterprise’s back-end data center. The back end is a true big data back end, built around true big data technologies. AppNeta is branching out into APM with its TraceView offering. But network performance data and application performance data are just parts of the complete set of data that will be generated by the SDDC, and about the SDDC, by various management products. AppNeta does not today have a partner program to attract third-party data to its management data cloud, but who knows what the future holds.
Boundary is an APM vendor with a cloud-hosted big data back end that today focuses upon collecting statistics from the network layer of the operating system that supports applications running in clouds. If you think of New Relic as the vendor who is monitoring your application in the cloud, you can think of Boundary as the vendor who should be monitoring the interaction of the operating system underlying your application with the cloud. Boundary has no partner program today, and no ability to add third-party vendor data to its cloud datastore today, but again, who knows what the future might hold.
The SDDC and the Cloud are going to require a new SDDC Management Stack that will need to be based upon a multi-vendor big data datastore. There will likely be on-premise and cloud-hosted versions of these datastores. Splunk, VMware, New Relic, The Pivotal Initiative, CloudPhysics, AppNeta, and Boundary are all excellent hypothetical suppliers of such a datastore.
Soon the backup power will be available for our new datacenter, and the redesign to make use of VMware vCloud Suite is nearing completion. Our full private cloud will then be ready for our existing workloads. These workloads, however, now run within a XenServer-based public cloud. So the question is: do we stay in a poorly performing public cloud (mentioned in our Public Cloud Reality series), or move back to our own private cloud? As the Clash put it, “Should I Stay or Should I Go Now?” Continue reading Public Cloud Reality: Do we Stay or Do We Go?