IT as a Service

IT as a Service (ITaaS) covers private clouds, hybrid clouds, and the cloud management offerings used to create and manage these clouds. This includes coverage of Infrastructure as a Service (IaaS) private and hybrid cloud offerings, Platform as a Service (PaaS) private and hybrid cloud offerings, and Software as a Service (SaaS).

Emerging areas like Desktop as a Service (DaaS), Storage as a Service, and Applications as a Service are also covered. The key issues covered include which enterprise applications and use cases are appropriate for private and hybrid clouds, and how enterprises should select the cloud management offerings that will be used to manage these various types of cloud services. Covered vendors include VMware (vCloud Automation Center), VirtuStream, CloudBolt Software, Intigua, ElasticBox, ServiceMesh, Cloudsidekick, and Puppet Labs.

OpenStack and the Software Defined Data Center (SDDC)

The OpenStack Summit this week continued to fan the flames of the software-defined data center. The software-defined data center is simply a term for replacing traditional data center hardware functionality with the same features implemented in software, running on commodity x86 servers. While software-defined approaches to data center features are at least nominally less expensive than their hardware counterparts, the real promise of the approach is flexibility and ease of management with high levels of integration. Reconfiguring a network to support the security requirements of a new application is now just a function of software and APIs. Expanding storage is just adding another node with more storage attached, and the cluster compensates automatically. Even things like firewall rules and load balancer configurations can now be stored as templates along with the applications, to be provisioned in minutes. Continue reading OpenStack and the Software Defined Data Center (SDDC)
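To make "just a function of software and APIs" concrete, here is a minimal sketch that opens HTTPS to a new application tier through the OpenStack Networking (Neutron) REST API. The controller endpoint, token, and names are placeholders, not values from any particular deployment; in practice the token would come from Keystone.

```python
# Sketch: reconfiguring the network for a new application via the
# OpenStack Networking (Neutron) REST API. Endpoint, token, and names
# are placeholders.
import requests

NEUTRON = "http://controller:9696/v2.0"   # placeholder endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>",
           "Content-Type": "application/json"}

# 1. Create a security group for the new application tier.
sg = requests.post(f"{NEUTRON}/security-groups", headers=HEADERS, json={
    "security_group": {"name": "web-tier",
                       "description": "HTTPS for the new application"}
}).json()["security_group"]

# 2. Allow inbound HTTPS to that group; no switch or firewall console needed.
requests.post(f"{NEUTRON}/security-group-rules", headers=HEADERS, json={
    "security_group_rule": {"security_group_id": sg["id"],
                            "direction": "ingress", "protocol": "tcp",
                            "port_range_min": 443, "port_range_max": 443,
                            "ethertype": "IPv4"}
})
```

The same two calls can be captured in a template per application, which is exactly the provision-in-minutes pattern described above.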

News: GoGrid and Racemi Start the Cloud Onboarding Arms Race

Every public cloud vendor looks longingly at all of the virtualized workloads running in VMware-based data centers owned by enterprises worldwide and says, “if only we could migrate those workloads to our cloud.” Of course, this dream on the part of the public cloud vendors is VMware’s nightmare. Now GoGrid has announced a partnership with Racemi that allows customers to migrate their workloads from any physical or virtual server platform to the GoGrid cloud. Continue reading News: GoGrid and Racemi Start the Cloud Onboarding Arms Race

SDDC and the Ever Expanding Control Plane

The software-defined data center has the potential to expand the control plane well beyond anyone’s control, for the simple reason that we do not yet have a unified control mechanism for disparate hardware (networking, storage, and compute), disparate hypervisors (vSphere, KVM, Xen, Hyper-V), new types of hypervisors (storage and networking), and new ideas for managing the SDDC at scale. All of these end up on the control plane of a software-defined data center. In addition, we cross multiple trust zones within that control plane, such as going from user-controlled portals to hypervisor management constructs. Add to this the ever-increasing number of APIs, and we have a very hard-to-secure environment. Continue reading SDDC and the Ever Expanding Control Plane

Virtualization News for 4/1/2013

The following important events and vendor/organization announcements have occurred today, 4/1/2013:

  1. VMware announced its intention to become a hypervisor-only company, and that it was therefore shedding all non-core assets, including the end user/mobile division, the hybrid cloud division, and the management software division.
  2. Intel announced that it will implement the hypervisor in the next generation of its x86 server platform chips, making software hypervisors completely unnecessary. Intel further announced that the Atom chip is its future strategic chip architecture.
  3. Dell announced that it has ported the now unnecessary VMware hypervisor to the Atom chipset and will be using this chipset in all future desktop, server, laptop, tablet, and phone offerings.
  4. Microsoft announced that it is abandoning Windows, will adopt open source Ubuntu as its strategic operating system, will cease all further development on Windows, will port all products and services that used to run only on Windows to Ubuntu, and will adopt the open source KVM hypervisor as its future virtualization layer.
  5. EMC announced that “the day of storage virtualization is here”. EMC further announced that it was abandoning its hardware storage business and would now only sell storage virtualization software at a price of $1 per terabyte per year.
  6. The US Federal Government announced that due to its previous investment in legacy and now worthless IT hardware and software assets that it was declaring bankruptcy in order to remove these now worthless assets from its balance sheet. The US government further announced that it would be using Amazon EC2 for all future computing needs.
  7. Amazon announced that due to demand from the government for its services, it would no longer offer commercial customers any kind of SLA.
  8. Amazon’s commercial customers cheered this move as recognition of the fact that Amazon’s SLAs were worthless in the first place.
  9. CA, IBM, and BMC announced that they are finally abandoning their mainframe systems management software businesses to focus entirely upon Intel x86-based systems software, just as Intel announced the move from x86 to Atom, ensuring another 20-year legacy systems management software revenue stream for CA, IBM, and BMC.
  10. Adobe announced a digital signature program for PDF files ensuring that customers would never have to print a PDF, sign it, scan it and then email it again.
  11. HP announced that it was going to go “back to its roots” and become just a vendor of printers. HP then announced that it was acquiring Adobe so that it could become the market leader in the printing of the PDF files that Adobe just said would never have to be printed again.
  12. Bill Gates fired Steve Ballmer as the CEO of Microsoft and replaced Steve with himself in this role. Gates then decided to focus Microsoft entirely upon improving the primary and secondary educational system in America and told every Microsoft employee to get a job as a teacher or else they would not get paid.
  13. Steve Ballmer decided to start a new professional sports league focused upon the throwing of chairs.
  14. Veeam announced that it has backed up all of the data in the world, making further backups of any other data unnecessary.
  15. Splunk announced that it has indexed all of the data in the world that Veeam backed up and announced that its future business model was a fee of $1 for each query against that database.
  16. Google announced that it now knows everything about every person in the world that it needs to know. Google further announced that it would open source this data store so that no one could accuse Google of “doing evil”.
  17. Paul Maritz’s new company, The Pivotal Initiative, announced a point-and-click application development interface that allows any code monkey anywhere in the world to develop any desired big data application against the respective data stores of Veeam, Splunk, and Google in less than one hour.
  18. Cisco announced that it was abandoning the switch hardware and router hardware businesses and would now be only a vendor of software defined switches and routers at a cost of $1 per software switch and router port per year.
  19. New Relic announced that it was changing its name to Byru, an anagram of Ruby, which replaces New Relic, an anagram of Lew Cirne, the founder of the company. The company stated that this new name was designed to broaden its appeal beyond the initial 36,000 customers who are personal fans of Lew Cirne.
  20. AppDynamics announced that it was changing its name to StaticApps, because it has discovered that moving applications around hurts their response time and performance.
  21. VMTurbo announced that it has exhumed the body of Milton Friedman, put his brain through a CAT scan, and discovered an algorithm that perfectly allocates IT resources to their highest and best uses across all customers and providers in the world based upon global supply and demand curves.
  22. SolarWinds announced that they were changing the name of the company to MoonWinds, because there are no winds on the moon, in the hope of eliminating all of the barriers to the sale of their products.
  23. ManageEngine announced that they were exiting the business of managing computer systems so as to focus fully on the brand equity of the “Engine” in their product name. The new company will be called CarEngine, and will allow you to manage the engine of your car from your smartphone.
  24. AppSense announced that, having virtualized the user, the next frontier was to virtualize the significant others of every user in its installed base. However, AppSense discovered that abstracting users from each other did not generate any revenue except in the case of impending divorces, which turned AppSense into a law firm that advertises on television.
  25. All of the software start-ups in Silicon Valley that did not want to own servers decided to buy coffee makers with Intel x86 processors, creating a “shadow IT” server infrastructure in these software start-ups.
  26. IBM, CA, and BMC announced a growth strategy of managing these new farms of x86-based coffee makers.

Summary

April Fools 2013. Nothing in this post is true. If anything in this post becomes true, then we are all fools for not foreseeing it.

Client Hypervisors: Intelligent Desktop Virtualization too clever for its own good?

In 2011, we asked whether Client Hypervisors would drive the Next Generation Desktop. Yet other desktop virtualization industry experts, such as Ron Oglesby, decided the technology was a dead man walking, writing off Type 1 client hypervisors.

Fight? Fight? Fight?

While VMware moved away from client hypervisors, it had to agree that an end-user compute device strategy must encompass more than VDI. Its Mirage technology can be considered desktop virtualization, but it is not a client hypervisor. Client hypervisor vendors such as Citrix (which subsumed Virtual Computer’s NxTop), MokaFive, Parallels, and Virtual Bridges have been joined by Zirtu. Organisations like WorldView look to innovate on desktop virtualization through containers rather than full virtualization.

Tablets. Touch-screen-capable laptops. Hybrid devices with detachable screens. The netbook might be dead, or it could just be resting. The presence of tablets has undeniably shaken the netbook market, but businesses still need powerful, capable laptops.

Bring Your Own Pencil aside, there is still a need to manage “stuff”: there are still large and small organisations that need to manage the delivery of IT, including the end device. The question remains: how are devices, and the all-important data and applications on them, managed? Hosted and session-based desktops have their place, but offline-capable device requirements will remain. Is Intelligent Desktop Virtualization the same as client hypervisors?

Continue reading Client Hypervisors: Intelligent Desktop Virtualization too clever for its own good?

The Big Data Back End for the SDDC Management Stack

In Building a Management Stack for Your Software Defined Data Center, we proposed a reference architecture for assembling the suite of management components needed to manage a Software Defined Data Center (SDDC). In this post, we look at the need for that management suite to be supported by a multi-vendor big data datastore, and at who might provide such a datastore.

The Need for a Multi-Vendor Management Data Store in Your SDDC Management Stack

So why, you ask, will an SDDC require a set of management products that in turn require a multi-vendor big data back end? The reasons are as follows:

  1. The whole point of moving the configuration and management of compute, memory, networking, and storage out of the respective hardware and into a software layer abstracted from that hardware is to allow configuration and management changes to happen both more quickly and more automatically (which means very quickly). Each configuration or policy change is going to create a blizzard of management data.
  2. If you look at the horizontal boxes in our Reference Architecture (below), each of them, along with the vertical IT Automation box, will be generating data.
  3. The rate of change in the SDDC will be high enough to require fine-grained and very frequent monitoring at every layer of the infrastructure.
  4. Just combining the number of layers with the rate of change and the need for fine-grained, high-frequency monitoring (5 minutes is an eternity) creates a big data problem; see the sizing sketch after this list.
  5. Finally, the need to be able to do cross-layer root cause analytics (where in the software or hardware infrastructure is the cause of that application response time problem?) means that the root cause analysis process has to cross domains and layers. This in and of itself calls for a common data store across management products.
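To put rough numbers behind the big data claim in point 4, here is a back-of-the-envelope sizing sketch. Every figure in it (host count, metrics, sample interval, bytes per sample) is an assumption chosen for illustration, not a measurement from any real SDDC.

```python
# Back-of-the-envelope sizing for SDDC monitoring data.
# All figures below are illustrative assumptions, not measurements.
hosts            = 1_000   # physical hosts in the data center
vms_per_host     = 20
layers           = 4       # compute, storage, network, application
metrics_per_vm   = 50      # metrics collected per VM, per layer
interval_seconds = 10      # far finer-grained than the 5-minute "eternity"

samples_per_day = (hosts * vms_per_host * layers * metrics_per_vm
                   * (86_400 // interval_seconds))
bytes_per_day = samples_per_day * 16   # assume ~16 bytes per timestamped sample

print(f"{samples_per_day:,} samples/day, about {bytes_per_day / 1e12:.1f} TB/day")
```

Under these assumptions, the monitoring layer alone produces tens of billions of samples and roughly half a terabyte per day, before a single log line or configuration event is stored.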

The Software Defined Data Center Management Stack Reference Architecture

[Diagram: SDDC Management Stack Reference Architecture]

Who Could Provide the Multi-Vendor Big Data Repository?

There are two basic criteria for being able to provide such a repository. The first is that you have to have one, or have the intention to build one. The second is that, since it is multi-vendor, you have to have the technical capability and the business model to partner with the vendors whose products will feed this datastore. The rest of this post is entirely speculative: it is based upon who could do what, not upon who is doing what. To be clear, no vendor listed below has told us anything about its intentions in this regard; everything that follows is based upon what is shipping today and the author’s speculation as to what might be possible.

Splunk

If there is one vendor with an early lead in filling this role, it is Splunk. Splunk is in fact the only vendor on the planet from whom you can today purchase an on-premise big data datastore, based upon shipping and available products, that is populated by data from other vendors’ management products. If you go to SplunkBase and search on terms like APM, monitoring, security, and operations, you will find a wide variety of Splunk-written and vendor-written applications that feed data into Splunk. It is important to point out that today, when a vendor like ExtraHop Networks or AppDynamics feeds its data into Splunk, it is not making Splunk THE back-end datastore for its products; it is just feeding a subset and a copy of its data into Splunk. But this is a start, and it puts Splunk further down this road than anyone else. Needless to say, if the vision of the multi-vendor datastore is correct, and Splunk is to become one of the vendors who provide it, then Splunk will have to entice a considerable number of software vendors to trust it to perform a role that no vendor today trusts any other vendor to perform.
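As an illustration of the “feed a copy into Splunk” pattern, here is a minimal sketch using the Splunk Python SDK. The host, credentials, index name, and event format are placeholders, and a real vendor integration would batch and structure its events rather than submit them one at a time.

```python
# Sketch: feeding a copy of a management event into a Splunk index
# using the Splunk Python SDK (pip install splunk-sdk).
# Host, credentials, index name, and event fields are all placeholders.
import splunklib.client as client

service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="<password>")

index = service.indexes["apm_events"]   # assumes the index already exists
index.submit("response_time_ms=842 app=checkout tier=web",
             sourcetype="vendor:apm", host="collector-01")
```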

VMware

In VMware Fleshes Out SDN Strategy with NSX, we went through how VMware is combining Nicira into its network virtualization offering, NSX. The VMware announcement linked to a blog post, “Open Source, Open Interfaces, and Open Networking”, which contained the following fascinating statement:

Statistics Collection & Telemetry

“Another area of focus for an open networking ecosystem should be defining a framework for common storage and query of real time and historical performance data and statistics gathered from all devices and functional blocks participating in the network. This is an area that doesn’t exist today. Similar to Quantum, the framework should provide for vendor specific extensions and plug-ins. For example, a fabric vendor might be able to provide telemetry for fabric link utilization, failure events and the hosts affected, and supply a plug-in for a Tool vendor to query that data and subscribe to network events”.

Needless to say, it is highly unlikely that VMware would choose to make the current datastore for vCenter Operations into the “framework for the common storage and query of real time performance data“. It is much more likely that VMware would build its own big data datastore with the people and assets it acquired along with the Log Insight technology and team from Pattern Insight. VMware therefore clearly has the technology building blocks and the people to pull this off. You could also argue that it would not have made this acquisition if it did not intend to go at least somewhat in this direction. The key challenge for VMware will then be the multi-vendor part. VMware has no relationship of technical cooperation with any management software company other than Puppet Labs, so this is clearly an area where VMware has a long way to go.
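To make the quoted framework more concrete, here is an entirely hypothetical sketch of what such a vendor plug-in contract might look like. None of these class or method names come from VMware or any shipping product; they simply illustrate the query-and-subscribe split the post describes.

```python
# Hypothetical plug-in contract for the telemetry framework described above.
# Every name here is invented for illustration.
from abc import ABC, abstractmethod

class TelemetryPlugin(ABC):
    """A vendor-specific source of network performance data and events."""

    @abstractmethod
    def query(self, metric, start, end):
        """Return historical samples for a metric over a time range."""

    @abstractmethod
    def subscribe(self, event_type, callback):
        """Invoke callback when the vendor detects an event, e.g. a
        fabric link failure and the hosts it affects."""

class FabricVendorPlugin(TelemetryPlugin):
    """Example: a fabric vendor exposing link utilization and failures."""

    def query(self, metric, start, end):
        # Fetched from the vendor's own controller in a real plug-in.
        return [{"ts": start, "link": "spine-1/leaf-3", "utilization": 0.72}]

    def subscribe(self, event_type, callback):
        pass  # register with the vendor's event bus; stubbed here
```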

New Relic

New Relic is the hands-down market leader in monitoring Java, .NET, Ruby, Python, and PHP applications in the cloud. New Relic offers cloud-hosted APM as a service and in four years has gone from a brand-new company to having more organizations using its product than the rest of the APM industry combined. New Relic recently raised $75M from top-tier investors and is rumored to be positioning itself for an IPO in the 2014-2015 timeframe. New Relic already makes its data available to third parties in its partner program via a REST API. It is not much of a stretch for New Relic to consider becoming the management platform in the cloud, partnering with adjacent vendors, and becoming a vendor of the multi-vendor cloud datastore. Again, all of this is pure speculation at this point.
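For a sense of what making data available via a REST API looks like in practice, here is a sketch against New Relic’s v2 REST endpoints. The API key is a placeholder, and the exact endpoints, metric names, and response shapes may vary by account and API version.

```python
# Sketch: pulling application data out of New Relic's REST API (v2).
# The API key is a placeholder; endpoints and metric names may vary.
import requests

HEADERS = {"X-Api-Key": "<your-api-key>"}

# List the applications New Relic is monitoring for this account.
apps = requests.get("https://api.newrelic.com/v2/applications.json",
                    headers=HEADERS).json()["applications"]

# Fetch a response-time metric for the first application.
app_id = apps[0]["id"]
data = requests.get(
    f"https://api.newrelic.com/v2/applications/{app_id}/metrics/data.json",
    headers=HEADERS,
    params={"names[]": "HttpDispatcher", "values[]": "average_response_time"},
).json()
print(data["metric_data"]["metrics"])
```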

The Pivotal Initiative

The Pivotal Initiative is a new company formed from assets and people from EMC and VMware, led by former VMware CEO Paul Maritz. These assets consist of the application platform PaaS products from VMware (GemFire, Spring, vFabric, and Cloud Foundry) and the big data assets from EMC (Greenplum). The stated ambition is to deliver a way to build and deploy big data applications that is radically better than the incumbent methods, tackling giants like IBM, Microsoft, and Oracle in the process. This means that the focus of both the application development assets and the big data assets is most likely to be on solving business problems for customers, not IT management problems. However, it would not be inconceivable for a third-party company to license these technologies from Pivotal and build an offering targeting the multi-vendor management stack use case.

CloudPhysics

Consider the possibility that this multi-vendor big data datastore is in fact not on-premise, but in the cloud. If you are willing to consider that possibility, then it is not much of a stretch to imagine that CloudPhysics, a vendor of cloud-hosted (delivered-as-a-service) operations management solutions, might step into this fray. One of the key reasons CloudPhysics may be able to provide something of extraordinary value is that the company has a strategy of applying Google-quality analytics to Google-size data sets. The analytics come from a world-class team, some of whom previously worked at Google. The data today is collected by virtual appliances installed at CloudPhysics’ customer sites (in their respective VMware environments). If CloudPhysics is already collecting data across customers and putting it in its cloud, it is not too huge a stretch to consider the possibility that other vendors who also deliver their value as a service could partner with CloudPhysics, combine their respective sets of data, and produce a 1+1=3 scenario for joint customers.
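The collector-appliance pattern itself is easy to sketch. Everything below is invented for illustration: the upload endpoint, the payload shape, and the sampled stats; a real appliance would pull its counters from the vSphere APIs inside the customer’s environment.

```python
# Hypothetical collector-appliance loop: gather local VM stats, batch them,
# and ship them to a vendor's cloud datastore. All names are invented.
import json
import time

import requests

UPLOAD_URL = "https://ingest.example-analytics.com/v1/samples"  # invented

def gather_vm_stats():
    """Stub: a real appliance would pull counters from vCenter here."""
    return [{"vm": "db-01", "cpu_ready_ms": 143, "ts": time.time()}]

batch = []
for _ in range(60):          # collect one minute of 1-second samples
    batch.extend(gather_vm_stats())
    time.sleep(1)

requests.post(UPLOAD_URL, data=json.dumps(batch),
              headers={"Content-Type": "application/json"})
```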

AppNeta

AppNeta is today a market-leading vendor of a cloud-hosted service, PathView Cloud, that measures the performance of the wide area network between an enterprise’s users and branch offices and its back-end data center. The back end is a true big data back end, built around true big data technologies. AppNeta is branching out into APM with its TraceView offering. But network performance data and application performance data are just parts of the complete set of data that will be generated by the SDDC, and about the SDDC, by various management products. AppNeta does not today have a partner program to attract third-party data to its management data cloud, but who knows what the future holds.

Boundary

Boundary is an APM vendor with a cloud-hosted big data back end that today focuses upon collecting statistics from the network layer of the operating systems that support applications running in clouds. If you think of New Relic as the vendor monitoring your application in the cloud, you can think of Boundary as the vendor who should be monitoring how the operating system underlying your application interacts with the cloud. Boundary has no partner program today, and no ability to add third-party vendor data to its cloud datastore, but again, who knows what the future might hold.

Summary

The SDDC and the cloud are going to require a new SDDC management stack, which will need to be based upon a multi-vendor big data datastore. There will likely be both on-premise and cloud-hosted versions of these datastores. Splunk, VMware, New Relic, The Pivotal Initiative, CloudPhysics, AppNeta, and Boundary are all excellent hypothetical suppliers of such a datastore.