The Virtualization Practice

Cloud Computing

Cloud Computing focuses on how to construct, secure, manage, monitor and use public IaaS, PaaS, and SaaS clouds. Major areas of focus include barriers to cloud adoption, progress on the part of cloud vendors in removing those barriers, where the line of responsibility is drawn between the cloud vendor and the customer for each of IaaS, PaaS and SaaS clouds, as well as the management tools that are essential to deploy in the cloud, ensure security in the cloud and ensure the performance of applications running in the cloud. Covered vendors include Amazon, VMware, AFORE, CloudSidekick, CloudPhysics, ElasticBox, Hotlink, New Relic, Prelert, Puppet Labs and Virtustream.

VMworld Pilgrimage Part 2

In my Preparing for the VMworld Pilgrimage post last week, I went over some things, namely hotel and airfare, that you should have confirmed by now if you are planning on attending VMworld 2010 in San Francisco. I have heard through the grapevine that there are going to be around 15,000 people in attendance this year, so it is shaping up to be another great event. This post assumes that your travel, lodging, sessions and labs have been booked and taken care of. With that said, what is the best way to stay current and get the most out of the week? I would suggest that the VMTN Community Lounge / Blogger Area is a good place to start. If you are looking to meet some of the most active individuals in virtualization, this is a place you should consider checking in on periodically throughout the week.

There are some applications that are “never” going to go into a public cloud, and the monitoring of those applications is not going to be done on a MaaS basis either. However, these solutions are easy to purchase, initially deploy, and manage on an ongoing basis. For applications that do fit a public cloud deployment scenario (that is, you can live with the security and performance issues of the public cloud), MaaS is therefore a very viable monitoring option, and it may represent the future of monitoring just as Cloud Computing may represent the future of computing.

VMworld is clearly the largest dedicated virtualization conference, and yet from an Open Source perspective it is slightly disappointing, because the VMware ecosystem naturally attracts proprietary software vendors, and because some of the more interesting Open Source activity happens through multi-vendor foundations, which do not have the marketing budgets that vendors themselves do.

Nevertheless, there are a number of key Open Source players, and some interesting smaller players, represented at VMworld.

The countdown is on for one of the biggest virtualization conferences of the year, VMworld 2010 in San Francisco. I have been lucky enough to attend all the VMworld conferences from 2005 on, as well as the 2009 VMworld Europe in Cannes, France. These shows are pretty big and jam-packed with people, exhibits and sessions. Good old fun for the entire family! Well, not necessarily fun for the entire family, but if you have a passion for virtualization, then VMworld 2010 in San Francisco is the place you should be. Since it is in San Francisco, you can even bring your entire family along; they can enjoy the Spouse Activities while you enjoy the talk about virtualization.

The combination of Quest, Vizioncore and Surgient creates a company that for the first time has all of the management pieces required for an enterprise to be able to virtualize tier one applications and to automate the process of assuring service levels for these applications. This puts Quest in position to be a clear leader in the virtualization management market.

Cloud.com had lined itself up with Citrix by using only XenServer in the commercially-licensed version of its IaaS product, and is now being used by Citrix to ensure OpenStack supports XenServer (which it doesn’t at the moment), presumably to keep Red Hat’s KVM under control and VMware out. We’ve also been trawling through the available OpenStack documentation to understand why NASA thinks its cloud is more scalable than Eucalyptus. It seems to come down to how the state information is passed amongst the various servers that make up the system. GPL-based Open Core models break down when you move to multi-vendor foundations, because the cross-licensing of IPR under GPL immediately infects the recipient codebase and precludes commercial licensing of the resulting combined work. The result is that the GPL Open Core business model doesn’t work in the same way, and neither Eucalyptus nor Cloud.com can apply their current business model in these multi-vendor foundations. It is a big blow for Eucalyptus: it has turned its biggest potential customer into a massive and credible competitor, built in its own image (only – at least from a PR perspective – much more scalable).

In OpenStack the API is implemented in a separate service which translates external HTTP requests into commands across the internal message bus, and so it looks (on the face of it) possible for someone (preferably Oracle) to implement the Oracle DMTF submission as a separable new API server module without disrupting the OpenStack architecture.
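To make the architectural point concrete, here is a minimal sketch of that pattern: an API front end is the only component that understands the external request format, and it translates each request into a command message on an internal bus that worker services consume. All names here (CommandBus, ApiServer, ComputeWorker) are illustrative, not actual OpenStack identifiers, and an in-process queue stands in for a real message broker.

```python
import json
import queue


class CommandBus:
    """Stand-in for the internal message bus (in reality, an AMQP broker)."""

    def __init__(self):
        self._queue = queue.Queue()

    def publish(self, message: dict) -> None:
        # Messages cross the bus as serialized payloads, not native objects.
        self._queue.put(json.dumps(message))

    def consume(self) -> dict:
        return json.loads(self._queue.get())


class ApiServer:
    """Translates external requests into internal bus commands.

    A second API dialect (say, a DMTF-style API) would simply be another
    class like this one, publishing the same command messages; the workers
    never see the external protocol.
    """

    def __init__(self, bus: CommandBus):
        self.bus = bus

    def handle_request(self, method: str, path: str, body: dict) -> None:
        if method == "POST" and path == "/servers":
            self.bus.publish({"command": "run_instance", "args": body})
        else:
            raise ValueError(f"unsupported request: {method} {path}")


class ComputeWorker:
    """Consumes commands from the bus, independent of any external API."""

    def __init__(self, bus: CommandBus):
        self.bus = bus

    def process_one(self) -> str:
        msg = self.bus.consume()
        return f"executing {msg['command']} with {msg['args']}"


bus = CommandBus()
api = ApiServer(bus)
worker = ComputeWorker(bus)

api.handle_request("POST", "/servers", {"image": "ubuntu", "flavor": "small"})
print(worker.process_one())
```

Because the workers depend only on the bus message format, a new API server module can be dropped in alongside the existing one without disturbing the rest of the system, which is exactly why the Oracle DMTF submission looks (on the face of it) implementable as a separable module.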

The Wall Street Journal had an interesting article reporting that the United States General Services Administration has approved the acquisition of some cloud services for use by the Federal Government, including many of the Google Apps such as Gmail and Google Docs. Since these services are for sale as well as freely available, this sounds more like an admission that they can be used. Will other governments follow suit? But should these services be used? That is really the question.

There are two sides to any government, the classified and the unclassified. These are general terms that describe how the government can use services. While all services require quite a bit of security, classified use requires even more: in many cases what most would consider to be “uber-security” requirements, the kinds of requirements that impact usability in some way. Can these tools provide adequate security?

Whilst I have been away on vacation, something fairly interesting has happened in the area of Open Source initiatives for Infrastructure as a Service, in the form of a new initiative from NASA and Rackspace called OpenStack. You may remember that in our last post in this area, we noted that there was a proliferation of offerings in the IaaS space, and that it was in the customer’s best interest for there to be effective portability (or even mix and match) amongst public and/or private clouds. However, the API standards to support interoperability are proving elusive.