The Virtualization Practice

Tag Archive for Cloud

Amazon failed because of the simultaneous failure of its Elastic Block Store (EBS) in two Availability Zones. If you were dependent on one of these zones (or mirrored across the two), you lost access to the filesystem from your instances. It may be sensible to move to the S3 mechanism (or some portable abstraction over it) for new applications, but if you have an existing application that expects to see a filesystem in the traditional way, Gluster can provide a distributed, cloud-agnostic shared filesystem with multi-way replication (including asynchronous replication).
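
As a rough illustration of what such a portable abstraction might look like, here is a minimal Python sketch. The bucket name, keys, and use of the boto3 client are assumptions for illustration only; the point is that a Gluster-mounted filesystem backend could implement the same two methods, so application code never needs to know which store it is talking to.

    # Minimal sketch of a portable object-store abstraction over S3.
    # Bucket and key names are hypothetical.
    import boto3

    class ObjectStore:
        """Thin wrapper so application code never talks to S3 directly."""
        def __init__(self, bucket):
            self.bucket = bucket
            self.s3 = boto3.client("s3")

        def put(self, key, data):
            self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)

        def get(self, key):
            return self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()

    store = ObjectStore("example-app-data")  # hypothetical bucket name
    store.put("reports/2011-04.csv", b"date,amount\n2011-04-21,100\n")
    print(store.get("reports/2011-04.csv"))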

A Service Level Agreement (SLA) is an excellent expectations-managing mechanism, but it’s important to manage your own expectations of what an SLA can realistically accomplish. I know that just those three words, “Service”, “Level”, and “Agreement”, are often an attention turn-off: SLAs are to infrastructure bods what documentation is to developers. Yet when considering taking up cloud and utility services, many feel that the SLAs on offer aren’t reliable, if they exist at all. So the SLA becomes the blocker: ‘If I move services out of my data centre, how will I guarantee availability and performance?’
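
To put some numbers behind “what an SLA can realistically accomplish”, the small Python calculation below (a hypothetical illustration, not tied to any particular provider’s terms) shows how much downtime a given availability percentage still permits.

    # Hypothetical illustration: downtime still allowed by an availability figure.
    def allowed_downtime_minutes(availability_pct, period_hours):
        return period_hours * 60 * (1 - availability_pct / 100.0)

    for pct in (99.0, 99.9, 99.95):
        monthly = allowed_downtime_minutes(pct, 30 * 24)
        yearly = allowed_downtime_minutes(pct, 365 * 24)
        print(f"{pct}%: {monthly:.0f} min/month, {yearly:.0f} min/year")

Even a 99.95 per cent commitment still permits roughly twenty minutes of downtime a month, which is why the agreement alone cannot stand in for an availability design.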

Amazon’s Service Level Agreement (SLA) is so narrowly drawn that it could easily be argued that the recent Elastic Block Store (EBS) outage wasn’t a failure of Amazon Web Services at all. Anyone using EBS in a production environment was, arguably, reaping the fruits of their own folly. Of course, they don’t tell you when you read the hype that architecting for resilience in the Cloud is actually very complicated, particularly if you want to take the sensible step of not relying on a single provider like Amazon, no matter how dominant their hype may be.
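
As a hint of what “architecting for resilience” involves at its very simplest, the Python sketch below tries a primary endpoint and falls back to a replica hosted elsewhere. The URLs are hypothetical, and a real multi-provider design also has to handle data replication, DNS failover, and consistency, which is where the real complexity lives.

    # Minimal sketch of a failover read across two hypothetical endpoints.
    import urllib.request

    ENDPOINTS = [
        "https://primary.example.com/health",    # e.g. first provider / zone
        "https://secondary.example.net/health",  # e.g. second provider / zone
    ]

    def fetch_with_failover(urls, timeout=5):
        last_error = None
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except OSError as err:  # URLError and timeouts are subclasses
                last_error = err
        raise RuntimeError(f"all endpoints failed: {last_error}")

    print(fetch_with_failover(ENDPOINTS))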

I was reading the post Small Business Virtualization, and it got me thinking about Small to Medium Businesses and what part Cloud Computing will play in that market. There are plenty of small businesses in and around my area, and a couple of my friends own some of them. The majority of these businesses have one or a few point-of-sale machines that feed into an accounting program. These are the businesses I think of when I think of what a small business is. Would virtualization help these companies? Sure, I think so, but would it really be worth the cost to set up and maintain?

Facebook (which had previously bought commodity servers and rented data center space) has opened up a whole new area of Open Source technology by publishing the full specifications of both its new custom server and its new data center as “Open Source” at OpenCompute.org. Overall, Facebook claims that its new data centers are 38 per cent more efficient than its existing leased data centers, while costing about 20 per cent less. Published data (such as it exists) indicates that Facebook is at or ahead of rivals and peers such as Microsoft and Google. The OpenCompute designs are released under a new set of Open Source agreements. The intent seems to be to allow innovation within the published specification, but to ensure multiple providers of the technology. Facebook is clearly seeking multiple tier-1 third-party providers for both servers and data centers built to these designs, turning these Open Source specifications into a form of de facto standard. That could have a broad impact by driving the marketplace away from shared-storage models (such as Red Hat’s IaaS reference architecture) towards local-storage-friendly IaaS architectures such as OpenStack or Eucalyptus.

In July 2009 I wrote an article entitled Cloud Computing Providers — are they content providers or carriers?, and in January 2011 Chuck Hollis wrote an article, Verizon To Acquire Terremark — You Shouldn’t Be Surprised. Now, with the Terremark acquisition almost complete and RSA Conference 2011 over (where I talked to Terremark about the benefits of belonging to Verizon), a picture is starting to emerge. Yes, my 2009 predictions make sense and still hold true today, but is there more of an impact than we realize?

MokaFive Suite is an enterprise desktop management platform used to create and administer layered virtual desktop images called ‘LivePCs’, which execute as guests on a Type II hypervisor. LivePC images are authored using MokaFive Creator, which also serves as a test platform to simulate an end-user’s experience. LivePC images can be stored on centralized or distributed file stores, or run directly off USB flash drives; MokaFive also supports Amazon S3 storage, which can be of significant value in managing highly distributed environments. MokaFive LivePCs are effectively hypervisor-agnostic; support is currently available for VMware’s free Player and the open source VirtualBox. Beta support for Parallels Workstation is new in MokaFive Suite 3.0, and MokaFive’s own bare-metal platform will be shipping in Q1 2011.