Intel has announced that it will start shipping chips based on 3D transistors in 2012, and that it will use this design across its product line. This extends the benefits of the continuation of Moore's law across computing platforms ranging from servers to smartphones.
Over the last few weeks, VMware (as we indicated in an earlier post) and Red Hat have launched two very similar initiatives, known respectively as CloudFoundry and OpenShift. These are Platform as a Service (PaaS) plays, being developed for the longer term, primarily looking to encourage the development of (and thereafter to provide infrastructure for) applications specifically suited to the cloud. In this article we compare and contrast the two offerings and discuss their significance for the PaaS market as a whole.
VMTurbo has delivered a new free vSphere performance and capacity management solution that is limited neither by time nor by the size of the environment, and that breaks new ground in terms of the capacity management functionality delivered in a free solution. The automatically generated VM Rightsizing Recommendations should prove to be of particular value to vSphere administrators.
Public Cloud SLAs are worthless. They need to be replaced by metrics that measure the responsiveness of what the cloud provider owns to the customer's layer of software running in the cloud. Developing these metrics will require significant changes to existing APM approaches, in order to separate time spent in the application from time spent in the application framework or OS.
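The kind of time separation described above can be sketched in a few lines. This is a hypothetical illustration, not any real APM product's API: the bucket names and the sleeps standing in for provider-owned framework work versus customer code are all assumptions.

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Accumulated wall-clock time per layer, keyed by an illustrative bucket name.
buckets = defaultdict(float)

@contextmanager
def timed(bucket):
    # Attribute the elapsed time of the enclosed block to one bucket.
    start = time.perf_counter()
    try:
        yield
    finally:
        buckets[bucket] += time.perf_counter() - start

def handle_request():
    with timed("framework"):    # provider-owned layer (routing, marshalling)
        time.sleep(0.01)        # stand-in for framework/OS work
    with timed("application"):  # the customer's own code
        time.sleep(0.02)        # stand-in for business logic

handle_request()
# Share of request latency attributable to what the provider owns --
# the kind of number an SLA metric would need to be built on.
provider_share = buckets["framework"] / sum(buckets.values())
```

In a real system the instrumentation points would sit at the framework's entry and exit boundaries rather than around sleeps, but the decomposition, total time minus provider-owned time equals application time, is the same.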
Mike DiPetrillo’s post entitled VMware is Building Clouds sparked some interesting thoughts and discussion about what it means to have federated clouds and how such federation should be defined. Is federation required to make ‘cloud’ ubiquitous, or are we already there? And is the discussion really about federated clouds, about simple movement of data objects between VMs, or about cloud management?
A Service Level Agreement (SLA) is an excellent expectations-managing mechanism, but it’s important to manage your own expectations of what an SLA can realistically accomplish. Just those three words, “Service”, “Level”, and “Agreement”, are often an attention turn-off, I know: SLAs are to infrastructure bods what documentation is to developers. Yet when considering taking up cloud and utility services, many feel that the SLAs offered aren’t reliable, if they exist at all. So the SLA becomes the blocker: ‘If I move services out of my data centre, how will I guarantee availability and performance?’
Amazon’s Service Level Agreement (SLA) is so narrowly drawn that it could easily be argued that the recent Elastic Block Store (EBS) outage wasn’t a failure of Amazon Web Services at all. Anyone using EBS in a production environment was, arguably, reaping the fruits of their own folly. Of course, the hype doesn’t tell you that architecting for resilience in the Cloud is actually very complicated, particularly if you want to take the sensible step of not relying on a single provider like Amazon, no matter how dominant their hype may be.
Running VMware on legacy infrastructure is like driving a Ferrari on a gravel road. If you look at what is run in most production VMware environments today, the only really new things in the environment are VMware vSphere, and possibly some new monitoring, security, and backup tools. We have barely started to reinvent everything that needs to be reinvented in order to properly take virtualization, IT as a Service, and public clouds to their logical and most beneficial conclusions.
At the InfoSec World 2011 conference, the sessions I attended featured quite a bit of discussion about moving to the cloud, as well as about cloud outages.