We are still coming to grips with the impact of the Xen and Bash shell issues that have sprung up lately. These issues are enough to make us realize that there are serious pitfalls to cloud computing, or more to the point, pitfalls to relying on a single cloud service provider. We talk about using live migration and other tools to alleviate downtime, but have we really thought through the use of these tools at cloud scale? What was the impact on your environment, and how have you decided to mitigate that impact? Those are the questions that come out of the latest set of issues with cloud computing.
Articles Tagged with Xen
In my last couple of posts, I expressed my thoughts on the future of cloud computing. In the first post, I shared what appears to be a bright outlook for people working in the cloud space, given the soaring demand for skilled engineers and the shortage of quality people to fill those roles. In my second post, I presented a couple of key skill areas that currently seem to be in the highest demand. Now I want to share my thoughts, or more to the point, my concern, that this gap in skilled engineers is only going to widen unless we can help guide people off the hypervisor and into the cloud.
The OpenStack Summit this week continued to fan the flames of the software-defined data center. The software-defined data center is simply a term for replacing traditional data center hardware functionality with the same features implemented in software, running on commodity x86 servers. While software-defined approaches to data center features are at least nominally less expensive than their hardware counterparts, the real promise of the approach is flexibility and ease of management with high levels of integration. Reconfiguring a network to support the security requirements of a new application becomes just a function of software and APIs. Expanding storage is simply adding another node with more storage attached, and the cluster compensates automatically. Even things like firewall rules and load balancer configurations can now be stored as templates along with the applications, to be provisioned in minutes.
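To make the template idea concrete, here is a minimal sketch of what storing firewall rules as a template alongside an application might look like. The template format, names, and addresses below are entirely hypothetical, invented for illustration; real systems such as OpenStack Heat or AWS CloudFormation define their own schemas and provisioning APIs.

```python
# Illustrative sketch: firewall rules stored as a reusable template
# alongside an application, expanded in software at provision time.
# The schema and names here are hypothetical, not any real product's API.

WEB_APP_TEMPLATE = {
    "name": "three-tier-web",
    "firewall_rules": [
        {"port": 443, "protocol": "tcp", "source": "0.0.0.0/0"},    # public HTTPS
        {"port": 8080, "protocol": "tcp", "source": "10.0.1.0/24"},  # app tier only
        {"port": 5432, "protocol": "tcp", "source": "10.0.2.0/24"},  # db tier only
    ],
}

def render_rules(template, env):
    """Expand a template's firewall rules into concrete rule strings
    for a given environment, the way a software-defined network layer
    might before pushing them down to virtual switches."""
    prefix = f"{template['name']}-{env}"
    return [
        f"{prefix}: allow {r['protocol']}/{r['port']} from {r['source']}"
        for r in template["firewall_rules"]
    ]

if __name__ == "__main__":
    for rule in render_rules(WEB_APP_TEMPLATE, "prod"):
        print(rule)
```

The point is that the same template can be rendered for dev, test, and prod in seconds, which is exactly the flexibility that hardware-defined firewalls make slow and error-prone.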
The Virtualization Practice recently moved its systems to the cloud. Being cost conscious, we chose one of the public clouds. The reality of such a move is much different from the hype. We expected stellar support, better performance, improved security, improved DR, and five-nines uptime, with the hypervisor treated as a commodity. In essence, it should be better than we could do ourselves. That is the promise of the cloud; the hype of the cloud. What we have seen is something far different.
Moving to the cloud! Let me be a little more precise and say moving to the public cloud. This concept has been embraced and thrives in the consumer market, but will it really take off in the corporate world, and should it? One of the main ideas of virtualization, in the beginning, was the ability to consolidate physical systems into a virtual environment to shrink the overall footprint, to take full advantage of all the available resources in a physical server, and to gain centralized control of compute, storage, and networking resources.
While looking around the web for anything new in virtualization, I kept seeing more and more posts and articles about a new type of hypervisor: Type 0. Now, this sounds interesting, and I found these definitions for each type of hypervisor.