We are still coming to grips with the impact of the Xen and Bash shell vulnerabilities that have surfaced lately. These issues are enough to make us realize that there are serious pitfalls to cloud computing, or more to the point, to relying on a single cloud service provider. We talk about using live…
Off the hypervisor and into the cloud: In my last couple of posts I shared my thoughts about the future of cloud computing. In the first post, I described what appears to be a bright outlook for people working in the cloud space, with soaring demand for skilled engineers and not enough quality people to fill those roles. In my second post, I presented a couple of key skill areas that currently seem to be in the highest demand. Now I want to share my concern that this “gap” of skilled engineers is only going to widen unless we can help guide people off of the hypervisor and into the cloud.
The future of OpenStack looks bright, and with all the software-defined data center (SDDC) features in the recent “Grizzly” release, it is now ready to compete toe-to-toe with heavyweights like VMware, Nutanix, Dell, and HP. Whether it can start unseating VMware products in the enterprise remains to be seen, though. Despite OpenStack’s immediate SDDC advantage, companies and technologies like Nicira and Virsto, both acquired by VMware, are not to be ignored.
Moving to the cloud! Let me be a little more precise and say moving to the public cloud. This concept has really been embraced and thrives in the consumer space, but will it take off in the corporate world, and really, should it? One of the main ideas behind virtualization, in the beginning, was consolidating physical systems into a virtual environment: shrinking the overall footprint, making full use of the compute resources available in a physical server, and gaining centralized control of compute, storage, and networking resources.
Cloud Computing ...
Piston Cloud Computing raised a few eyebrows on Tuesday with the announcement that it was extending its Piston Enterprise OS (PentOS) to provide a platform for hosting virtual desktops (VDI) through an exclusive licensing deal with Toronto-based Gridcentric for its innovative Virtual Memory Streaming (VMS) technology.
There is a class of applications that is extremely difficult to virtualize: graphics-intensive applications such as ProEngineer, Photoshop, and pretty much anything that requires a GPU to perform well. These applications have usually been too demanding or expensive to virtualize, the last mile, so to speak. That is no longer the case. With NVIDIA’s announcement of the NVIDIA VGX Cloud Platform, this and other classes of applications can now be virtualized.
Should software licensing be based entirely on the hardware MAC address of the NIC and/or the UUID of the motherboard? This approach worked very well before the introduction of virtualization, but now that virtualization has become prevalent in most environments, I think software vendors really need to reconsider how they license their software. It seems some companies have not bought into the idea of virtualization and would prefer to keep tying their product to a specific hardware platform that the vendor put together and shipped out. Can software vendors hope to survive and stay current without embracing virtualization? In the long run, I think the answer is going to be no.
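To see why MAC-based licensing falls apart under virtualization, here is a minimal sketch (my own illustration, not any vendor’s actual scheme) of the classic fingerprint: derive a license key from the machine’s MAC address. On a VM, the “hardware” this binds to is just a config setting, since hypervisors let you assign an arbitrary MAC to each virtual NIC.

```python
import uuid

def hardware_fingerprint():
    # uuid.getnode() returns a MAC address of this host as a 48-bit
    # integer (it may fall back to a random value if none is found).
    mac = uuid.getnode()
    # Render it as the familiar 12-hex-digit form a license server
    # might store, e.g. "001a2b3c4d5e".
    return format(mac, "012x")

# A license tied to this value breaks, or is trivially cloned, the
# moment the virtual NIC's MAC is edited in the VM's configuration.
print(hardware_fingerprint())
```

The same objection applies to the motherboard UUID: on a virtual machine that identifier is supplied by the hypervisor, not burned into silicon, so it moves and changes with the VM rather than pinning the software to one physical box.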