Articles Tagged with Private Cloud

There are many different public cloud use cases. Here at The Virtualization Practice, we moved our datacenter from the north to the south of the country and used the public cloud to host our workloads during the transition. Yesterday, Edward Haletky posted Evaluating the Cloud: Keeping Your Cloud Presence, in which he asked whether it is worth staying in the cloud or bringing the data back home, and shared his thoughts on the question.
Soon the backup power will be available for our new datacenter, and the redesign to make use of the VMware vCloud Suite is nearing completion. Our full private cloud will then be ready for our existing workloads. These workloads, however, currently run in a XenServer-based public cloud. So the question is: do we stay in a poorly performing public cloud (described in our Public Cloud Reality series) or move back to our own private cloud? As the Clash put it, “Should I Stay or Should I Go Now?”
It is time to expand the virtual playing field. Since the release of Hyper-V 2012 and vSphere 5.1, there has been an abundance of posts comparing the two hypervisors head to head. All the charts, graphs, and tables point to the same conclusion: when comparing maximum values head to head, Microsoft and VMware are now pretty much even across the board. Comparing maximums has been the way the two hypervisors have been measured against each other all along, and it was just a matter of time until Hyper-V caught up with vSphere. Now that it has, I believe we need to change the scope of the comparisons to go beyond the maximum values. After all, how many people actually get anywhere close to those maximums in their production environments? Just because you can do something does not mean you should.
Moving to the cloud! Let me be a little more precise and say moving to the public cloud. This concept has been thoroughly embraced and thrives in the consumer market, but will it really take off in the corporate world, and should it? One of the main drivers of virtualization, in the beginning, was the ability to consolidate physical systems into a virtual environment: shrinking the overall footprint, taking full advantage of all available compute resources in a physical server, and centralizing control of compute, storage, and networking resources.
There seems to be a myriad of definitions of who the tenant is when it comes to secure multi-tenancy. This debate has occurred not only within The Virtualization Practice but also at the recent Interop and Symantec Vision conferences I attended. So who really is the tenant within a multi-tenant environment? Multiple definitions exist, and if we cannot define “tenant,” how can we build secure applications that claim to be multi-tenant?