At this year’s OpenStack Summit in Austin, Texas, the message was clear: OpenStack needs to pivot from a science experiment to a production system. That pivot is happening, but slowly. Some would argue it has already been achieved at extremely large institutions such as PayPal and AT&T. However, installing, configuring, and running OpenStack still takes more knowledge than the average enterprise system administrator has. The new Certified OpenStack Administrator certification is a way to demonstrate a level of competence for the age of the new OpenStack: the production-ready OpenStack.
SDDC & Hybrid Cloud
Cloud computing has evolved beyond focusing only on how to construct, secure, manage, monitor, and utilize IaaS, PaaS, and SaaS clouds. As the paradigm matures, it is moving from pure resource management to combined data and resource management.
SDDC is the next evolution in on-site data center technology. It takes the knowledge gained from the server virtualization revolution and blends it with software-defined storage and networking to create a data center defined and managed by software, with the underlying hardware abstracted away.
Hybrid Cloud covers the technologies and operational processes, both technical and business, for deploying, consuming, and utilizing this paradigm.
Major areas of focus include barriers to adoption; progress on the part of vendors in removing those barriers; where the lines of responsibility are drawn between the cloud vendor and the customer for IaaS, PaaS, SaaS, and hybrid clouds; and management tools that are essential to deploying and managing the cloud, ensuring its security and the performance of applications.
In my last article, I spent a little time talking about the difference between automation, which is an automated task or scripted solution to perform a task, and orchestration, which is the complete process. I topped it all off with a discussion about how DevOps is a philosophy driving orchestration. For this article, I want to focus on some of the most common tools of the trade behind automation and orchestration for different types of environments.
In the industry, OpenStack is seen as very hard to implement. Considering this, I began to think that most people who deploy OpenStack try to bite off too large a chunk of it in one go, implementing all of it instead of just what they need. OpenStack is a cloud management platform, not the hypervisor, so perhaps we can take some lessons from how we installed VMware products when we were just starting out. We still implement things using the same patterns for vSphere. We should revisit OpenStack with this history in mind.
One of the things we associate with existing IT infrastructure vendors is their determination to go it alone for a major portion of their businesses. Each vendor believes that its solution is the best and that integrating with competing solutions is unnecessary. Oracle and Microsoft were the best-known examples, happily attracting users with a locked-in architecture and using that dominance to stifle competition. VMware has also exhibited this trait: you may layer additional technologies on top of vSphere, but you cannot put another hypervisor under a VMware product. What we see in open source is a willingness to integrate with other solutions, even competing projects. We are seeing some signs of a change in VMware, but not the dramatic shift that Microsoft has made.
In part one of Cost to Build a New Virtualized Data Center, we discussed the basic software costs for a virtualized data center based on VMware vSphere 6.0, Citrix XenServer 6.5, Microsoft Hyper-V 2012 R2 and 2016, and Red Hat. If you missed that, please click here to review before continuing.