There has been a great deal of passionate debate over the last few months within the OpenStack community. One camp advocates building APIs that are compatible with the Amazon Web Services (AWS) APIs, while the other argues for augmenting the existing OpenStack APIs. Those in favor of AWS compatibility are focused on standardization and interoperability between OpenStack and AWS. Standardizing on the AWS APIs makes moving workloads between OpenStack and AWS clouds easier, giving OpenStack a competitive advantage over other private cloud stacks. It also simplifies moving workloads off of AWS (public cloud) to OpenStack (private cloud) for customers who want to deploy on bare metal machines, keep critical data out of the public cloud, or retain the flexibility to target a cloud endpoint based on their customers’ desires (for those delivering solutions to customers outside of their enterprise).
The Software Defined Data Center: that was pretty much the biggest takeaway from this year’s VMworld in San Francisco. VMware made announcements about the new vSAN, coming out soon to enhance software defined storage, and about the NSX platform, which addresses one of the final hurdles on the path to a completely software defined data center: network virtualization. There have been plenty of write-ups on these topics, including one very good post from one of my colleagues, Bernd Harzog. I am not going to go into any details on those announcements except to say that VMware is expanding and putting itself in a good position to be the center of the virtual universe. I believe it will take some time for software defined networking to really take off. My gut tells me that adoption will be slow at first, just as it was for server virtualization, but when it does take off, I believe the end result has the potential to leave a legacy as great as, or even greater than, that of server virtualization.
Nutanix, one of the fastest-growing IT infrastructure startups around, shows no signs of slowing down with the release of Nutanix OS 3.5. For those not familiar with Nutanix, it offers a truly converged virtualized infrastructure. This generally consists of four nodes in two rack units of space, where each node has CPU, RAM, traditional fixed disk, SSD, and Fusion-io flash built in. Its secret sauce is really NDFS, the Nutanix Distributed File System, built by the same folks who created the Google File System, along with a unified, hypervisor-agnostic management interface.
CERN goes hybrid: have you heard the news that CERN is going to the cloud? CERN is the European laboratory located in the northwest suburbs of Geneva, straddling the Franco-Swiss border. Its main function is to provide the particle accelerators and other laboratory infrastructure needed for high-energy physics research. CERN was originally established in 1954 as the European Organization for Nuclear Research. Research at the facility has since moved beyond nuclear research, and it has expanded into one of the largest laboratories for particle physics research, home to the Large Hadron Collider. On an interesting side note, the main site at CERN is also the birthplace of the World Wide Web; before that, the facility was a major wide-area networking hub for sharing research with scientists located elsewhere.
In the world of DevOps, the names Chef and Puppet have been synonymous with systems and configuration management in the cloud. But now it is time to make room for a third synonym: Salt. Salt, the open-source project behind SaltStack, was started in the basement of founder and CTO Tom Hatch. Tom had been building and administering clouds for a while and was frustrated with some of the complexities and performance issues of the existing tools he was working with. He set out to create a framework that could gather real-time information about infrastructure and communicate with servers faster than anything out there. Writing in Python, Tom built a framework that executes many times faster than the competition. In fact, one client reported that moving to Salt cut its previous deployment process across 18,000 nodes from 15 minutes down to 5 seconds.
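To give a feel for Salt's declarative configuration style, here is a minimal Salt state file. This is a sketch, not from the article: the file path and the choice of nginx as the managed package are illustrative assumptions.

```yaml
# /srv/salt/nginx.sls -- minimal Salt state (hypothetical example)
# Declares that the nginx package must be installed and its
# service kept running; the service depends on the package.
nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx
```

From the Salt master, a state like this can be pushed to every connected minion with a single command such as `salt '*' state.sls nginx`; Salt's message-bus transport is what lets that one command fan out to thousands of nodes in parallel, which is the basis of the speed claims above.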
At EMC World 2013, EMC announced ViPR as its answer to storage within the software defined data center. ViPR presents multiple types of storage while separating the control plane from the data plane. In addition, ViPR is a head end that fronts traditional storage arrays as an automation and control point; it does not replace any array but may make those arrays easier to use as we move to the software defined data center. Yet ViPR also raises several questions about how storage will be accessed by the software defined data center: is ViPR the future, or is there more to come?