It has been around a decade since Dell and Red Hat first collaborated, helping launch Red Hat Linux into the mainstream. Now they have gotten back together to collaborate on an enterprise-grade version of OpenStack, based on the Havana release. This announcement followed closely on another from Red Hat: that it would bundle OpenStack with Red Hat Enterprise Linux 6.5. Continue reading Dell and Red Hat Collaboration, Part 2
In my last couple of posts, I shared my thoughts about the future of cloud computing. In the first post, I described what appears to be a bright outlook for people working in the cloud space, given the soaring demand for skilled engineers and the shortage of qualified people to fill those roles. In the second post, I presented a couple of key skill areas that currently seem to be in the greatest demand. Now I want to share my thoughts, or more to the point, my concern, that this gap in skilled engineers is only going to widen unless we can help guide people off the hypervisor and into the cloud.
There has been a great deal of passionate debate within the OpenStack community over the last few months. One camp advocates building APIs that are compatible with the Amazon Web Services (AWS) APIs, while the other argues for augmenting the existing OpenStack APIs. Those in favor of AWS compatibility are focused on standardization and interoperability between OpenStack and AWS. Standardizing on the AWS APIs makes moving workloads between OpenStack and AWS clouds easier, giving OpenStack a competitive advantage over other private cloud stacks. It also makes it easier to move workloads off of AWS (public cloud) to OpenStack (private cloud) for customers who want to deploy on bare metal, keep critical data out of the public cloud, or retain the flexibility to target a cloud endpoint based on their own customers' needs (for those delivering solutions outside of their enterprise). Continue reading Moving Past the OpenStack API Debate
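To make the portability argument concrete, here is a minimal sketch of what EC2-API compatibility buys you in practice: the same boto client code can point at either AWS or an OpenStack cloud that exposes the EC2-compatible API. The endpoint, port, path, and credentials below are placeholders I've assumed for illustration, not a real deployment.

```python
# A minimal sketch, assuming boto 2.x and an OpenStack cloud that exposes
# Nova's EC2-compatible API. Host, credentials, port, and path are
# placeholders, not a real deployment.
import boto.ec2
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

# Native AWS endpoint.
aws = boto.ec2.connect_to_region(
    "us-east-1",
    aws_access_key_id="AWS_ACCESS_KEY",
    aws_secret_access_key="AWS_SECRET_KEY",
)

# Hypothetical OpenStack cloud exposing the EC2-compatible API.
openstack = EC2Connection(
    aws_access_key_id="EC2_ACCESS_KEY",        # EC2-style credentials issued by the OpenStack cloud
    aws_secret_access_key="EC2_SECRET_KEY",
    region=RegionInfo(name="openstack", endpoint="openstack.example.com"),
    port=8773,
    path="/services/Cloud",
    is_secure=False,
)

# The same calls work against either cloud, which is what makes moving
# workloads back and forth comparatively painless.
for conn in (aws, openstack):
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            print("%s %s" % (instance.id, instance.state))
```

Whether OpenStack should chase full fidelity with the EC2 APIs or invest in its native APIs is exactly the question the post wrestles with; the sketch only illustrates why compatibility is attractive to the first camp.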
The Software Defined Data Center: that was pretty much the biggest takeaway from this year's VMworld in San Francisco. VMware made announcements about the new vSAN, coming out soon to enhance software defined storage, and about the NSX platform, which addresses one of the final hurdles on the path to a completely software defined data center: network virtualization. There have been plenty of write-ups on these topics, including a very good post from one of my colleagues, Bernd Harzog. I am not going to go into the details of those announcements, except to say that VMware is expanding and putting itself in a good position to be the center of the virtual universe. I believe it will take some time for software defined networking to really take off. My gut tells me adoption will be slower at first, just like the adoption of server virtualization, but when it does take off, I believe the end result has the potential to leave as great a legacy as server virtualization, or an even greater one. Continue reading The Software Defined Data Center
Nutanix, one of the fastest-growing IT infrastructure startups around, shows no signs of slowing down with the release of Nutanix OS 3.5. For those not familiar with Nutanix, the company offers a truly converged virtualized infrastructure. This generally consists of four nodes in two rack units of space, where each node has CPU, RAM, traditional fixed disk, SSD, and Fusion-io flash built in. The secret sauce is really NDFS, the Nutanix Distributed File System, built by the same folks who created the Google File System, along with a unified, hypervisor-agnostic management interface. Continue reading Nutanix OS 3.5: Deduplication, New GUI, SRM, Hyper-V Support
CERN goes hybrid: have you heard the news that CERN is going to the cloud? The name CERN refers to the European laboratory located in the northwest suburbs of Geneva, on the Franco-Swiss border. Its main function is to provide the particle accelerators and other laboratory infrastructure needed for high-energy physics research. CERN was originally established in 1954 as the European Organization for Nuclear Research. Work at the facility has since moved beyond nuclear research, and it has grown into one of the largest laboratories for particle physics, home to the Large Hadron Collider. As an interesting side note, the main site at CERN is also the birthplace of the World Wide Web; before that, these facilities were a major wide-area networking hub for sharing research with scientists located elsewhere. Continue reading CERN Goes Hybrid