The Emergence of Hyperconvergence

When it comes right down to it, most converged infrastructures we’ve seen so far can be summarized with three words: gilding the jalopy.

“That’s a bold statement,” you might be saying. And you’d be right. But I’ve spent years looking at what’s being shipped by VCE, Cisco, Dell, NetApp, and others, and it’s all just the same gear an enterprise IT shop would have purchased itself. The only convergence is the gilding on the front end, in the form of a glitzy (or not) self-service portal that ultimately ends up limiting your flexibility and options.

On top of that, these “converged” infrastructures usually converge absolutely nothing. On the back end, you still find separate Fibre Channel and IP fabrics. Or maybe they’re “converged” using complicated and ridiculous kludges like data center bridging (DCB) and Fibre Channel over Ethernet (FCoE). Regardless, you still have discrete compute and discrete storage, and all the same basic hardware and software components you would have acquired and installed yourself. The difference is that it was one SKU to order, and someone else put it together for you. Which is actually unfortunate, because as the front-line support for the unit, you have a serious knowledge deficit if you didn’t set it up yourself. Vendor training is one thing, but actual experience is what you need.

If you had done it yourself, you might have ditched Fibre Channel and actually converged on an IP-only solution, perhaps choosing one of the best-of-breed modern storage arrays, like those from Tintri, Tegile, Coho Data, Nimble Storage, or SolidFire (in no particular order). You might have chosen a blade solution for compute like the Dell M1000e, which has a lot of flexibility and high-end options when it comes to internal networking. You also might have chosen your own private cloud–like front end, something like Embotics vCommander, which strives to make IT vastly simpler for administrators and users, far beyond anything that has ever been delivered by VMware or Microsoft or the OpenStack community.

If you’d done it yourself, you would have had a less complicated, more flexible, and cheaper solution that was actually more “converged.” But sometimes it’s just easier, politically or otherwise, to buy a converged solution for a greenfield deployment and be done, even if the operational gains aren’t going to be what you were looking for.

Glitzy Jalopies, Meet Hyperconverged Infrastructure

We’re at an inflection point in IT, though: a crossroads where smart people who see all these problems meet copious CPU cycles, and the result is software-defined everything. It’s hyperconvergence—perhaps Convergence: The Revenge if this were a movie sequel. Nutanix and SimpliVity and Scale Computing and Piston Cloud Computing deliver hyperconverged infrastructures, in which compute and storage and virtualization are not separate: each rack unit of space is efficiently crammed full of RAM, CPU, SSD, and spinning disk. All the nodes communicate with each other over IP, so you don’t need obsolete Fibre Channel switches, just a pair of 10 Gbps IP switches. There’s a unified front end for it all, but it isn’t just gilding: it’s actually the software-defined heart of the beast. And on your own, you cannot build or assemble the sort of goodness it delivers.

Hyperconvergence is where it’s at, because it actually delivers on the promises the original converged architectures made: vast gains in operational efficiency, which in turn lead to better service delivery and lower overall costs. As a result, any hardware company that isn’t selling a hyperconverged solution is now on the clock. And frankly, any organization that’s considering converged solutions ought to ask some tough questions about what it’s looking at. Even if you bought a traditional converged solution today, I guarantee that your replacement for it in five years will be hyperconverged. Why not just start now?