After a beta program boasting more than 12,000 testers, VMware has finally entered the hyperconverged virtualization space with the GA release of VMware vSphere 5.5 Update 1. Among numerous bug fixes to vSphere, this release contains the GA code for VMware VSAN. VSAN is a shared-nothing clustered storage technology embedded in the vSphere hypervisor, ESXi, that pools local, direct-attached host storage to provide reliable, high-performing storage to a vSphere cluster. It combines traditional rotating disk media with flash-based solid-state drive (SSD) storage, forming clusters of up to 32 ESXi hosts over 1 Gbps or, preferably, 10 Gbps network connections.
Back in mid-2011, Dell acquired RNA Networks, a small startup out of Portland, Oregon. At the time of the purchase, RNA had a product, MVX, that employed three different ways to pool memory across multiple servers in order to accelerate workloads. One of them pooled memory as a storage cache, using system RAM to speed disk accesses. In the spring of 2013, we saw some of these features emerge again as Dell’s Fluid Cache for DAS (direct-attach storage) morphed to use the incredible speed of PCIe-based SSDs instead of RAM. Now, in late 2013 at Dell World, we finally get what many of us have been waiting for: the announcement of the expected availability of Dell Fluid Cache for SAN.
The board of Calxeda, the company trying to bring low-power ARM CPUs to the server market, has voted to cease operations in the wake of a failed round of financing. This is completely unsurprising to me, for a few different reasons.
Virtualization is more suited to the needs of IT
Calxeda’s view of the world competed directly with server virtualization in many ways. Take HP’s Project Moonshot as an example. It is a chassis with hundreds of small ARM-based servers inside it, each provisioned individually or in groups, but with small amounts of memory and disk. The problem is that this sort of model is complicated, fragile, inflexible, and not standards-based. At the end of the day, organizations want none of these things. Calxeda’s solution may save an enterprise money by consuming less power, but it spends those savings on increased OpEx elsewhere. In contrast, virtualization of larger, more powerful CPUs is more flexible on nearly every level, reduces the amount of hardware an enterprise must manage, and can help contain both capital and operational expenses while solving actual problems.
There are diminishing performance returns in extreme multi-core applications
Metcalfe’s Law was originally stated to convey the increasing value of a network as nodes join it, but it can be turned around: the communications overhead in a network grows as the square of the number of nodes. The same is true in multi-threaded applications, where the interprocess communication, locking, and other administrative work needed to coordinate hundreds of threads can end up consuming more CPU time than the actual computation. Calxeda’s vision of hundreds of CPU cores in a single system was ambitious, and it needed computer science, and the whole industry, to catch up to it. Enterprises don’t want research projects, so they chose fewer, faster cores and got their work done.
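A toy model makes the point concrete. Suppose each of n cores does a fixed amount of useful work, but coordination cost (locking, cache-line traffic, interprocess communication) grows with the number of core pairs, roughly n(n-1)/2, as the Metcalfe-style argument suggests. The constants below are illustrative assumptions, not measurements:

```python
# Toy model: useful throughput of an n-core system when per-pair
# coordination overhead eats into compute time.
# WORK and COORD are illustrative constants, not measured values.

WORK = 1.0      # useful work one core does per unit time
COORD = 0.001   # coordination cost per pair of cores

def effective_throughput(n):
    """Total useful work after subtracting pairwise coordination overhead."""
    overhead = COORD * n * (n - 1) / 2   # grows ~n^2 with core count
    return max(0.0, n * WORK - overhead)

if __name__ == "__main__":
    for n in (4, 64, 256, 1024, 2048):
        print(f"{n:5d} cores -> effective throughput {effective_throughput(n):8.1f}")
```

Under this model, per-core efficiency falls steadily as cores are added, and past a certain point adding cores actually reduces total useful work, which is the diminishing-returns problem in a nutshell.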
A limited enterprise market for non-x64 architectures
ARM isn’t x86/x64, so while there are increasing numbers of ARM-based Linux OS distributions, mostly thanks to the immense popularity of hobbyist ARM boards like the Raspberry Pi and the BeagleBoard, none are commercially supported, which is a prerequisite for enterprises. On the Windows side there is Windows RT, which runs on 32-bit ARM CPUs, but it is generally regarded as feature-poor and underpowered compared to Atom-powered x86 devices that run full installations of Windows 8. Windows RT isn’t a server OS, either, and there is very little third-party software for it, given the complexity of developing for the platform and the lack of ROI for a developer’s time and money. Why put up with all the complexity and limitations of a different architecture when you can get a low-power x86-compatible Atom CPU and a real version of Windows?
A limited market for 32-bit CPUs
On the server front, which is what Calxeda was targeting, enterprises have been consuming 64-bit architectures since the release of AMD’s Opteron CPUs in 2003. Ten years later, the idea of using 32-bit CPUs seems incredibly backward. Even embedded systems want more than 4 GB of RAM, which is all a 32-bit CPU can address. On the mobile front, where ARM has had the most impact, Dan Lyons recently wrote about how Apple’s 64-bit A7 chip has mobile CPU vendors in a panic: to compete with Apple, a handset maker now wants a 64-bit chipset. Calxeda had a 64-bit CPU in the works, but it was too far out to be useful in either market.
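That 4 GB ceiling is just the arithmetic of a 32-bit address space, a quick sanity check:

```python
# A 32-bit pointer can distinguish 2**32 byte addresses.
addressable_bytes = 2 ** 32
gib = addressable_bytes // (1024 ** 3)
print(f"32-bit address space: {addressable_bytes} bytes = {gib} GiB")

# A 64-bit pointer raises the ceiling to 2**64 bytes (16 EiB),
# which is why 64-bit chips matter even in embedded and mobile devices.
```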
I’ve never really seen the point of the “more smaller machines” movement, and I read the end of Calxeda as evidence supporting my position. I’m sure there are specialized cases where these architectures make sense, but the platform’s limitations are just too severe in the x64-dominated world of IT. In the end, Calxeda focused too tightly on specific problems, and in doing so ignored both the larger problems of the enterprise and the changes in the computing landscape that ultimately made it irrelevant.
A few weeks ago I had a chance to speak at length with Andres Rodriguez, the incredibly passionate founder and CEO of Nasuni. Nasuni is a highly innovative storage company providing storage infrastructure backed by the cloud. I’ve been writing a lot about caching and flash in virtual infrastructures, and went into the conversation thinking that they’d be another company improving storage performance with SSD, oh, and they had this cloud thing going on. Boy, was I wrong. After a lot of questions, I came away with a real respect for what they’re doing: attacking a number of big storage-related enterprise IT problems all at once.
I’ve written recently about a number of different products that are helping enterprises use flash as a cache to accelerate their traditional storage workloads. One product that is helping to push the whole market forward, if only by raising awareness of the options in this space, is VMware’s own vSphere Flash Read Cache.
Everybody in IT knows by now that flash memory is redefining the enterprise storage industry, mostly by decoupling performance from capacity. Most storage vendors are happy to just add flash to their existing product lines, often using it as cache, or as a storage tier handled transparently within the array. Few vendors take the opportunity to rethink the way storage works, though, from the basics of performance to how it meshes with the idea of public & private clouds. Coho Data, coming out of stealth mode with their first product, the DataStream, does just that.