All posts by Bob Plankers

Bob Plankers is an IT generalist with direct knowledge in many areas, such as storage, networking, virtualization, security, system administration, and data center operations. He has 17 years of experience in IT, is a three-time VMware vExpert and blogger, and serves as the virtualization & cloud architect for a major Midwestern university.

The SolidFire Storage System: Reducing Complexity at Scale

A few weeks ago, I had the opportunity to get to know SolidFire and its Storage System product, an all-flash array designed for on-premises use but with some attractive cloud functionality. At first glance, the array looks like many of those in the all-flash space. It’s built out of discrete nodes; uses 10 Gbps networking to present iSCSI; does inline deduplication, compression, and best-effort real-time background replication; has copy-on-write, space-efficient snapshots; and scales linearly.

Continue reading The SolidFire Storage System: Reducing Complexity at Scale

News: VMware VSAN GA Release Ships with vSphere 5.5 Update 1

After a months-long beta program boasting 12,000+ testers, VMware has finally thrown its hat into the hyperconverged virtualization space with the GA release of VMware vSphere 5.5 Update 1. Among numerous bug fixes to vSphere, this release contains the GA code for VMware VSAN. VSAN is a shared-nothing clustered storage technology embedded in the vSphere hypervisor, ESXi, that uses collections of local, direct-attached host storage to provide reliable and high-performing storage to a vSphere cluster. It relies on both traditional rotating disk media and modern flash and solid-state drive (SSD) storage, forming clusters of up to 32 ESXi hosts over 1 Gbps or, preferably, 10 Gbps network connections.

Continue reading News: VMware VSAN GA Release Ships with vSphere 5.5 Update 1

Dell Fluid Cache for SAN


Back in mid-2011, Dell acquired RNA Networks, a small startup out of Portland, Oregon. At the time Dell purchased it, RNA had a product, MVX, that employed three different ways to pool memory across multiple servers in order to accelerate workloads. One was a way to pool memory as a storage cache in order to speed disk accesses using system RAM. In the spring of 2013, we saw some of these features emerge again as Dell’s Fluid Cache for DAS (direct-attach storage) morphed to use the incredible speed of PCIe-based SSDs instead of RAM. Now, in late 2013 at Dell World, we finally get what many of us have been waiting for: the announcement of the expected availability of Dell Fluid Cache for SAN.

Continue reading Dell Fluid Cache for SAN

4 Reasons The Calxeda Shutdown Isn’t Surprising

The board of Calxeda, the company trying to bring low-power ARM CPUs to the server market, has voted to cease operations in the wake of a failed round of financing. This is completely unsurprising to me, for a few different reasons.

Virtualization is more suited to the needs of IT

Calxeda’s view of the world competed directly with server virtualization in many ways. Take HP’s Project Moonshot as an example. It is a chassis with hundreds of small ARM-based servers inside it, each provisioned individually or in groups, but with small amounts of memory and disk. The problem is that this sort of model is complicated, fragile, inflexible, and not standards-based. At the end of the day, organizations want none of these things. Calxeda’s solution may save an enterprise money by consuming less power, but those savings are spent on increased OpEx elsewhere. In contrast, virtualization of larger, more powerful CPUs is more flexible on nearly every level, reduces the amount of hardware an enterprise must manage, and can help contain both capital and operational expenses while solving actual problems.

There are diminishing performance returns in extreme multi-core applications

Originally stated to convey the increasing value of a network as more nodes join, Metcalfe’s Law can also be expressed another way: the communications overhead in a network grows as the square of the number of nodes in that network. The same is true of multi-threaded applications, where the interprocess communication, locking, and other administrative work needed to coordinate hundreds of threads ends up consuming more CPU time than the actual computational work. Calxeda’s vision of hundreds of CPU cores in a single system was ambitious, and it needed computer science and the whole industry to catch up to it. Enterprises don’t want research projects, so they chose fewer, faster cores and got their work done.
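The quadratic-growth claim above is easy to see with a little arithmetic: in a fully connected system of n nodes (or threads), the number of distinct communication pairs is n(n−1)/2, which grows roughly as n². A minimal sketch, with illustrative node counts of my own choosing:

```python
def pairwise_links(n: int) -> int:
    """Number of distinct communication pairs among n fully connected nodes."""
    return n * (n - 1) // 2

# Doubling the node count roughly quadruples the coordination overhead.
for n in (4, 32, 64, 256):
    print(f"{n:4d} nodes -> {pairwise_links(n):6d} pairwise links")
```

Going from 32 nodes (496 links) to 256 nodes (32,640 links) multiplies the coordination surface by more than 65x, which is why locking and interprocess chatter can swamp the useful work.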

A limited enterprise market for non-x64 architectures

ARM isn’t x86/x64. While there are increasing numbers of ARM-based Linux distributions, mostly thanks to the immense popularity of hobbyist ARM boards like the Raspberry Pi and the BeagleBoard, none are commercially supported, which is a prerequisite for enterprises. On the Windows side there is Windows RT, which runs on 32-bit ARM CPUs, but it is generally regarded as lacking features and underpowered compared to Atom-powered x86 devices that run full installations of Windows 8. Windows RT isn’t a server OS, either, and there is very little third-party software for it, due to the complexity of developing for the platform and the lack of ROI for a developer’s time and money. Why put up with all the complexity and limitations of a different architecture when you can get a low-power x86-compatible Atom CPU and a real version of Windows?

A limited market for 32-bit CPUs

On the server front, which is what Calxeda was targeting, enterprises have been consuming 64-bit architectures since the release of AMD’s Opteron CPUs in 2003. Ten years later, the idea of using 32-bit CPUs seems incredibly backward. Even embedded systems want more than 4 GB of RAM, which is the maximum a 32-bit CPU can address. On the mobile front, where ARM has had the most impact, Dan Lyons has a recent article about how Apple’s 64-bit A7 chip has mobile CPU vendors in a panic. Now, in order to compete with Apple, a handset maker wants a 64-bit chipset. Calxeda had a 64-bit CPU in the works, but it was too far out to be useful in either market.
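The 4 GB ceiling mentioned above falls straight out of the address width: a 32-bit pointer can name only 2^32 distinct byte addresses. A quick back-of-the-envelope check:

```python
# A 32-bit CPU can address at most 2**32 distinct bytes of memory.
address_bits = 32
max_addressable_bytes = 2 ** address_bits

# Convert to GiB (2**30 bytes per GiB) to recover the familiar 4 GB limit.
gib = max_addressable_bytes // 2 ** 30
print(f"{address_bits}-bit addressing caps memory at {gib} GiB")
```

A 64-bit address space, by contrast, spans 2^64 bytes (16 exbibytes), far beyond any physical RAM a server ships with, which is why the jump to 64 bits removes the ceiling for all practical purposes.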

I’ve never really seen the point of the “more, smaller machines” movement, and I interpret the end of Calxeda as evidence supporting my position. I’m sure there are specialized cases out there where these architectures make sense, but the extreme limitations of the platform are just too much in the x64-dominated world of IT. In the end, Calxeda focused too tightly on specific problems, and in doing so ignored both the larger problems of the enterprise and the changes in the computing landscape that ultimately made it irrelevant.

Nasuni: Rethinking Every Aspect of Your Enterprise Storage

A few weeks ago I had a chance to speak at length with Andres Rodriguez, the incredibly passionate founder and CEO of Nasuni. Nasuni is a highly innovative storage company providing storage infrastructure backed by the cloud. I’ve been writing a lot about caching and flash in virtual infrastructures, and went into the conversation thinking that they’d be another company improving storage performance with SSD, oh, and they had this cloud thing going on. Boy, was I wrong. After a lot of questions, I came away with a real respect for what they’re doing: attacking a number of big storage-related enterprise IT problems all at once. Continue reading Nasuni: Rethinking Every Aspect of Your Enterprise Storage

VMware vSphere Flash Read Cache

I’ve written recently about a number of different products that are helping enterprises use flash as a cache to accelerate their traditional storage workloads. One product that is helping to push the whole market forward, if only by raising awareness of the options in this space, is VMware’s own vSphere Flash Read Cache. Continue reading VMware vSphere Flash Read Cache
