When the VCE coalition first formed in late 2009, its product, the Vblock, was the industry’s first serious attempt at delivering converged IT systems. The first models were the Vblocks 0, 1, and 2, addressing small, medium, and large enterprise IT use cases. Over time, these evolved into the Vblock 300 and Vblock 700, relatively high-end computing options. On February 21, 2013, VCE announced the re-addition of smaller Vblock models, the Vblock 100 and Vblock 200, once again allowing the product line to cover the small & medium-sized opportunities in the market. It’s been a bit over a month since VCE announced these changes, and with the products now generally available, let’s look at some of the technical details, then use those details to draw some conclusions about these products.
All Vblocks ship in their own 19-inch rack cabinets. With the 100 you now have a choice, though: 24U cabinets on the Vblock 100 BX models or 42U cabinets on the DX models. This directly affects future expandability, as each model is designed to accept additional storage and computing resources. The BX only has room for one more server and three more disk enclosures. The DX is more expandable, with room for up to eight servers and four additional disk enclosures.
The two models have different disk arrays. The smaller BX has an EMC VNXe3150 disk array, while the DX has its slightly larger brother, the VNXe3300. Each array has the option of two different disk enclosures, one geared towards performance and one towards capacity.
Physical network connectivity within the Vblock is IP-based, running through a pair of Cisco Catalyst 3750-X Ethernet switches, with 24 gigabit ports on the BX and 48 on the DX, plus a pair of 10 Gbps uplinks that connect to the VNXe storage controllers. Logical connectivity is supplied by the Cisco Nexus 1000V virtual switch running inside VMware vSphere. The Cisco UCS C220 M3 servers in these Vblocks have 64 GB of RAM in the BX models and 96 GB in the DX models. Each server has two six-core Intel E5-2640 CPUs. Up to half of the servers in a Vblock can be used as “bare metal,” without a hypervisor. Of course, on these smaller models that isn’t a lot.
The Vblock 200 follows the same basic recipe as the other Vblocks but is geared towards higher I/O than the Vblock 100. The storage has been replaced with an EMC VNX 5300, EMC’s mid-tier multiprotocol disk array option, and Ethernet connectivity is supplied by a pair of Cisco Nexus 5548 10 Gbps switches.
Along with the Vblocks 100 and 200, VCE announced incremental refreshes of the Vblock 300 and 700 as well as a new management plug-in for VMware vSphere, VCE Vision. This software release finally starts delivering on the promise of integrated management, and treats the Vblock as a single managed unit rather than a delivered set of discrete components.
It’s nice to see VCE no longer ignoring the smaller end of the market, especially for enterprises that have high-end Vblocks in their main data center and would like to drop a smaller Vblock out at a remote site. VCE finally adding better management capabilities is a huge win, too. One of the promises of converged infrastructure is converged management, and having to manage a setup such as a Vblock as a collection of discrete components runs counter to why many people purchase these solutions.
What also stands out to me as a problem is the density of these low-end Vblocks. Most converged infrastructure vendors intentionally limit choice in order to drive up standardization and drive down price and total cost of ownership. That means limited options for expansion and limited upgradeability of the components within, which is usually a justified tradeoff.
Generally speaking, virtual environments tend to consume RAM at a much faster pace than they do CPU, especially in the small & medium enterprise space. With no memory expansion options and limited server expansion options, customers may find themselves out of capacity pretty quickly, despite having plenty of CPU and storage. Virtual desktops are a great example: higher consolidation ratios are normal there, and it would be easy to outstrip the RAM capacity of these Vblocks. VCE also does not offer an option for SSD or flash storage, which is becoming a crucial part of even small VDI deployments.
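To make that concrete, here’s a back-of-the-envelope sketch in Python. The host figures come from the specs above (a fully expanded Vblock 100 DX with eight servers, 96 GB of RAM, and twelve cores each); the per-VM demand figures — 2 GB of RAM and one vCPU per desktop, with an 8:1 vCPU-to-core consolidation ratio — are purely illustrative assumptions on my part, not VCE sizing guidance.

```python
def vm_capacity(hosts, ram_gb_per_host, cores_per_host,
                vm_ram_gb=2.0, vcpus_per_vm=1, vcpus_per_core=8):
    """Return (ram_limited_vms, cpu_limited_vms) for a cluster.

    The smaller of the two numbers is the real ceiling; whichever
    resource produces it is the one the environment runs out of first.
    """
    ram_limit = int(hosts * ram_gb_per_host / vm_ram_gb)
    cpu_limit = int(hosts * cores_per_host * vcpus_per_core / vcpus_per_vm)
    return ram_limit, cpu_limit

# Fully expanded Vblock 100 DX: 8 servers, 96 GB RAM, 12 cores each.
ram_vms, cpu_vms = vm_capacity(hosts=8, ram_gb_per_host=96, cores_per_host=12)
print(ram_vms, cpu_vms)  # 384 384-desktop RAM ceiling vs. 768 nominal CPU ceiling
```

Under those assumptions, RAM caps the environment at 384 desktops while the CPUs could nominally carry 768 — memory runs out at half the CPU capacity, which is exactly the imbalance I’m describing.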
It’s clear that VCE is selecting components to achieve a particular price point, but the use of C-series UCS servers instead of the more flexible and space-efficient B-series blades is a strange choice to me. The B-series would also provide better networking options than the aging architecture of the Catalyst 3750 switches can offer, including possible connectivity with legacy data center infrastructure like Fibre Channel SANs. As it stands, though, the Vblock is what it is: a unified IT island in your data center.