While getting much press, the Virtual Compute Environment coalition has provided little in the way of detailed descriptions of the hardware involved. Recently, however, VMware published a reference architecture document for a Vblock 1 running VMware View 4 (VDI), which can be found here.

From a storage perspective this configuration uses:

  • EMC CLARiiON CX4 disk arrays – no news here, but the exact model is specified as a CX4-480 with up to 471 TB of capacity.
  • Fibre Channel over Ethernet (FCoE) – also no news, since FC I/O on Cisco’s UCS always begins as FCoE, but this quote solidifies FCoE’s role: “Cisco’s Unified Computing System and Fiber Channel over Ethernet (FCoE) technologies are the backbone of the virtual infrastructure, providing a data center architecture for administrators that is easy to use and manage.”
  • A Cisco 6100 Series Fabric Interconnect to bridge from CEE/DCE to native FC and lossy Ethernet – we believe the exact model was a 40-port 6140.
  • Cisco MDS FC switches – again no news, but the lack of announced Brocade support limits the market.

However, the document does not say what kind of Converged Network Adapter (CNA) was used. With its UCS, Cisco offers three CNAs, all mezzanine cards: one uses chips from QLogic, another chips from Emulex, and both are intended to provide compatibility with existing SAN infrastructure. The third is Cisco’s own CNA, code-named “Palo”, which was recently made generally available as the Cisco UCS M81KR Virtual Interface Card (VIC). The VIC presents up to 128 virtual interfaces to the operating system on a given blade, and those interfaces can be dynamically configured by Cisco UCS Manager as either Fibre Channel or Ethernet devices. Unlike other network adapters, in which only the identity of each NIC and HBA is programmable through Cisco UCS Manager, both the type and the identity of each virtual interface are programmable. Any combination of Ethernet NICs and Fibre Channel HBAs, with their corresponding MAC addresses and WWNs, can be programmed onto the card, making the server’s I/O architecture programmable and supporting a dynamic provisioning model.
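To make that provisioning model concrete, here is a toy Python sketch of a card whose fixed budget of virtual interfaces can each be programmed as either a vNIC (with a MAC address) or a vHBA (with a WWN). The class names and example addresses are purely illustrative; this is not the UCS Manager API.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class VirtualInterface:
    """One virtual interface: both its type and its identity are programmable."""
    kind: Literal["ethernet", "fc"]
    identity: str  # a MAC address for a vNIC, a WWN for a vHBA

class VirtualInterfaceCard:
    """Toy model of a CNA exposing a fixed budget of programmable
    virtual interfaces (the VIC presents up to 128 per blade)."""
    MAX_INTERFACES = 128

    def __init__(self) -> None:
        self._interfaces: list[VirtualInterface] = []

    def provision(self, kind: str, identity: str) -> VirtualInterface:
        # Any mix of Ethernet and FC interfaces is allowed, up to the budget.
        if len(self._interfaces) >= self.MAX_INTERFACES:
            raise RuntimeError("all 128 virtual interfaces are in use")
        vif = VirtualInterface(kind, identity)  # type: ignore[arg-type]
        self._interfaces.append(vif)
        return vif

    def counts(self) -> dict:
        eth = sum(1 for v in self._interfaces if v.kind == "ethernet")
        return {"ethernet": eth, "fc": len(self._interfaces) - eth}

card = VirtualInterfaceCard()
card.provision("ethernet", "00:25:b5:00:00:01")  # vNIC with a programmed MAC
card.provision("fc", "20:00:00:25:b5:00:00:01")  # vHBA with a programmed WWN
print(card.counts())  # -> {'ethernet': 1, 'fc': 1}
```

The point of the sketch is the contrast with conventional CNAs: there, the split between NICs and HBAs is fixed in hardware and only the identities are programmable, whereas here the type of each interface is chosen at provisioning time.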

Many consider the VIC the most important element of a Unified Fabric, and it is likely that the VIC is also part of this reference architecture. The problem is that neither VMware nor any disk array vendor has explicitly announced support for it. Even the paper that details UCS, View and V-Max shows a QLogic-based card in use rather than a VIC, even though it covers the VIC in some detail.

Since one of FCoE’s big selling points is the preservation of existing SAN infrastructure, the lack of explicit guidance regarding CNAs in Vblocks is a further limitation on market adoption.

Nick Allen

Nick is a veteran storage industry guru with more than 40 years' experience in information technology who now consults on best practices in information storage at The Tod Point Group which he founded. Previously, Nick spent 20 years with Gartner Inc. as Vice President and Research Director where he focused on information storage, storage networking and storage management. Prior to Gartner, he was a senior management consultant specializing in system planning, capacity and storage performance management. Previously, he worked in IT management, operating systems, and business and scientific applications. He also brings his storage expertise to bear at Wikibon.
