When server virtualization first gained its foothold, one of the key reasons for going virtual was the cost savings from consolidating many servers onto one physical box. It stands to reason that the same principle applies to other aspects of virtualization, and we are now seeing that consolidation reach the I/O layer. This is the point where virtual I/O will really start to take off. After all, haven’t we all seen this nightmare during our careers?
Three approaches to I/O Virtualization (IOV) seem to be taking hold within the virtual environment. At VMworld, the IOV companies were out talking to people about their products and approaches. These three methods are:
- Converged Network Adapters used within Cisco UCS, HP Matrix, etc.
- Top-of-rack IOV appliances such as the Xsigo device
- PCIe Extenders
Each of these provides unique benefits to your virtual environment, but which should you use? First, we need to know what each approach brings to the table.
Over the past year or so I have been thinking quite a bit about the direction networking is taking within virtualization. In some ways, security appears to have been forgotten, or relegated to an ‘encrypt and forget’ approach. Yet it takes considerable knowledge and time to properly set up the backbone of ‘encrypt and forget’ network security; it does not happen automatically. Meanwhile, we have a proliferation of technologies designed to cut down on cable clutter and thereby consolidate the network. These are all very important concepts. Security practitioners like myself realize that this type of consolidation WILL happen. So what tools are required to either ‘encrypt and forget’ or to protect these consolidated networks?
While the Virtual Compute Environment coalition received much press, it provided little in the way of detailed descriptions of the hardware involved. Recently, however, VMware published a reference architecture document for a Vblock 1 running VMware View 4 (VDI), which can be found here.
From a storage perspective this configuration uses:
- EMC CLARiiON CX-4 disk arrays – No news here, but the exact model is detailed as model 480 with up to 471 TB of capacity.
FCoE (Fibre Channel over Ethernet) is a relatively new industry effort designed to combine the lossless delivery of FC with the ubiquity of Ethernet. FCoE is essentially Fibre Channel (FC) frames encapsulated in Ethernet frames, running over Ethernet links instead of Fibre Channel links. At the upper layers it is still Fibre Channel, which allows existing FC infrastructures to be preserved – a major design goal. FCoE allows storage and network traffic to be converged onto one set of cables, switches, and adapters, thereby reducing cabling, energy consumption, and heat generation. Storage management using an FCoE interface has the same look and feel as storage management with traditional FC interfaces. However, FCoE is Layer 2 only, and that fact greatly limits its reach: FCoE frames cannot be routed between IP subnets. This industry standards effort depends on the coordinated work of three standards bodies:
- IEEE for Ethernet extensions
- INCITS/ANSI T11 committee for the Fibre Channel protocols
- IETF for routing
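To make the encapsulation concrete, here is a minimal Python sketch of how an FC frame gets wrapped for transport over Ethernet. This is illustrative only, not a working initiator: the field sizes follow the FC-BB-5 layout (EtherType 0x8906, a version/reserved header, and SOF/EOF delimiters), the specific SOF/EOF byte values are assumptions, and real implementations also compute the Ethernet FCS and rely on lossless Ethernet extensions (PFC) underneath.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

def encapsulate_fc_frame(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an FCoE Ethernet frame (sketch).

    Layout: Ethernet header | FCoE header (version + reserved + SOF) |
    FC frame | EOF + reserved. The FCS and pad handling are omitted.
    """
    # Standard 14-byte Ethernet header: dst MAC, src MAC, EtherType.
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # FCoE header: 4-bit version plus reserved bits (13 zero bytes here),
    # followed by the 1-byte start-of-frame delimiter.
    sof = b"\x36"                 # assumed SOF delimiter code
    fcoe_header = b"\x00" * 13 + sof
    # Trailer: 1-byte end-of-frame delimiter plus 24 reserved bits.
    eof = b"\x41"                 # assumed EOF delimiter code
    trailer = eof + b"\x00" * 3
    return eth_header + fcoe_header + fc_frame + trailer
```

The key design point is visible in the code: the FC frame travels through untouched, which is why the upper layers still see ordinary Fibre Channel and existing FC management tools continue to work.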
The Good News
With ratification of the Fibre Channel over Ethernet (FCoE) standard this past June, this new storage networking technology is on a fast track for adoption. The standard went from proposal to approval in approximately two years, far faster than average. NetApp is now the first vendor shipping native FCoE storage systems.