While the Virtual Compute Environment coalition has received much press, it has provided little in the way of detailed descriptions of the hardware involved. Recently, however, VMware published a reference architecture document for a Vblock 1 running VMware View 4 (VDI) that can be found here.
From a storage perspective this configuration uses:
While a demonstration session at VMworld 2009 in San Francisco attracted much attention from the network and server virtualization communities, it curiously drew little notice from their storage counterparts. Yet, with little fanfare, the demo showed what may be an important technical advance: a possible solution to the long-distance cache coherency and distributed lock management problem that has plagued the industry for decades. If so, the storage vendor community should be taking more careful notice. A video of this standing-room-only session is available on Blip TV. Continue reading EMC Hints at Storage Technology Breakthrough in VMworld Demo
FCoE (Fibre Channel over Ethernet) is a relatively new industry effort designed to combine the lossless delivery of FC with the ubiquity of Ethernet. FCoE is essentially Fibre Channel (FC) frames encapsulated in Ethernet packets, using Ethernet links instead of Fibre Channel links. At the upper layers it is still Fibre Channel, which allows existing FC infrastructures to be preserved – a major design goal. FCoE allows storage and network traffic to be converged onto one set of cables, switches and adapters, thereby reducing cabling, energy consumption and heat generation. Storage management using an FCoE interface has the same look and feel as storage management with traditional FC interfaces. However, FCoE operates at Layer 2 only, and this fact greatly limits its capabilities. This industry standards effort depends on the coordinated work of three standards bodies:
- IEEE for Ethernet extensions
- INCITS/ANSI T11 committee for the Fibre Channel protocols
- IETF for routing
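As a rough illustration of the encapsulation described above, the sketch below wraps a raw FC frame in an Ethernet frame using the FCoE EtherType (0x8906) and the FC-BB-5 header/trailer layout. The MAC addresses and SOF/EOF delimiter values here are purely illustrative, not taken from the article.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes,
                         sof: int, fc_frame: bytes, eof: int) -> bytes:
    """Wrap a raw FC frame (24-byte header + payload + CRC) in an
    FCoE Ethernet frame, following the FC-BB-5 layout sketched here."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # FCoE header: 4-bit version + 100 reserved bits (13 bytes of zeros
    # when version = 0), then the Start-of-Frame delimiter byte.
    fcoe_header = bytes(13) + bytes([sof])
    # FCoE trailer: End-of-Frame delimiter byte plus 3 reserved bytes.
    fcoe_trailer = bytes([eof]) + bytes(3)
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# A maximum-size FC frame: 24-byte header + 2112-byte payload + 4-byte CRC.
fc_frame = bytes(24 + 2112 + 4)
frame = encapsulate_fc_frame(b"\x0e\xfc\x00\x00\x00\x01",  # example MACs
                             b"\x00\x1b\x21\x00\x00\x02",
                             0x36, fc_frame, 0x41)  # illustrative SOF/EOF codes
print(len(frame))  # 2172
```

Note that a maximum-size FC frame yields a 2172-byte Ethernet payload, well over the classic 1500-byte MTU – which is why FCoE requires "baby jumbo" frames on every link in the path.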
The Good News Continue reading FCoE Update – The Good News and The Bad News
Server I/O remains a big challenge in virtualized environments. The results of an interesting online survey recently conducted by Xsigo Systems highlight these challenges. They show that users continue to suffer I/O bottlenecks and I/O-related outages, and spend too much time on I/O issues and cable maintenance. This comes as no surprise, as the consolidation brought by server virtualization puts even more demands on I/O infrastructures. In our article on I/O virtualization we highlighted emerging vendors at VMworld 2009 that address these issues – particularly cabling and I/O-related outages. Continue reading Xsigo IT Survey Reveals Need for I/O Virtualization in Today’s Virtualized Data Center
In a recent blog post, Nick Triantos sneak previews NetApp’s SnapManager for Hyper-V (SMHV). SnapManager is a popular product that automates and manages the creation, restoration and deletion of hardware-based point-in-time snapshot copies provided by NetApp’s storage systems.
SMHV is backup and recovery software that backs up VMs in groups according to protection policies set by the backup administrator, and can recover those VMs individually. It integrates with the Microsoft Hyper-V VSS (Volume Shadow Copy Service) writer to quiesce the Hyper-V VMs before taking a consistent snapshot of the target virtual machines. SMHV also provides a VSS requestor component that Continue reading NetApp Previews SnapManager for Hyper-V; Grows Presence in Microsoft’s Ecosystem
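The ordering behind this kind of VSS-integrated backup can be sketched as follows. This is a minimal simulation of the requestor/writer handshake – the function names are hypothetical stand-ins, not NetApp's or Microsoft's actual APIs – showing why all VMs in a group are quiesced before a single hardware snapshot is taken.

```python
log = []  # records the sequence of operations for inspection

def writer_freeze(vm):
    """Hyper-V VSS writer role: flush and quiesce the VM's I/O."""
    log.append(("freeze", vm))

def writer_thaw(vm):
    """Writer role: resume normal I/O after the snapshot."""
    log.append(("thaw", vm))

def array_snapshot(volume):
    """Array role: take a hardware point-in-time copy of the volume."""
    log.append(("snapshot", volume))

def backup_vm_group(vms, volume):
    """Requestor role: quiesce every VM in the group, take one
    snapshot covering them all, then thaw – so the whole group
    shares a single consistent point in time."""
    for vm in vms:
        writer_freeze(vm)
    try:
        array_snapshot(volume)
    finally:
        for vm in vms:
            writer_thaw(vm)

backup_vm_group(["vm1", "vm2"], "lun0")
```

Even though the snapshot covers the group, individual-VM restore remains possible because each VM's files occupy a distinct region of the snapshotted volume.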
It started with virtual memory, then virtual machines (CPUs), then virtual storage, and now I/O virtualization (IOV) – where the I/O path from the server to the peripheral is itself virtualized. Traditionally, I/O devices connect to the server through some sort of interface or adapter – e.g., a NIC (Network Interface Card) or an HBA (Host Bus Adapter) – located inside the physical server.
I/O virtualization moves the adapters out of the server and into a switching box. This allows the adapters to be shared across many physical servers, which drives up adapter utilization – often less than 10%-15% when each adapter is dedicated to a single server. Fewer adapters mean less power and cooling. Adapters also take up a lot of space in servers, and moving them out allows 1U servers to be used instead of 2U ones. Continue reading I/O Virtualization Shines at VMworld 2009
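The consolidation argument above can be put into back-of-the-envelope numbers. This sketch assumes the ~10% utilization figure cited in the text; the server counts and the 60% target utilization for shared adapters are illustrative assumptions, not data from the article.

```python
import math

servers = 20
adapters_per_server = 2     # e.g., one NIC port and one HBA port each (assumed)
util_dedicated = 0.10       # typical utilization of a dedicated adapter (from text)

# Actual work being done, expressed in "fully busy adapter" units:
busy_equiv = servers * adapters_per_server * util_dedicated   # 4.0

# Size the shared adapter pool in the IOV switch for a 60% target utilization:
target_util = 0.60
shared_adapters = math.ceil(busy_equiv / target_util)

print(shared_adapters)  # 7 shared adapters replace 40 dedicated ones
```

Under these assumptions, 7 shared adapters carry the same load as 40 dedicated ones – the kind of reduction in adapters, cabling, power and cooling the excerpt describes.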