Right now is a particularly interesting time in the world of IT. Historically, IT has swung back and forth between centralization and decentralization, closed and open, tightly controlled and loosely controlled. Lately, though, a third option has cropped up: centralized control with decentralized workloads.

In my opinion it's a function of speed, delivered through bandwidth and processing capacity. We now have enough bandwidth between our devices to start treating the device in the next rack column as a slightly less local version of ourselves. We also have enough bandwidth that we've outstripped the need for separate storage and data networks, and can converge them onto a single wire running a single set of protocols (most notably TCP and IP). On the processing side, each node is basically a datacenter unto itself: 16, 32, or 64 cores per server and terabytes of RAM. The advent of SSDs and PCIe flash rounds out the package, lessening the need for large monolithic collections of spindles (aka "traditional storage arrays").

The problem then becomes one of control. How do we take advantage of the performance and cost benefits that local processing brings, yet keep all the control, redundancy, and management benefits we had with a monolithic solution, while keeping complexity in check? And while we usually talk about doing this at great scale, can we do it on a small scale, too?
It started with virtual memory, then virtual machines (CPUs), then virtual storage, and now I/O virtualization (IOV), where the I/O path from the server to the peripheral is itself virtualized. Traditionally, I/O devices connect to the server through some sort of interface or adapter, e.g., a NIC (network interface card) or an HBA (host bus adapter), which sits inside the physical server.
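To make the idea concrete, here is a minimal, hypothetical sketch of that architecture. The class and adapter names are illustrative only, not any vendor's API: the physical NICs and HBAs live in an external I/O switch, and each server is handed lightweight virtual adapters that map back onto that shared pool.

```python
# Toy model of I/O virtualization: physical adapters sit in an external
# I/O switch, and servers see virtual adapters backed by that shared pool.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class PhysicalAdapter:
    name: str   # lives in the I/O switch, not inside any server
    kind: str   # "NIC" or "HBA"

@dataclass
class VirtualAdapter:
    server: str                  # which physical server sees this adapter
    kind: str
    backed_by: PhysicalAdapter   # shared physical device in the switch

class IOVirtualizationSwitch:
    """Hands out virtual adapters backed by a small shared physical pool."""
    def __init__(self, pool):
        self.pool = pool
        self.assignments = []

    def present_adapter(self, server, kind):
        # Pick any physical adapter of the right kind; real products schedule
        # bandwidth and queues here, which is out of scope for this sketch.
        phys = next(p for p in self.pool if p.kind == kind)
        vadapter = VirtualAdapter(server=server, kind=kind, backed_by=phys)
        self.assignments.append(vadapter)
        return vadapter

# Two physical adapters in the switch serve virtual adapters for several servers.
switch = IOVirtualizationSwitch([PhysicalAdapter("NIC-1", "NIC"),
                                 PhysicalAdapter("HBA-1", "HBA")])
for host in ("server-01", "server-02", "server-03"):
    switch.present_adapter(host, "NIC")
    switch.present_adapter(host, "HBA")

print(f"{len(switch.assignments)} virtual adapters backed by {len(switch.pool)} physical ones")
```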
I/O virtualization moves the adapters out of the server and into a switching box. This allows the adapters to be shared across many physical servers, which drives up adapter utilization (often less than 10%-15% in a non-virtualized world). Fewer adapters means less power and cooling. Adapters also take up a lot of space in servers, and moving them out allows 1U servers to be used where 2U servers were needed before.
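The utilization argument is easy to put numbers on. The back-of-the-envelope sketch below uses the 10%-15% figure from above; the server count, adapters per server, and 2x burst headroom are assumptions chosen purely for illustration.

```python
# Rough consolidation math for shared adapters in an I/O switch.
# The ~10%-15% utilization figure comes from the text; everything else
# (server count, adapters per server, headroom) is an assumption.
import math

servers = 40                 # hypothetical rack of 1U servers
adapters_per_server = 2      # e.g. one NIC + one HBA each, without IOV
avg_utilization = 0.12       # roughly 10%-15% busy per dedicated adapter

dedicated_adapters = servers * adapters_per_server
# Total offered load, expressed in "fully busy adapter" units:
offered_load = dedicated_adapters * avg_utilization
# Shared adapters in the I/O switch, sized with 2x headroom for bursts:
shared_adapters = math.ceil(offered_load * 2)

print(f"dedicated: {dedicated_adapters} adapters at {avg_utilization:.0%} busy")
print(f"shared:    {shared_adapters} adapters in the I/O switch")
# dedicated: 80 adapters at 12% busy
# shared:    20 adapters in the I/O switch
```

Even with generous headroom, a shared pool of a few dozen adapters can stand in for the eighty that would otherwise sit mostly idle inside individual servers, which is where the power, cooling, and rack-space savings come from.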