While a demonstration session at VMworld 2009 in San Francisco attracted much attention from the network and server virtualization community, it curiously received little attention from their storage counterparts. However, the demo showed, with little fanfare, what may be an important technical advance: a possible solution to the long-distance cache coherency and distributed lock management problem that has plagued the industry for decades. If so, the storage vendor community should be taking more careful notice. A video of this standing-room-only session is available at the Blip TV link.
In this joint “super” session (TA-3105), EMC, VMware, and Cisco demonstrated “Long Distance Live vMotion”. Cisco subsequently published a white paper on this, and in the middle of that paper is the following curious statement:
“Extended VLAN and Active-Active Storage – An extended VLAN and active-active storage solution incorporates technologies that make data actively available at both the local and remote data centers at all times. The LAN extends across the data centers, and storage is provisioned in both data centers. Data is replicated across data centers using synchronous replication technology and rendered in an active-active state by the storage manufacturer. Normally when data is replicated, the secondary storage is locked by the replication process and is available to the remote server only in a read-only state. In contrast, active-active storage allows both servers to mount the data with read and write permissions as dictated by the VMware vMotion requirements.”
One interesting thing about this paragraph is that it makes no reference to EMC, Cisco’s partner in the joint session. Another is that this is the only mention of storage in the paper; the reader is left to speculate on exactly what is meant.
On the EMC side, Chad Sakac discussed the session extensively in his blog entry, which includes the following on an Active/Active configuration in use case option 2 of the demo (slightly edited for clarity):
“Option 2 is a preview of something to come from EMC. We had a lot of internal debate about whether or not to show this – as historically, EMC didn’t show things prior to GA, though this is starting to change. We thought: there was a lot of interest; we had data on solution behavior; enough customers would like Options 1a/b [two other use cases in the demo] but desire a faster transit time; the solution is relatively close. Based on all that, we decided we should share the current data and demonstrate it. This also allows us to start to get customer feedback on our approach.
For Option 2, EMC has a primary locus of effort for this use case (as we think it meets all the requirements the most broadly), and it will be the first one available from EMC as a “hardware accelerated” option (it simply looks like a vMotion; the underlying storage mechanism is transparent to vSphere). …I know that this is very exciting, but PLEASE: don’t immediately reach out to your EMC team and ask to get in on this – it will only slow us down. We’re on it around the clock – let us focus on finishing with the quality customers expect from EMC.”
He discusses this further in the second video in the same post:
- “Note: changing datastore to one hosted at the secondary site. Both sets of datastores must be visible to each vSphere cluster.”
- “Option 2: Long Distance vMotion with advanced Active/Active Storage”: “one that leverages technology coming from EMC around active/active storage virtualization across distance”.
VMware also reversed course and announced that it was now supporting this configuration.
What Does Active/Active Mean in This Demo?
At first glance, using traditional thinking, one might assume that active/active means that two sides of a storage controller or cluster are active, or that two storage subsystems at different locations are active. However, vMotion requires all ESX hosts to have read/write access to the same shared storage, so conventional replication, in which the secondary data center does not have read/write access, will not work. Nor will other normal synchronization strategies. Instead, the demo showed two VM datastores at two locations treated as a single entity, with both sides R/W active. This raises some major questions, since shared storage does not work well over bandwidth-limited, high-latency WAN links and requires either some kind of synchronization or a different way to present VM datastores on both sides.
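To make the constraint concrete, here is a minimal sketch (in Python, with hypothetical class and site names, not any vendor's actual implementation) of why one-way synchronous replication cannot satisfy vMotion's shared-storage requirement: reads succeed at both sites, but writes are rejected at the read-only secondary.

```python
class ReplicatedLUN:
    """Toy model of one-way synchronous replication: the primary
    accepts writes and mirrors them; the secondary is read-only."""

    def __init__(self):
        self.primary = {}    # block number -> data at the primary site
        self.secondary = {}  # synchronously mirrored copy

    def write(self, site, block, data):
        if site != "primary":
            # This is exactly what breaks vMotion: the ESX host at the
            # remote site needs read/write access but only gets read.
            raise PermissionError("secondary replica is read-only")
        self.primary[block] = data
        self.secondary[block] = data  # synchronous mirror

    def read(self, site, block):
        copy = self.primary if site == "primary" else self.secondary
        return copy[block]


lun = ReplicatedLUN()
lun.write("primary", 0, b"vmdk header")
assert lun.read("secondary", 0) == b"vmdk header"  # reads work at both sites
try:
    lun.write("secondary", 1, b"remote write")     # but remote writes do not
except PermissionError as err:
    print(err)  # prints "secondary replica is read-only"
```

The point of the sketch is only the asymmetry: until a failover promotes the secondary, one site can never mount the datastore read/write, which vMotion requires of both.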
Answering questions after the session at VMworld, Sakac explained that this is where EMC’s solution starts to shine. It involves a SAN solution with additional layers of virtualization built into it, so two physically separated ESX/vSphere servers share RAM, CPU, and storage in a way that turns both hosts and their storage into a single logical entity. When doing a vMotion, no additional steps are needed on the vSphere platform to make all data (VMDKs, etc.) available at the secondary datacenter, as the SAN itself does all the heavy lifting. This technique is completely transparent to the vSphere environment, as only a single LUN is presented to the two hosts. So a single vMotion and a “logical” storage vMotion (actually a hyper-speed synchronous dual write) are combined into one vMotion that takes only minutes or seconds to execute.
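A minimal sketch of that idea, under loose assumptions (hypothetical class and site names; a single local lock standing in for real distributed lock management; latency and failure handling ignored): both sites accept writes against what looks like one LUN, and each write is committed to both copies before it is acknowledged.

```python
import threading


class ActiveActiveLUN:
    """Toy model of an active/active distributed volume: either site may
    write; each write is synchronously applied to both copies under a
    lock that stands in for a real distributed lock manager."""

    def __init__(self):
        self.copies = {"site_a": {}, "site_b": {}}
        # A real system needs a *distributed* lock and cache-coherency
        # protocol spanning the WAN; a local lock only models the idea.
        self.lock = threading.Lock()

    def write(self, block, data):
        # Synchronous dual write: not acknowledged until both copies match.
        with self.lock:
            for copy in self.copies.values():
                copy[block] = data

    def read(self, site, block):
        return self.copies[site][block]


lun = ActiveActiveLUN()
lun.write(0, b"written at site A")
lun.write(1, b"written at site B")
# Both hosts see one coherent LUN, so vMotion needs no separate storage step.
assert lun.read("site_a", 1) == lun.read("site_b", 1)
```

The hard part the sketch hides is precisely the long-distance coherency and locking discussed above: making that lock and dual write correct and fast across a WAN is what the demo suggested EMC had solved.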
This sounds as if EMC has indeed solved the long-distance cache coherency and distributed lock management problem, at least over short distances, on a two-node system, and with fairly small latencies; the demo was certainly small-scale and far from a real-world environment.
This leaves several important questions unanswered: Will it scale, and if so, how far? Can it work in a production environment? What happens if it loses a node? Does the storage split the data between nodes or keep synchronized copies of the full dataset at each physical location? EMC seems to be working to demonstrate that the system can scale, but only time will tell when, or indeed whether, this will be productized.