I have been doing some support work for an SMB that runs VMware Server. Their VMs started life as XenServer VMs but, due to networking and other issues, were converted to VMware VMs using V2V technology. After an upgrade to VMware Server 2, however, they suddenly stopped working properly.
The problem now is that all the VMs crash under any kind of disk or network load, even though they ran fine on previous versions of VMware Server. Worse, the upgrade changed not only the version of VMware Server but also the version of the underlying Red Hat Enterprise Linux, while nothing changed inside the VMs themselves. That makes problem determination difficult at best.
So why is this an issue? Backwards compatibility is a must for any virtualization platform: older VMs should not crash when running on newer systems.
The solution is as painful as it is laborious: each of this customer's VMs needs to be rebuilt. The problem apparently lies in the IDE and network drivers originally installed in the XenServer-based VMs. The V2V conversion never truly removed these drivers, so now, when throughput intensifies, the VM crashes.
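Before committing to a full rebuild, it is worth checking a guest for leftover Xen paravirtual drivers. The sketch below is a minimal, hedged example: it filters lsmod-style output for module names beginning with "xen" (names such as xennet and xenblk were common in RHEL-era Xen guests; yours may differ, and the exact cleanup steps depend on the distribution).

```shell
#!/bin/sh
# Sketch: detect Xen paravirtual drivers still present in a converted guest.
# Assumption: Xen PV modules follow the common "xen*" naming convention
# (e.g. xennet, xenblk); adjust the pattern for your guest OS.

find_xen_modules() {
    # Reads lsmod-style output on stdin (header line first),
    # prints the names of any Xen PV modules found.
    awk 'NR > 1 && $1 ~ /^xen/ { print $1 }'
}

# On a live guest you would run:
#   lsmod | find_xen_modules
# Any hits suggest the V2V conversion left Xen drivers behind. On RHEL,
# removing their entries from /etc/modprobe.conf and rebuilding the
# initrd (mkinitrd) is the usual next step, though a rebuild may still
# be the safer option, as it was here.
```

Running this first at least tells you whether a given VM is affected before you schedule the downtime to rebuild it.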
Since we are rebuilding the VMs anyway, we are starting with 64-bit VMware VMs, as this customer will later move to either ESX or ESXi.
With the agility promised by the cloud, and VMs ever in motion, we users of these products should not have to worry about backwards-compatibility issues or V2V tools that do not clean up everything properly.
My wish for the new year is better compatibility between VMware, Hyper-V, Xen, and KVM VMs, such that a VM can move between any hypervisors without requiring laborious rebuilds. We should work towards hypervisor independence for the future of cloud computing and its promise of agility.