In Virtualization Performance Management – What if we Started Over, we suggested that for virtualized environments to become great platforms for business critical and performance critical applications, much of the infrastructure that supports virtualization might have to be reinvented. The assertion behind this suggestion is that running dynamic virtualized and cloud based workloads on legacy infrastructure is like driving a Ferrari on a gravel road – you can do it, but you will not be taking advantage of the Ferrari while doing so. We are now starting to see signs that some very bright and experienced technical people are getting together with leaders in the venture capital community to start to make this happen.

The key points of the previous “What if we Started Over” article were:
- People who own and are responsible for business critical and performance critical applications see little to no benefit in moving their applications from dedicated hardware to shared virtualized infrastructure
- The people who own the virtualized infrastructure are going to have to guarantee the performance of these applications on their infrastructure in order to get consent from the application owners for their applications to become virtualized
- In a virtualized environment, performance means application response time and infrastructure latency, not resource utilization. This was explored in detail in this post.
- In order for the infrastructure to become instrumented end-to-end from a latency perspective, the infrastructure is going to have to participate in the process of measuring infrastructure response time (latency). In other words, the infrastructure will have to become self-instrumenting.
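The point that utilization is the wrong performance metric can be illustrated with elementary queueing theory. This sketch is not from the original article; it uses the standard M/M/1 approximation, in which mean response time is R = S / (1 - U) for service time S and utilization U, so latency grows nonlinearly even while utilization still looks "healthy":

```python
# Illustrative sketch (not from the article): a standard M/M/1 queueing
# approximation of why utilization is a poor proxy for performance.
# Mean response time R = S / (1 - U), where S is the service time and
# U is utilization, so latency explodes as U approaches 100%.

def response_time(service_time_ms: float, utilization: float) -> float:
    """Mean response time of an M/M/1 queue, in milliseconds."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_ms / (1.0 - utilization)

if __name__ == "__main__":
    # A device with a 10 ms service time: at 50% busy, requests take
    # 20 ms; at 95% busy, the same requests take 200 ms.
    for u in (0.50, 0.70, 0.90, 0.95):
        print(f"utilization {u:.0%}: response time {response_time(10.0, u):.1f} ms")
```

A monitoring tool watching only the utilization column would report the 90% and 95% cases as merely "busy," while application owners experience a tenfold or twentyfold increase in latency, which is why the article argues for measuring response time directly.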
We are now starting to see quite a bit of progress in this regard from some very serious and well financed startups and young companies. Each of these companies is, in its own way, participating in the design and delivery of a next generation network and storage infrastructure built from the ground up on the assumption that the workloads running on it are virtualized. This is in direct contrast to today’s practice of running virtualized workloads on hardware that was for the most part not designed for virtualization. To be perfectly clear, today’s storage, storage network, LAN, and server infrastructure were not designed with virtualization as the use case. The vendors profiled below are starting to change this.
Xsigo

Xsigo is a network virtualization company whose approach puts just one 20 Gbps or 40 Gbps InfiniBand port, or one 1 Gbps or 10 Gbps high speed Ethernet port, in each server. That port is then cabled to the Xsigo I/O Director, where all other connections to LANs and SANs are centrally configured and virtualized. This dramatically reduces cabling and configuration complexity (no more HBAs in servers), which saves money and improves agility.
Virtensys

Virtensys is also an I/O virtualization company, but instead of Xsigo’s approach it puts just one PCI Express card in each server, which is then cabled to its I/O virtualization appliance, where connectivity to the rest of the environment (HBAs, LAN, etc.) is handled. Again this delivers substantial cost savings and substantial gains in configuration agility.
Big Switch Networks
Big Switch has just come out of stealth, but boasts an impressive set of founders and investors. Its goals are to completely hide the underlying physical network from the hosts on the virtual network, to allow VMware and server administrators to manage their supporting virtual networks, and to allow the team that owns and runs the LAN to add to it or reconfigure it without impacting the layer of VMware hosts above the network. If this works, it will do for the management of LANs what VMware has done for the management of servers.
Tintri

Tintri has taken a clean slate approach to the question of how storage should be designed and managed in support of VMware. Tintri VMstore™ is VM-aware and eliminates complexity by relating the storage infrastructure directly to the VMs and business applications. There are no LUNs, volumes, tiers, RAID groups, or other traditional storage objects that create complexity in a virtualized environment. VMstore also includes built-in management of infrastructure latency, and the ability for administrators to pin a workload to flash storage in performance critical situations.
V3

V3 has taken a clean slate approach to what the back end server infrastructure supporting VDI should be. It comes in the form of a series of hardware appliances that support between 50 and 400 concurrent VDI guests per appliance. The company claims that the user experience with VDI will exceed that of applications installed locally on a PC, due to the degree of VDI optimization that V3 has built into its system.
Virsto

Virsto is a software solution that plugs into Windows Hyper-V and addresses the “I/O blender”. The I/O blender comes into being when the one-to-one mapping of servers to storage is replaced by the many-to-one mapping of VMs to storage. This many-to-one mapping turns what used to be separate streams of sequential I/O into one stream of random I/O. Virsto effectively turns this commingled stream back into separate streams of sequential I/O, which has the effect of dramatically boosting storage performance and VM density.
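The I/O blender effect described above can be sketched in a few lines of code. This is a hypothetical illustration of the general idea, not Virsto's actual implementation: interleaving the sequential request streams of several VMs produces a stream that looks random at the storage layer, and demultiplexing the commingled stream by VM restores sequential access per stream.

```python
# Hypothetical illustration of the I/O blender effect (not Virsto's
# actual implementation). Each request is a (vm, offset) tuple.
from itertools import chain, zip_longest

def blend(streams):
    """Interleave per-VM sequential request streams into the single
    commingled stream a shared datastore sees (the 'I/O blender')."""
    interleaved = chain.from_iterable(zip_longest(*streams.values()))
    return [req for req in interleaved if req is not None]

def unblend(blended):
    """Separate the commingled stream back into per-VM streams, each
    of which is sequential again."""
    per_vm = {}
    for vm, offset in blended:
        per_vm.setdefault(vm, []).append((vm, offset))
    return per_vm

# Two VMs each writing sequential offsets 0, 1, 2.
streams = {vm: [(vm, off) for off in range(3)] for vm in ("vm1", "vm2")}
mixed = blend(streams)           # adjacent requests now alternate VMs,
                                 # so offsets jump around on the datastore
assert unblend(mixed) == streams # sequentiality restored per VM
```

The real product works at the block layer rather than on Python lists (for example by logging writes and later destaging them contiguously per VM), but the before-and-after access patterns are the same ones this sketch demonstrates.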
Taken together, these innovations likely mean that we are just at the front end of a wave of innovation driven by virtualization and the cloud. If you look at what runs in most production VMware environments today, the only really new things in the environment are VMware vSphere and possibly some new monitoring, security, and backup tools. What this means is that we have barely started to reinvent everything that needs to be reinvented in order to take virtualization, IT as a Service, and public clouds to their logical and most beneficial conclusions.