The Virtualization Practice

Data Center Virtualization

Data Center Virtualization covers virtualizing servers, networks, and storage, delivering server consolidation, CAPEX savings, IT agility, and improved management. Major areas of focus include the tradeoffs between the various virtualization platforms (VMware vSphere, Microsoft Hyper-V, and Red Hat KVM), the evolution of hypervisors into data center management platforms, VMware’s Software Defined Data Center strategy, and how the SDDC is spurring innovation in storage, networking, and server hardware. Covered vendors include VMware, Microsoft, Red Hat, CloudPhysics, Hotlink, Tintri, and VMTurbo.

While we may well be on the road towards VMware becoming the layer of software that talks to the hardware in the data center – removing Microsoft from that role – this is not the end of Windows. If Windows were just an OS, it would be severely threatened by VMware’s insertion into the data center stack. But Windows is not just an OS. Windows is also a market-leading applications platform, with .NET having a far greater market share and base of developers than vFabric. Windows is also in the process of becoming a PaaS cloud – one that will live at Microsoft, at thousands of hosting providers, and probably at every enterprise that is a significant Microsoft customer. This incarnation of Windows is at the beginning of its life, not the end.

The next true IT industry revolutionary product will be software, virtualization, and cloud technology that does not require underlying physical hardware resources (servers, network, and disk storage). While we wait for that revolutionary technology to appear outside of marketing or computer-generated animations, there remains the need to protect cloud and virtual environments and their underlying disk storage. Underlying disk storage includes, among others, solid state devices (SSD) as well as hard disk drives (HDD) and removable hard disk drives (RHDD), packaged in different types of solutions accessed via shared SAS, iSCSI, FC, FCoE, or NAS.

Distributed Virtual Switch Failures: Failing-Safe

In my virtual environment recently, I experienced two major failures. The first was with the VMware vNetwork Distributed Switch and the second was related to the use of VMware vShield. Both led to catastrophic failures that could easily have been avoided if these two subsystems failed-safe instead of failing-closed. VMware vSphere is all about availability, but when critical systems like these fail, not even VMware HA can assist in recovery. You have to fix the problems yourself, and usually by hand. Now that the problem has been solved and should not recur, I began to wonder how I missed this, which led me to the total lack of information on how these subsystems actually work. So without further ado, here is how they work and what I consider to be the definition of fail-safe.
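To make the distinction concrete, here is a minimal, hypothetical sketch (not VMware code) of how a network control-plane component might choose its failure mode: a fail-closed design blocks all traffic when its policy source is unreachable, while a fail-safe design falls back to a last-known-good policy so the workloads stay reachable.

```python
# Hypothetical illustration of fail-closed vs. fail-safe behavior in a
# network policy component. This is NOT VMware vDS/vShield code; the names
# and policies below are assumptions made purely for illustration.
from enum import Enum


class FailureMode(Enum):
    FAIL_CLOSED = "fail-closed"  # block everything when the policy source is down
    FAIL_SAFE = "fail-safe"      # fall back to a last-known-good policy


def effective_policy(policy_service_up, cached_policy, mode):
    """Decide what forwarding policy to apply when the control plane is unreachable."""
    if policy_service_up:
        return "live-policy"                   # normal operation
    if mode is FailureMode.FAIL_SAFE and cached_policy is not None:
        return cached_policy                   # degrade gracefully; VMs stay on the network
    return "drop-all"                          # fail-closed: outage until fixed by hand


# The failures described above behaved like the first call below; a
# fail-safe design would behave like the second.
print(effective_policy(False, "last-known-good", FailureMode.FAIL_CLOSED))  # drop-all
print(effective_policy(False, "last-known-good", FailureMode.FAIL_SAFE))    # last-known-good
```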

Todd Nielsen has already succeeded twice at what he is now being asked to do at VMware – once at Microsoft and once at BEA. This time, what hangs in the balance is VMware’s ultimate destiny. Will VMware be the device driver for the dynamic data center (vSphere), or will VMware be that and the next-generation application platform for IT as a Service and Public Cloud based applications?

Given the VNXe’s expandability to include Fibre Channel cards in the future, this storage looks very attractive to those SMBs who have previously made the investment to move towards fibre. Making use of your existing infrastructure, whether fabric or Ethernet, would lower the cost of adopting the low-end EMC product. The VNXe’s expandability is one of the things that make it an attractive tool for other uses. What are those other uses with respect to security, DR, BC, and disaster avoidance?

Monitoring the performance of the infrastructure, applications, and services in IT as a Service environments will require that monitoring solutions become multi-tenant, that they can be instantiated by ITaaS management tools without any further configuration, and that they automatically “find” their back-end management systems through whatever firewalls may be in place. These requirements will probably be the straw that breaks the camel’s back for the heavyweight, complex legacy tools that were in place prior to the onset of virtualization, the public cloud, and now IT as a Service. ITaaS is the tipping point that should cause most enterprises to ignore every monitoring tool they have bought in the past and to start over with a clean sheet of paper.
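As a rough sketch of what those requirements could mean in practice, the hypothetical agent below registers itself with its management back end using a single outbound HTTPS call (outbound connections are typically allowed where inbound ones are not), carrying a tenant identifier so the back end can remain multi-tenant. The endpoint URL, tenant key, and payload fields are all illustrative assumptions, not any vendor’s actual API.

```python
# Hypothetical monitoring-agent bootstrap: tenant-aware, zero local
# configuration, outbound-only registration (firewall friendly).
# The URL, tenant ID, and payload fields below are assumptions for illustration.
import json
import socket
import urllib.request

BACKEND_URL = "https://monitoring.example.com/api/v1/agents/register"  # assumed endpoint
TENANT_ID = "tenant-42"                                                # assumed tenant key


def register_agent():
    """Announce this host to the back end over a single outbound HTTPS call."""
    payload = json.dumps({
        "tenant": TENANT_ID,
        "hostname": socket.gethostname(),
        "capabilities": ["cpu", "memory", "disk", "network"],
    }).encode("utf-8")

    request = urllib.request.Request(
        BACKEND_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Outbound-only: nothing has to be opened inbound through the firewall,
    # and the agent needs no pre-existing local configuration to find home.
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.load(response)


if __name__ == "__main__":
    print(register_agent())
```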

The acquisition of Akorri by NetApp demonstrates the importance of Infrastructure Performance Management solutions as virtualization progresses into the realm of business-critical applications, and as public clouds hope to do the same. However, rather than signaling “game over,” this acquisition really raises both the visibility and the importance of the problems that Akorri solved and of the true end-to-end problems that remain.

While vSphere provides significant benefits in terms of cost savings and business agility, those benefits are tied to and constrained by vSphere’s backward compatibility with existing legacy enterprise systems. This backward compatibility makes it impossible for vSphere to provide infinite horizontal scalability. Moving to the same architecture as the most highly scaled-out public cloud vendors would provide a more radical set of benefits, but at the cost of breaking backward compatibility for many applications.