The Virtualization Practice

Data Center Virtualization

Data Center Virtualization covers virtualizing servers, networks, and storage, delivering server consolidation, CAPEX savings, IT agility, and improved management. Major areas of focus include the tradeoffs between the various virtualization platforms (VMware vSphere, Microsoft Hyper-V, and Red Hat KVM), the evolution of hypervisors into data center management platforms, VMware’s Software Defined Data Center strategy, and how the SDDC is spurring innovation in storage, networking, and server hardware. Covered vendors include VMware, Microsoft, Red Hat, CloudPhysics, Hotlink, Tintri, and VMTurbo.

The Red Hat Enterprise Linux 6 beta is out, and there is no Xen in it, only KVM. It can operate as a guest in an existing Xen environment, but it cannot act as a Xen host. A few minority interests still cling to Xen, but ultimately it makes no sense for most Linux distributions to ship with it. Novell will stick with Xen for a while, as will Oracle, which is no friend of Red Hat; but when the hypervisor wars become old news, they will quietly move to KVM. It’s easier. In the future, we fully expect to be talking about Xen/Linux in the past tense.

Virtualizing Tier 1 business-critical applications is a challenge for many enterprises, due to resistance to the concept on the part of application owners and their constituents. Service Assurance for these applications is required in order for their owners and their users to go along with virtualization. Service Assurance requires the integration of Application Performance Management, Configuration Management, and a new category of solutions, such as VMTurbo, that dynamically allocate resources based upon their highest and best use, as sketched below.
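VMTurbo’s actual decision engine is proprietary, so purely to make the “highest and best use” idea concrete, here is a toy Python sketch that hands spare CPU capacity to whichever workloads return the most business value per unit of resource. Every workload name and number in it is invented.

    # Toy illustration only: greedily grant spare capacity to the workloads
    # that return the most business value per unit of resource, i.e. its
    # "highest and best use". Real products use far richer models than this.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        value_per_ghz: float  # invented metric: business value per GHz granted
        demand_ghz: float     # extra CPU the workload could usefully consume

    def allocate(spare_ghz: float, workloads: list[Workload]) -> dict[str, float]:
        grants: dict[str, float] = {}
        # Serve the highest value-per-unit consumers first.
        for w in sorted(workloads, key=lambda w: w.value_per_ghz, reverse=True):
            grant = min(w.demand_ghz, spare_ghz)
            if grant > 0:
                grants[w.name] = grant
                spare_ghz -= grant
        return grants

    if __name__ == "__main__":
        pool = [Workload("order-entry", 9.0, 4.0),
                Workload("reporting", 3.0, 6.0),
                Workload("dev-test", 1.0, 8.0)]
        print(allocate(8.0, pool))  # order-entry gets 4 GHz, reporting the other 4

The point of the sketch is only the ordering: capacity flows to the workload where it does the most good, rather than to whoever asks loudest.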

Just as with a telco, the ‘last mile’ of virtualization is often the most difficult; I would say even more difficult than the initial phase of virtualization. What do I mean by the ‘last mile’?

The 5-10% of systems that you have LEFT to virtualize.

These systems are your most heavily used, considered too critical to virtualize, the most complex to migrate, dependent upon specific hardware, or traveling around the world (such as laptops and other handheld devices). These issues are highly political as well.

VMware will offer enterprises a very inexpensive Java runtime platform that is tuned to virtualization, that facilitates very rapid deployment of Java applications into a virtualized environment, and that offers application portability between VMware TC-Server runtime environments and Java PaaS clouds like VMforce. This may well prove to be an irresistible combination for enterprises that are used to paying millions of dollars to Oracle and IBM for an equivalent platform that is harder to manage.

With virtualization technology, we system administrators have many tools available that make day-to-day operation and administration of our environments easier and that speed up many administrative tasks. Take, for example, the ability to add resources to a virtual machine: you can add processors, add memory, or increase disk space within a matter of minutes, with very little downtime. On a physical host, you would need to purchase the hardware, wait for it to arrive, and then schedule downtime to add the resources to the machine. This speed and power can be both a blessing and a curse: once application owners understand how easy it is to add resources to a virtual machine, the requests for more arrive any time they perceive the slightest need.
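To make the contrast concrete, here is a minimal sketch, using the open-source pyVmomi SDK against vCenter, of hot-adding vCPUs and memory to a running VM. The vCenter host, credentials, and VM name are placeholders, and the VM must have CPU and memory hot-add enabled for the change to apply without a power cycle.

    # A minimal sketch, not production code: grow a running VM in minutes.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",        # placeholder vCenter
                      user="administrator@vsphere.local",
                      pwd="secret",
                      disableSslCertValidation=True)     # recent pyVmomi; lab use only
    try:
        content = si.RetrieveContent()
        vm = content.searchIndex.FindByDnsName(None, "app01.example.com",
                                               vmSearch=True)
        # Reconfigure in place: no purchase order, no waiting on hardware.
        spec = vim.vm.ConfigSpec(numCPUs=4, memoryMB=16384)
        task = vm.ReconfigVM_Task(spec=spec)
        # Wait on 'task' as appropriate for your environment.
    finally:
        Disconnect(si)

Compare that with racking new hardware: the whole change is one ConfigSpec, which is exactly why the requests start arriving as soon as application owners see how painless it is.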

Many of us have to demo virtualization technologies to our customers and colleagues, run classes, develop code for virtualization, or just play around. For many of these cases, a cloud-based virtual environment may be fine. However, what do you do when the network connection to the cloud is flaky at best? You have to rely upon your local system to do the job for you. Some solve this problem by having a ready slide deck, others by using a fairly high-end laptop, and still others by tethering their laptops to their phones or cellular cards. Which method is best?

I have always found that local access to my laptop is the better way to run demos, classes, and presentations for my customers, colleagues, and friends. As I write software for, and books about, virtualized environments, I almost always need access to various virtualization systems. Where I can, I use network connections, as the office lab is in most cases much faster than anything local; but when I have to run things locally because of telecommunication issues, a high-end laptop is a requirement. But which one?

“What do you wish to monitor?” is often my response when someone states that they need to monitor the virtual environment. Monitoring, however, becomes much more of an issue when you enter the cloud. Some of my friends have businesses that use the cloud, specifically private IaaS clouds, and what the cloud provider should monitor versus what the tenant should monitor has been a struggle and a debate when dealing with them.

Storage Networking – Time to TAP the SAN

Virtual Instruments’ new SANInsight TAP for the Fibre Channel SAN allows organizations to collect critical performance data on a real-time, deterministic, and comprehensive basis. It lets organizations TAP all of their SAN ports in advance of SAN or storage array performance problems, so that the foundation is in place for rapid problem diagnosis and resolution.