The Virtualization Practice

Data Center Virtualization

Data Center Virtualization covers virtualizing servers, networks, and storage to deliver server consolidation, CAPEX savings, IT agility, and improved management. Major areas of focus include the tradeoffs between the various virtualization platforms (VMware vSphere, Microsoft Hyper-V, and Red Hat KVM), the evolution of hypervisors into data center management platforms, VMware’s Software Defined Data Center strategy, and how the SDDC is spurring innovation in storage, networking, and server hardware. Covered vendors include VMware, Microsoft, Red Hat, CloudPhysics, Hotlink, Tintri, and VMTurbo.

Virtualize Java without an Operating System

When we put a .NET application on Windows on Hyper-V (or a Java application on Linux on ESXi), we are actually virtualizing twice. Can we virtualize only once, by putting the CLR or the JVM directly on the VM host? In doing so, of course, we remove the operating system. Oracle is taking the lead in this area with the JRockit VE JVM. There is no VMware support; the only hypervisor it supports is Xen, or more precisely Oracle VM, and it comes bundled only with an application server, namely the Oracle WebLogic Suite Virtualization Option. The entire stack inside the virtual machine runs in “User Mode”: in other words, the JVM and the drivers are all in the same memory address space, and you don’t need to switch contexts into kernel mode in order to perform I/O or network access. Does VMware have a strategic initiative (or even a skunkworks) to engineer a similar bundle for its SpringSource runtimes? Or are they just concentrating on scaling out, as per the Google announcement?
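To make the context-switch point concrete, here is a toy benchmark; it is my own illustration, not anything published by Oracle or VMware. It compares appending bytes to an in-process buffer (pure user mode) with issuing one write() system call per byte, which is precisely the kind of user-to-kernel transition a single-address-space stack like JRockit VE eliminates:

```python
# Toy measurement of user-to-kernel transition cost; my own illustration.
import os
import time

N = 200_000  # number of one-byte writes

# Pure user mode: append to an in-process buffer, no kernel involved.
buf = bytearray()
t0 = time.perf_counter()
for _ in range(N):
    buf += b"x"
user_only = time.perf_counter() - t0

# One os.write() per byte: every call is a syscall, i.e. a context
# switch into kernel mode, even though /dev/null discards the data.
fd = os.open(os.devnull, os.O_WRONLY)
t0 = time.perf_counter()
for _ in range(N):
    os.write(fd, b"x")
syscalls = time.perf_counter() - t0
os.close(fd)

print(f"in-process buffer: {user_only:.3f}s   syscall per byte: {syscalls:.3f}s")
```

On any reasonable machine the syscall loop is dramatically slower per operation, and inside a conventional guest OS some of those kernel transitions can carry additional virtualization overhead on top.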

CA Technologies (CAT) has announced three new virtualization management and performance offerings. This is the first example of a “big 4” enterprise systems management vendor getting serious about providing virtualization- and cloud-focused solutions. This will be very reassuring for CAT customers and may well accelerate stalled virtualization projects.

The Red Hat 6 Beta is out, and there is no Xen in it, only KVM. It can operate as a guest in an existing Xen environment, but it cannot act as a Xen host. A few minority interests still cling to Xen, but ultimately it makes no sense for most Linux distributions to ship with Xen. Novell will stick with Xen for a while, as will Oracle, because they are no friends of Red Hat, but once the hypervisor wars become old news, they will quietly move to KVM. It’s easier. In the future, we fully expect to be talking about Xen/Linux in the past tense.

Virtualizing Tier 1 business-critical applications is a challenge for many enterprises, due to resistance to the concept on the part of application owners and their constituents. Service Assurance for these applications is required in order for their owners and their users to go along with virtualization. Service Assurance requires the integration of Applications Performance Management, Configuration Management, and a new category of solutions like VMTurbo that dynamically allocate resources based upon their highest and best use.
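To illustrate what “highest and best use” means in practice, here is a deliberately simplified allocator; this is my own toy sketch, not VMTurbo’s actual algorithm, and the workload names, priorities, and share counts are invented:

```python
# Toy "highest and best use" allocator; my own illustration, not
# VMTurbo's algorithm. Workloads, priorities, and share counts are invented.
workloads = [
    # (name, business_priority, demanded_cpu_shares)
    ("tier1-oltp",   5, 4000),
    ("batch-report", 2, 3000),
    ("dev-sandbox",  1, 2000),
]

pool = 6000  # total CPU shares available in the cluster

# Satisfy demand in descending order of business priority, so the
# highest-value workloads are fully resourced before anything else.
for name, priority, demand in sorted(workloads, key=lambda w: w[1], reverse=True):
    grant = min(demand, pool)
    pool -= grant
    print(f"{name}: granted {grant} of {demand} CPU shares")
```

A real solution would, of course, feed this from live APM and configuration data and rebalance continuously, which is exactly the integration argued for above.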

Just as for a telco, the ‘last mile’ of virtualization is often the most difficult; I would say it is even more difficult than the initial phase of virtualization. What do I mean by the ‘last mile’?

The 5-10% of systems that you have LEFT to virtualize.

These systems are your most heavily used, too X to virtualize, the most complex to migrate, dependent upon specific hardware, or traveling around the world (such as laptops and other handheld devices). These issues are highly political as well.

VMware will offer enterprises a very inexpensive Java runtime platform that is tuned to virtualization, that facilitates very rapid deployment of Java applications into a virtualized environment, and that offers application portability between VMware TC-Server runtime environments and Java PaaS clouds like VMforce. This may well prove to be an irresistible combination for enterprises that are used to paying millions of dollars to Oracle and IBM for an equivalent platform that is harder to manage.

With virtualization technology, we system administrators have a lot of tools available that make the day-to-day operation and administration of our environments easier and that speed up many administrative tasks. Take, for example, the ability to add resources to a virtual machine: you can add processors, memory, and/or disk space within a matter of minutes and with very little downtime. On a physical host, you would need to purchase the hardware first, wait for it to arrive, and then schedule the downtime to add the resources to the machine. This speed and power can be both a blessing and a curse: once application owners understand how easy it is to add resources to virtual machines, the requests for additional resources arrive any time the owners think there is the slightest need for them.
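For a VMware shop, that “minutes instead of purchase orders” workflow looks roughly like the sketch below. This assumes pyVmomi (VMware’s Python SDK for the vSphere API); the vCenter address, credentials, and VM name are placeholders, and the guest must have CPU and memory hot-add enabled for the change to apply without a power cycle:

```python
# Hot-adding CPU and memory with pyVmomi (VMware's Python SDK).
# The vCenter address, credentials, and VM name below are placeholders,
# and error handling is omitted for brevity.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)

# Find the VM by name anywhere under the inventory root.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app01")

# Reconfigure to 4 vCPUs and 8 GB of RAM. On a powered-on VM this only
# succeeds if CPU/memory hot-add is enabled in the VM's settings.
spec = vim.vm.ConfigSpec(numCPUs=4, memoryMB=8192)
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```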

Many of us have to demo virtualization technologies to our customers and colleagues, run classes, develop code for virtualization, or just play around. For many of these cases, a cloud-based virtual environment may be fine. However, what do you do when the network connection to the cloud is flaky at best? You have to rely upon your local system to do the job for you. Some solve this problem by having a ready slide deck, others by using a fairly high-end laptop, and still others by tethering their laptops to their phones or cellular data cards. Which method is best?

I have always found that local access to my laptop is the better way to run demos, classes, and presentations for my customers, colleagues, and friends. As I write software for, and books about, virtualized environments, I almost always need access to various virtualization systems. Where I can, I use network connections, as going back to the office lab is in most cases much faster than running locally; but when I have to run things locally due to telecommunication issues, a high-end laptop is a requirement. But which one?