We all know that Windows 2008 R2 Hyper-V is around the corner, and we are well aware of the majority of its new features, but do they make it ready for the premier division? Ready to run head to head with the big boy, VMware vSphere?

R2 is almost out of the door and it is a much more rounded product; new features like Linux support and “Live Migration” have most definitely moved it up the food chain. However, is it ready to take on VMware? Well, the answer is yes and no.

The most anticipated new feature is, of course, Live Migration: moving running VMs from one host to another without interrupting the services running inside them. To accomplish this, Microsoft has created a new shared storage feature, also introduced in Hyper-V 2.0, called Cluster Shared Volumes (CSV). Live Migration works better with System Center Virtual Machine Manager 2008, which provides additional Live Migration management and orchestration scenarios such as Live Migration via policy; these extensions are effectively Microsoft’s answer to HA and DRS.

However, this is not the be-all and end-all. Microsoft is also introducing:

  • Support for 32 logical processors on the host computer; this is twice the number of logical processors initially supported by Windows Server 2008 Hyper-V.
  • The ability to add and remove VHD and pass-through disks on a running VM without requiring a reboot; that said, this is guest-OS dependent and limited to the SCSI controller, not the IDE one.
  • Support for Second Level Address Translation (SLAT), which introduces support for the newer processor virtualization features, Intel’s EPT and AMD’s NPT (RVI); these features decrease hypervisor overhead, thereby making more memory available to the child partitions holding guests.
  • The ability to boot a guest from VHD.
  • And possibly the ability to hot-add memory, although I doubt that this will make the cut.

So where on a comparison chart does this leave Hyper-V?

[Chart: vSphere vs. Hyper-V feature comparison]

So what we are talking about, in reality, is ESX of about 4 to 5 years ago (roughly).

So yes, because Hyper-V is now fit to pluck the low-hanging fruit that fed the early adoption of virtualization in the enterprise (driven by the economics of server consolidation), and also to feed uptake in smaller companies that are Microsoft-focused, thanks to the familiar look and feel of a Windows-based technology. This will significantly increase market share for Microsoft’s product.

Now, you will have noticed that I said roughly. This is because, as the chart shows, Hyper-V still does not support memory over-commit or page sharing.

So why is memory over-commit so important? Microsoft obviously thinks it is not, so let me paint a scenario for you.

Microsoft states that memory over-commit is not required if you correctly size your guests’ memory requirements. OK, so that is the starting point.

I have 3 Hyper-V R2 hosts, each with 32GB of physical memory, and each host runs 15 guests with 2GB of memory (I am not taking any account here of hypervisor overhead). That is a total memory commitment of 30GB per host; in Microsoft parlance, a well-sized host.

Now, today is Patch Tuesday, and a critical Hyper-V patch has been released that needs to be applied immediately. Great, I’ve got “Live Migration”: I will move the guests off each host in succession onto my remaining two hosts. Now that’s a plan. So I initialize the migration and it fails. Why? Because I do not have enough free memory on my remaining 2 hosts to support any more running guests. This means that I have to purchase more hardware to use “Live Migration” with the same number of guest OSes. Let’s do the math:

45 guests with 2GB of memory each = 90GB, or 3 fully utilized host servers.

To successfully clear a single host, I will need 15GB of free memory per remaining host in a 3-host scenario, or about 8GB per remaining host in a 4-host scenario.

Therefore 4 hosts equals 4 x 32GB, or 128GB of total memory across the cluster. That equates to approximately 12 guests per host, or 24GB of real memory consumed, which leaves only 8GB of memory headroom per host; once hypervisor overhead is counted, that is still not enough. In effect, you will require 5 hosts at 32GB to be able to evacuate a single host for maintenance, resulting in a significant increase in hardware, licenses, power consumption, etc. Not good for the business case.
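The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is a minimal sketch under the scenario's own assumptions (2GB guests, 32GB hosts, no over-commit); the `reserve_gb` figure standing in for hypervisor/parent-partition overhead is purely illustrative, not an official Hyper-V number:

```python
def min_hosts_to_evacuate_one(total_guests, guest_gb, host_gb, reserve_gb=0.0):
    """Smallest host count such that any single host can be drained:
    the surviving hosts must hold every guest without memory over-commit."""
    guest_total = total_guests * guest_gb
    hosts = 2  # need at least one surviving host
    while (hosts - 1) * (host_gb - reserve_gb) < guest_total:
        hosts += 1
    return hosts

# The article's scenario: 45 guests at 2 GB each on 32 GB hosts.
print(min_hosts_to_evacuate_one(45, 2, 32))                  # 4 on paper
print(min_hosts_to_evacuate_one(45, 2, 32, reserve_gb=2.5))  # 5 once overhead bites
```

On paper, 4 hosts squeak by (3 survivors x 32GB = 96GB for 90GB of guests), but reserve even a couple of gigabytes per host for overhead and the answer jumps to 5, which is the point of the business-case complaint.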

This is the reason that memory over-commit is important. Perhaps Microsoft could investigate an alliance with RTO Software, whose TScale product was able to implement page sharing on Citrix servers, allowing a much greater density of concurrent users on those servers. The problems seem similar enough to at least warrant an investigation on Microsoft’s part.

So Hyper-V R2 is a vast improvement on the firstborn, but still no cigar in the enterprise.

Tom Howarth

Tom Howarth is an IT veteran with over 20 years of experience and the owner of PlanetVM.Net Ltd. Tom is a moderator of the VMware Communities forum. He is a contributing author on VMware vSphere(TM) and Virtual Infrastructure Security: Securing ESX and the Virtual Environment, and the forthcoming vSphere: A Quick Guide. He regularly undertakes large virtualization projects for enterprises in the U.K. and elsewhere in EMEA. Tom was elected vExpert for 2009 and each subsequent year thereafter.


3 comments for “Hyper-V: Is R2 ready for primetime?”

  1. September 14, 2009 at 11:11 AM

    Does that mean that if MS introduced “Memory over-commit” then it’d be on par with vSphere?

    I know XenServer comes in for a fair bit of flak, especially for VDI deployments, because it lacks this feature; I’ve heard strong rumours that this won’t be such an issue in the future.

    But there’s an argument against the feature, in that it allows for careless design. To take your example: the environment is running 45 guests. If the design calls for the facility to enable live migration, you’d look to build in n+1 hosts; otherwise, in the event of a failure, you’ve got no spare capacity.

    The argument could be that allowing over-commit means you don’t need as much hardware. Yet if each guest has been sized for 2GB and you’re running with less than that, what’s the impact on performance if you do indeed over-commit the memory? Would you effectively be running the servers so poorly that they’d be unusable?

    Having memory over-commit can be a useful feature, especially in development and testing. But, to quote a recent article, virtualization is a lie (http://blogs.sepago.de/helge/2009/09/06/virtualization-nothing-but-lies/), and as such memory over-commit is a downright, bare-faced cheek of a lie.

    So if primetime is when you really need to be getting it right, thinking about a design and the impact of each of the components, is memory over-commit the feature that makes the difference?

  2. Guillermo
    September 21, 2009 at 11:06 PM

    I concur that memory over-commit is a very important part of the equation, but let’s not forget the file system. NTFS is not a cluster file system, period. The ability to place several VMs in the same volume and access them from different physical hosts, as you can with VMware, is huge. It saves a lot of time and money. I have been setting up POCs for R2 for a month now and I still find it cumbersome to set up, mainly because of NTFS and MSCS.

    MS needs to get a cluster file system pronto! I believe they have two small companies in their sights. Hopefully MS will give us a nice surprise before year’s end, and maybe an R3 for Hyper-V a couple of months after that.

    The current solution in R2 is complicated and weak. I have to admit it is getting better, but it’s not there yet.
