I have been a big fan of VMware’s Distributed Resource Scheduler (DRS). DRS is a feature that dynamically allocates and balances computing resources across the hosts in a cluster. In every environment I have worked with so far, DRS has been a fantastic tool for achieving and maintaining that balance across all the hosts in a cluster. Recently, though, I came across a limitation of DRS that is worth mentioning.

I have been working with virtualization since long before the introduction of multi-core processors. In the beginning, when multiprocessor virtual machines were in their infancy, we were very careful and selective about how many of them we ran in our environment.

Fast forward to today and we can deploy hosts with six cores per processor, which gives us a lot of processing power to take advantage of. With that great power also comes great responsibility. In one specific instance, the infrastructure I was working with had aging hosts slated for a server refresh, but the client had big plans for high-powered virtual machines in the meantime. When we originally deployed the cluster we were deploying single-processor virtual machines and had plenty of horsepower available. The client then switched gears and we started deploying two-processor virtual machines, and both the hosts and the virtual machines continued to perform well.

The cluster in question was an eight-node cluster, each node having four processors with four cores each and 128 GB of RAM. With the continued success and stability of the cluster, the client moved on to adding four-processor virtual machines. Although the CPU and memory of the hosts appeared to be in great shape, I started to get calls about the performance of the newer four-processor virtual machines we had deployed.

The limitation of DRS I mentioned at the start of the post had now come into view. DRS was doing its job, and the hosts were balanced across the cluster in both CPU and memory, but DRS had loaded one host with too many of the four-processor virtual machines. Examining the results from esxtop, I could see that the %RDY of the four-processor virtual machines was well over 100 and in some cases topped 200 (esxtop reports %RDY at the VM level summed across its vCPUs, so a four-vCPU VM can show up to 400).
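For anyone who wants to reproduce this check, esxtop’s batch mode (`esxtop -b -d 5 -n 12 > stats.csv`) writes perfmon-style CSV that can be post-processed. Below is a minimal sketch that flags the worst offenders; it assumes column headers of the form `\\host\Group Cpu(id:vmname)\% Ready`, which can vary by version, so verify the parsing against your own export rather than taking it as gospel.

```python
import csv
import io


def high_ready_vms(csv_text, threshold=10.0):
    """Return {vm_name: worst %RDY sample} for VMs exceeding threshold.

    Assumes esxtop batch-mode headers shaped like:
        \\host\Group Cpu(1234:vmname)\% Ready
    """
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)

    # Map column index -> VM name for every per-VM "% Ready" column.
    cols = {}
    for i, name in enumerate(header):
        if "Group Cpu(" in name and name.endswith("% Ready"):
            vm = name.split("Group Cpu(")[1].split(")")[0].split(":", 1)[-1]
            cols[i] = vm

    # Track the worst (highest) %RDY sample seen per VM.
    worst = {}
    for row in reader:
        for i, vm in cols.items():
            val = float(row[i])
            if val > worst.get(vm, 0.0):
                worst[vm] = val

    return {vm: v for vm, v in worst.items() if v > threshold}
```

The same approach works on output captured with `resxtop` against a remote host.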

Based on the results we were seeing, we asked ourselves: is the problem with the virtual machine, or are we oversubscribed? This is where things got really interesting, and we could not believe the results of our next test. We changed the DRS automation level to manual so we could control the placement of the virtual machines on the hosts in the cluster and monitor the %RDY as we did so. Our first test was to spread the four-processor virtual machines as evenly as we could across all the hosts in the cluster. We had more dual-processor virtual machines than single-processor ones, and we tried different combinations of spreading the load, but we were still not getting the results we expected.
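The even-spread pass we did by hand amounts to a greedy balance on vCPU count: place the largest VMs first, each onto the currently least-loaded host. This is a toy illustration of our manual placement, not how DRS itself decides, and the VM names below are made up.

```python
import heapq


def spread_by_vcpu(vms, n_hosts):
    """Greedily place (name, vcpus) pairs so per-host vCPU totals stay balanced.

    Largest VMs go first, each onto the host with the fewest vCPUs so far.
    """
    # Min-heap of (total vcpus, host index, placed VM names).
    hosts = [(0, i, []) for i in range(n_hosts)]
    heapq.heapify(hosts)
    for name, vcpus in sorted(vms, key=lambda v: -v[1]):
        total, idx, placed = heapq.heappop(hosts)
        placed.append(name)
        heapq.heappush(hosts, (total + vcpus, idx, placed))
    return {idx: placed for _, idx, placed in hosts}
```

With two hosts and a mix of quad, dual, and single vCPU VMs, this yields one quad, one dual, and one single per host, which is roughly the layout we tried, and which still did not fix the %RDY numbers.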

In an attempt to try something different, we placed all the four-processor virtual machines on a single host and cleared the rest of the virtual machines off, so it held nothing but four-processor virtual machines. Our thinking was that the CPU scheduler needs four physical cores available at once for each four-processor virtual machine, and by grouping those machines together the scheduler would have an easier time finding slots for them. This paid off: the %RDY times were much better and the performance of the four-processor machines increased dramatically.
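The intuition can be shown with a toy strict co-scheduling model. The real ESX scheduler uses relaxed co-scheduling, so this is deliberately simplified: in each time slice, a VM gets to run only if all of its vCPUs can land on free cores at once.

```python
def ready_fraction(vm_vcpus, cores, slices=10000):
    """Toy strict co-scheduling model.

    Each time slice, pack VMs onto cores in rotating order (so no VM is
    permanently starved); a VM runs only if ALL its vCPUs fit at once.
    Returns the per-VM fraction of slices spent ready (wanting CPU but
    unable to run).
    """
    n = len(vm_vcpus)
    ready = [0] * n
    for t in range(slices):
        free = cores
        for k in range(n):
            i = (t + k) % n  # rotate the starting VM each slice
            if vm_vcpus[i] <= free:
                free -= vm_vcpus[i]
            else:
                ready[i] += 1
    return [r / slices for r in ready]
```

On a 16-core host holding only four 4-vCPU VMs, every VM fits every slice and modeled ready time is zero; mix 4-vCPU VMs with a crowd of smaller VMs on the same cores and some VM is left waiting in every slice, which mirrors what we saw once we dedicated a host to the four-processor machines.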

DRS works well, but it has its limitations, since it considers only overall CPU and memory utilization when deciding where to place virtual machines across the hosts in a cluster. In a previous post, A Look at VMTurbo Monitoring, I started to examine what VMTurbo brings to the table. As far as I know, it is the only third-party product that truly expands on VMware’s DRS by taking other factors, like %RDY, into account when deciding where to place virtual machines, especially when working with a mix of virtual machines of all shapes and sizes.

As we move forward and technology continues to improve at an incredible pace, a pace that many companies’ budgets will never be able to keep up with, we must continue to evaluate the true capabilities of the environments we support, even when that means slowing down or scaling back the projects we are asked to deploy. When that is not an option, we have to be willing to examine and push for the third-party tools we need to help maintain the balance and performance of the infrastructure.

Steve Beaver (158 Posts)

Stephen Beaver is the co-author of VMware ESX Essentials in the Virtual Data Center and Scripting VMware Power Tools: Automating Virtual Infrastructure Administration as well as being contributing author of Mastering VMware vSphere 4 and How to Cheat at Configuring VMware ESX Server. Stephen is an IT Veteran with over 15 years experience in the industry. Stephen is a moderator on the VMware Communities Forum and was elected vExpert for 2009 and 2010. Stephen can also be seen regularly presenting on different topics at national and international virtualization conferences.


10 comments for “Exploring a Limitation of VMware DRS”

  1. February 18, 2011 at 7:02 PM

    Hi Stephen,

    Thanks for the thorough treatment of this important issue. Indeed, DRS is a key piece of the VMware strategy, yet it has some very critical deficiencies which may significantly reduce virtualization ROI and impact application performance.

    Components like DRS are badly needed. But they cannot be confined to their small cluster quarters, and one cannot look at only one or two resources at a time; an innocent vMotion to avoid high memory utilization can cause CPU ready-queue congestion or high latency on the network card in another host. Once you start looking at more resources, larger clusters, and heterogeneous components (as opposed to standard uniform building blocks), which is today’s reality, traditional resource scheduling algorithms need an enormous amount of time and resources to come up with accurate suggestions, exactly at the moment when one needs to react in real time to dynamic demand fluctuations in a large heterogeneous shared infrastructure. Across stacks, clusters, data centers, and clouds.

  2. sbeaver
    February 21, 2011 at 12:01 PM

    This continues to show the limitations in place, and something that needs to be considered and improved moving forward.

  3. March 12, 2011 at 5:01 AM

    Hi,

    Interesting.
    If I understand correctly, the problem was not that “DRS had loaded a host with too many of the 4 vCPU VMs”, but that DRS loaded this host with these 4 vCPU VMs and then added some smaller VMs on top?

    I agree that DRS should take Ready Time into account.

    Thanks
    e1, VCAP-DCD

  4. sbeaver
    March 12, 2011 at 9:08 AM

    e1,

    Yes, that would be correct, with the point being that other factors really have to be taken into account for this to work better. We will see what VMware can add, or which third-party vendors run with this.

  5. Brian Finnegan
    August 8, 2011 at 7:12 AM

    Hi Steve,

    I’ve observed the same with ESX 3.5 hosts.

    What version(s) of ESX(i) have you observed this on?

    Brian

  6. sbeaver
    August 8, 2011 at 9:04 AM

    I have observed this with ESX 3 and ESX 4. The ESX 5 release is the first version to address storage I/O in DRS.

    Steve

  7. Brian Finnegan
    August 8, 2011 at 2:05 PM

    DRS certainly appears to be more concerned with ensuring host CPUs are not idle than ensuring VM ready times are low.

    It’ll be interesting to see if Storage DRS operates in a similar fashion e.g. ensuring host I/O loads are not idle at the expense of VM disk command latency. Perhaps VMs with multiple VMDKs will suffer more than those with a single VMDK.

    B
