Maintaining Pool Performance


Pooling and sharing resources is a feature of many of the data center technologies we rely on. The challenge with this approach is that the pool is finite. If there is not enough resource to satisfy every demand, then something will suffer. We frequently see this in new virtualization deployments: a cluster is built and VMs are deployed, then over time more and more VMs are added until the cluster becomes overloaded. The same overloading can happen with network and storage resources, leading to performance issues. To avoid performance problems, we need to manage resources so that demands are satisfied. Ultimately, we need to make sure we deliver resources where they provide a benefit to the business.

It is important to understand that there are multiple approaches to managing these resources, and that the approach chosen is always a compromise. In many ways, the resource type doesn’t particularly matter: the principles are the same. There are three basic approaches:

  1. Buy too much, pool, and share it all.
  2. Don’t share so much. Have multiple small pools.
  3. Implement some sort of quality-of-service policy.

Most of the time, you will need to use a combination of all three, but the degree to which you use each will vary. The aim is to make sure that application performance never suffers due to a lack of resource. Every application should be able to deliver its value to the business whenever the business needs it.

Buy Too Much, Pool, and Share It All

Share everything, guarantee everything. This is how we get into financial trouble. Gather together all the resources into a single massive pool. Then, let every application compete for that huge pool with no controls. To avoid performance problems, buy more resources than you will ever need. In fact, buy enough resources to accommodate the sum of the peak loads on every application. That way, even if the peak loads on all the applications happen to occur at once, you still have enough resources. What you will really see is that the huge pool of resources is hugely underutilized. There is lots of headroom for peaks, but the peaks do not coincide. The problem is that all that headroom costs money, but doesn’t deliver value.
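The arithmetic behind this waste is easy to sketch. Assuming three hypothetical applications whose peaks fall in different hours (the demand numbers below are purely illustrative):

```python
# Hypothetical hourly CPU demand (in cores) for three applications whose
# peaks fall at different times of day.
app_a = [2, 2, 8, 2]   # peaks mid-morning
app_b = [2, 8, 2, 2]   # peaks early
app_c = [2, 2, 2, 8]   # peaks late

# Sizing for the sum of the individual peaks: enough capacity even if
# every application peaked in the same hour.
sum_of_peaks = max(app_a) + max(app_b) + max(app_c)

# Sizing for the peak of the combined demand: what the pool actually
# needs in its busiest hour, because the peaks do not coincide.
combined = [a + b + c for a, b, c in zip(app_a, app_b, app_c)]
peak_of_sum = max(combined)

print(sum_of_peaks)  # 24 cores bought
print(peak_of_sum)   # 12 cores needed in the busiest hour
```

In this toy case, sizing for the sum of peaks buys twice the capacity that the busiest hour ever uses; the other twelve cores are the headroom that costs money but delivers no value.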

Small Pools

The smallest resource pools occur when we dedicate physical resources to single applications. You might remember the pre-virtualization days when each application had its own server or servers. There was no sharing, so an application’s performance was limited only by the hardware it had. One way to guarantee resources to specific applications is to have a pool of resource just for that application—maybe a storage array and virtualization cluster just for an ERP application. It helps that this lines up with the way funding works: the ERP budget buys the ERP cluster. Over time, this approach builds a series of islands of resources. There will be another cluster for VDI, another one for the DMZ, and yet another for the general-purpose server population. Each island must have its own headroom for the demands of its applications. Also, each island must be managed separately. We may buy less hardware, but the cost to manage can be higher.

Implement Quality-of-Service Policy

Large pools are easier to manage and provide a greater aggregate resource. To keep the costs contained, we can overcommit a pool. That is, we don’t have enough resource to satisfy the peak demand of every application. Then we need additional control. We need to be able to control which application wins when there is a shortage of resource, and which application loses. We need to set policies for resource delivery that reflect the business value of the applications. This gets hard when we do not know the business values and particularly the relative values of each application. The multiple dimensions of resources also complicate the matter. Some applications may tolerate being starved of CPU better than others. The same is true for network and storage performance. Again, we get greater management complexity to avoid hardware costs.
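One common way to express such a policy is proportional shares, similar in spirit to the shares model in hypervisors such as vSphere: when demand exceeds supply, each application receives capacity in proportion to a share value that reflects its business priority. The sketch below is a simplified illustration, not a real hypervisor API; the application names and share values are invented:

```python
def allocate(capacity, demands, shares):
    """Distribute a scarce resource in proportion to per-app shares,
    never granting an app more than it demanded. Capacity left over by
    satisfied apps is redistributed among the rest (water-filling)."""
    allocation = {app: 0.0 for app in demands}
    remaining = capacity
    active = dict(demands)  # apps still wanting more resource
    while remaining > 1e-9 and active:
        total_shares = sum(shares[app] for app in active)
        this_round = remaining
        satisfied = []
        for app in active:
            grant = min(active[app], this_round * shares[app] / total_shares)
            allocation[app] += grant
            active[app] -= grant
            remaining -= grant
            if active[app] <= 1e-9:
                satisfied.append(app)
        for app in satisfied:
            del active[app]
    return allocation

# 10 cores of capacity, 20 cores of demand: the ERP application's high
# share value decides who wins and who loses during the shortage.
alloc = allocate(10, {"erp": 8, "vdi": 6, "test": 6},
                 {"erp": 4, "vdi": 2, "test": 1})
print(alloc)  # ERP gets the largest slice, test the smallest
```

When the pool is not contended, the same function simply gives every application its full demand; the policy only bites during a shortage, which is exactly when we need to know which application the business values most.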

Resource management is a fact of life for any IT infrastructure. There is always a compromise, and there is always a set of different ways to control resources. The single most important thing to aim for is resource delivery that delivers value to the business. The next most important thing is to keep resource management simple. In most organizations, you will need to use a combination of different approaches. I really like larger pools with resource policy. The pools are hardware defined, so they are slower to change and adapt. The policy is software defined, so it is faster to change and can be controlled programmatically.

Alastair Cooke
Alastair Cooke is an independent analyst and consultant working with virtualization and datacenter technologies. Alastair spent eight years delivering training for HP and VMware as well as providing implementation services for their technologies. Alastair is able to create a storied communication that helps partners and customers understand complex technologies. Alastair is known in the VMware community for contributions to the vBrownBag podcast and for the AutoLab, which automates the deployment of a nested vSphere training lab.