I have long held what some regard as an odd viewpoint on monitoring performance in desktop environments, and from a traditional perspective it probably is odd. To me, desktop monitoring covers all areas of performance monitoring, whether of physical desktops or of virtual desktops delivered by way of a remoting protocol such as RDP, ICA, or PCoIP. By now it should be clear that, in my view, the only true metric is user perception. We all know, however, that this is a very difficult metric to measure: EUC performance, like beauty, exists in the eye of the beholder.
In “Do Users Have a Negative Perception of Desktop Virtualization?”, James Rankin raised a set of issues that arise whenever a new platform is deployed in an organization. Those issues revolve around the fact that users tend to blame every user experience problem on the new platform, even problems that existed before the platform was deployed. In the case of a Citrix or VMware VDI deployment, this takes the form of “Citrix is slow” or “View is slow.”
When the discussion turns to monitoring, it most often focuses on either the performance and availability of the infrastructure, or the performance and availability of certain key applications. While it is essential to monitor both the infrastructure (hardware and software) that supports the key applications in your environment and the applications themselves, it is equally critical not to overlook the single most important person in the environment: the actual end user of the application or IT service.
Both VMware (View 4) and Citrix (XenDesktop 4) are increasing the marketing and sales push for their hosted virtual desktop offerings. Hosted virtual desktop refers to the idea that users employ a thin client (hardware or software) to connect, via a connection broker and a remote access protocol (VMware PCoIP, Microsoft RDP, or Citrix HDX), to their operating system, applications, and data, which run as a guest on a host with a hypervisor (VMware ESX, VMware ESXi, or Citrix XenServer).
2010 will be the year that many enterprises confront two very important changes in how they use server virtualization. The first change is that, now that VMware vSphere has proven its maturity, performance, and scalability, enterprises will increasingly put their business-critical tier of applications, at least in part, on virtualized platforms. The second change is that, at the same time, these very same enterprises will start to evaluate virtualization platforms from other vendors, in particular Hyper-V from Microsoft.
Performance management for virtualized systems (a topic covered in great detail in the white paper referenced at the end of this article) is very different from performance management for physical systems, for the following reasons:
Virtualized systems group servers into resource pools, which creates shared pools of CPU and memory. As a result, measuring how much CPU or memory is available on an individual server, or is consumed by an application on that server, is much less relevant than it was on physical systems.
Virtualized systems are highly dynamic, with workloads moving automatically between physical servers based on supply and demand. This makes discovering where applications are running, and how they interact with each other, critical to understanding what is slow and why.
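The resource-pool point above can be illustrated with a minimal sketch. The host names and GHz figures below are hypothetical (not output from any real hypervisor API); the point is only that a single host can look saturated while the pool it belongs to still has plenty of headroom, because the scheduler can rebalance load across hosts.

```python
# Hypothetical CPU capacity and current demand (GHz) for each host in a pool.
hosts = {
    "esx01": {"capacity": 24.0, "demand": 22.0},  # nearly saturated on its own
    "esx02": {"capacity": 24.0, "demand": 6.0},   # mostly idle
    "esx03": {"capacity": 24.0, "demand": 10.0},
}

def host_utilization(host):
    """Utilization of one host viewed in isolation."""
    return host["demand"] / host["capacity"]

def pool_utilization(hosts):
    """Utilization of the shared pool: total demand over total capacity."""
    total_capacity = sum(h["capacity"] for h in hosts.values())
    total_demand = sum(h["demand"] for h in hosts.values())
    return total_demand / total_capacity

for name, h in hosts.items():
    print(f"{name}: {host_utilization(h):.0%} utilized")

# esx01 reports 92% utilization in isolation, but the pool as a whole is
# only about 53% utilized -- the per-host number alone is misleading.
print(f"pool: {pool_utilization(hosts):.0%} utilized")
```

This is why pool-level capacity figures, not per-host counters, are the meaningful unit of measurement once servers are aggregated into resource pools.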