Both VMware (View 4) and Citrix (XenDesktop 4) are increasing their marketing and sales pushes for their hosted virtual desktop offerings. Hosted Virtual Desktop (HVD) is how we refer to the idea that users use a thin client (in hardware or software) to connect, via a connection broker and a remote access protocol (VMware PCoIP, Microsoft RDP, or Citrix HDX), to their operating system, applications, and data, which run as a guest on a host with a hypervisor (VMware ESX, VMware ESXi, or Citrix XenServer).

Hosted Virtual Desktops are similar to Presentation Virtualization (Citrix XenApp) in that the applications and data are running on centralized servers in the data center. The primary difference between Hosted Virtual Desktops and Presentation Virtualization is that with Presentation Virtualization, there is one central server OS (a copy of Windows Server) that supports N instances of M applications, whereas with HVD there is an OS and a set of applications for each user instead of just one OS for all of the users.
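
To make that difference concrete, here is a back of the envelope sketch (in Python, with purely illustrative numbers that are our assumptions, not measurements) of why giving every user their own OS multiplies the memory footprint relative to one shared server OS:

    # Back-of-envelope comparison of OS overhead: Presentation Virtualization vs. HVD.
    # All figures below are illustrative assumptions, not measured values.

    USERS = 100
    SERVER_OS_OVERHEAD_GB = 4      # assumed RAM used by one shared Windows Server OS
    DESKTOP_OS_OVERHEAD_GB = 1     # assumed RAM used by each per-user desktop OS
    APP_SET_GB = 0.5               # assumed RAM for one user's application working set

    # Presentation Virtualization: one OS, N user sessions sharing it.
    pv_total_gb = SERVER_OS_OVERHEAD_GB + USERS * APP_SET_GB

    # HVD: one OS plus an application set per user.
    hvd_total_gb = USERS * (DESKTOP_OS_OVERHEAD_GB + APP_SET_GB)

    print(f"Presentation Virtualization: ~{pv_total_gb:.0f} GB for {USERS} users")
    print(f"Hosted Virtual Desktops:     ~{hvd_total_gb:.0f} GB for {USERS} users")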

That difference aside, these two methods of application delivery share many technical and business attributes. Both are primarily for users who are connected almost all of the time, and who do not need a great deal of customization, such as installing their own applications. These solutions do support some degree of customization, but they are not designed to offer users the flexibility that a fat client PC or a laptop computing environment can offer.

Hosted Virtual Desktop environments offer the benefits of centralization, consolidation, better management of resources, better management of availability, and in some cases better performance for end users due to the power of the servers that host the end user desktop environments. However, those benefits come at the price of a fairly complex environment with many moving parts, and a set of challenges. The complexity of the environment is depicted below.

There is one other important attribute that Presentation Virtualization and Hosted Virtual Desktops share. That attribute is that the act of taking a local computing environment away from a user and centralizing it creates a set of dynamics around end user experience and performance. These dynamics are:

  1. Once the change is made, users will blame any and all degradations in performance upon the HVD platform. The HVD platform and the people who designed it, administer it and support it are guilty until proven innocent of all allegations of performance issues.
  2. IT staff who lived through the process of Citrix becoming the successful first generation application centralization technology remember that they were guilty until proven innocent the last time this was tried, and they do not have pleasant memories of the experience.
  3. The platform vendors do not provide tools that allow their own platforms to be monitored or managed from the perspective of measuring and ensuring that end users are having an acceptable experience, and of helping to find where the issue lies when that experience degrades.
  4. The remote protocols that are used to deliver the application to the end user’s device obscure almost all of the information that would allow individual applications and individual transactions to be measured and diagnosed. A fair generalization is that these protocols send differences in the bitmaps of the screen over the wire. Compared to an HTTP request/response sent to a browser, or a URL response time measured at the client, this bitmap information is useless for application and transaction performance management purposes.
  5. The load patterns of guests that contain end user operating systems and applications are highly variable, especially compared with the load patterns of server based applications. This makes inferring end user experience from how CPU, memory, and I/O resources are being used a less than productive approach in HVD environments.
  6. The highly variable and intermittent nature of the issues in HVD environments creates a need for near real-time performance data. Approaches that poll the servers (or the vCenter APIs) at 5-minute intervals will simply miss far too many of the issues to be useful, as the sketch after this list illustrates.
  7. Products (even virtualization focused products) that are competent at monitoring virtualized servers do not (at least at this stage of the game) contain the specific features required to monitor HVD in production.
  8. A great many HVD pilots are ongoing, with more starting every day. However, the number of projects that are exiting pilot and going into production pales compared to the number that are being started and that are “stuck” in pilot for a variety of reasons. Allegations of performance issues on the part of end users are one of the principal reasons for the “stuck in pilot” issue with HVD projects.
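
To illustrate point 6 above, the following sketch (using synthetic numbers only) shows how a 30 second latency spike that a user definitely feels simply disappears into a 5 minute average:

    # Illustration of why coarse polling misses transient HVD issues (item 6 above).
    # Synthetic data only: a 30-second latency spike inside a 5-minute window.
    import statistics

    WINDOW_SECONDS = 300                 # one 5-minute polling interval, sampled per second

    # Baseline latency of 20 ms with a 30-second spike to 500 ms.
    latency_ms = [20.0] * WINDOW_SECONDS
    for t in range(120, 150):            # the spike: seconds 120 through 149
        latency_ms[t] = 500.0

    five_minute_average = statistics.mean(latency_ms)
    worst_second = max(latency_ms)

    print(f"5-minute average: {five_minute_average:.0f} ms")   # ~68 ms: looks acceptable
    print(f"Worst second:     {worst_second:.0f} ms")          # 500 ms: what the user felt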

For the above reasons, organizations that are serious about getting HVD into production are encouraged to investigate an HVD-competent end user experience management solution during the pilot phase of their HVD project. There are several solutions from performance management vendors that are specifically focused upon this problem. Due to the very early nature of the market for monitoring HVD in production, the companies that address this problem take fairly different approaches to solving it. These solutions are profiled below:

LiquidWare Labs Stratusphere

Prior to the existence of Liquidware Labs, there was a company called vmSight. vmSight had unique technology that identified each guest and the user of that guest, and a virtual appliance that watched end user experience from the perspective of each guest (which was a virtualized desktop). vmSight landed some large accounts, but was early to what was then the VDI game, since there were few production VDI installations. David Beineman, the founder of VizionCore (who sold VizionCore to Quest Software), determined that what was really needed was a way to determine (assess) which physical desktops should or should not be virtualized: a kind of VMware Capacity Planner, but tuned for the VDI (HVD) case. David and Tyler Roher bought vmSight and built a new company around it, focused upon HVD assessments, which they named Liquidware Labs. All of this occurred in 2009.

Now Liquidware Labs (LWL) is in the position of having a good set of partners who use their technology to do HVD assessments. Perot Systems and over 40 other Premier VMware Partners (including VMware Services themselves) are using LWL Stratusphere to do HVD assessments at large enterprises all over the world. LWL’s strategy is to engage with the VAR and the customer at the assessment phase of the project, assist in the migration of the user from a physical to a hosted virtual desktop, and then sell a production VDI performance management solution to the customer to ensure good end user experience on an ongoing basis.

Stratusphere collects its data via two different mechanisms. An agent installed on the desktop OS collects the metrics required to assess whether or not that desktop should be virtualized. Once desktops are virtualized, comprehensive metrics about the performance of each guest are collected by a virtual appliance that sits on the virtual mirror port on the vSwitch. The Stratusphere product provides customers with two unique metrics. VDI FIT is an assessment of how appropriate it is to virtualize a particular physical desktop. VDI UX is an assessment of the User Experience of that user, whether they are on a physical fat client desktop or in an HVD environment. Comparing VDI UX for a set of users when they were physical vs. after they have been virtualized is a great way to get a before and after comparison of the users’ experience. The VDI UX data comes from the virtual appliance, which allows LWL to see all interactions between guests and hosts on the part of users and applications in the VMs on each host. The company has recently added a one-click report to the product that creates a PowerPoint presentation summarizing the findings of the assessment for the VAR/consultant and the end customer.
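
We do not know the exact formula behind VDI FIT, but conceptually it distills many per desktop measurements into a single fitness score. The sketch below is a purely hypothetical illustration of that idea; the metric names, weights, and values are invented:

    # Hypothetical illustration only: the real VDI FIT formula is not ours to describe.
    # Metric names, weights, and thresholds below are invented for illustration.

    def fitness_score(metrics, weights=None):
        """Distill per-desktop metrics (0.0 = poor candidate, 1.0 = ideal) into one score."""
        weights = weights or {"cpu_headroom": 0.3, "io_profile": 0.3,
                              "graphics_intensity": 0.2, "network_locality": 0.2}
        return sum(weights[name] * metrics[name] for name in weights)

    # Example: a desktop with modest CPU use but heavy graphics scores lower.
    desktop = {"cpu_headroom": 0.9, "io_profile": 0.7,
               "graphics_intensity": 0.3, "network_locality": 0.8}
    print(f"Candidate fitness: {fitness_score(desktop):.2f}")  # 0.70 on this invented scale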

The best part of Stratusphere is that it can be effectively used to establish the baseline of UX for users before they get virtualized and compare the UX once virtualized to that physical baseline. This gives enterprises with HVD projects a realistic and objective way to compare user experience on a before and after basis.

The issue with Stratusphere is that there are limitations to how close you can get to true end user experience with data collected from a mirror port on a vSwitch. Looking at TCP/IP data in an application agnostic and independent manner allows LWL to provide a UX metric for every application, but it also creates a situation where that metric may, in some cases, not bear a good resemblance to actual application response time or transaction response time.

Xangati

Xangati is a California based startup with a NetFlow based appliance that is used to monitor and assess the performance of an HVD environment from the perspective of the NetFlow data. This approach has several advantages. Unlike a virtual appliance on a mirror port, where one is needed for each virtual host, a NetFlow appliance is just another IP connected device on the subnet. You simply configure your routers and switches to point their NetFlow data at the IP address of the Xangati appliance. NetFlow data is interesting in that it contains fairly robust discovery of the applications on the network (port and protocol), and in that it is near real time and continuous. This allows the Xangati appliance to provide a very comprehensive, continuous, and near real time (Xangati calls this streaming) view of the performance of the HVD system from the perspective of the network. Since the network is the medium that connects all aspects of the HVD environment to each other, it tends to be a very productive place to gather the needed data, without getting into the complexities of individually instrumenting the various elements of the HVD system.
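
To illustrate why port and protocol based flow data is so useful for application discovery, here is a small sketch that aggregates a handful of invented flow records by well known port. It is not Xangati’s code; a real collector would decode the NetFlow datagrams exported by the routers and switches:

    # Sketch of how flow records (endpoints, protocol, port, bytes) support the
    # port-and-protocol application discovery described above. Sample records are invented.
    from collections import defaultdict

    flows = [
        # (src_ip,       dst_ip,       protocol, dst_port, bytes)
        ("10.0.1.15", "10.0.2.10", "tcp", 3389, 120_000),   # RDP session traffic
        ("10.0.1.22", "10.0.2.10", "tcp", 3389,  95_000),
        ("10.0.2.10", "10.0.3.5",  "tcp", 1433,  40_000),   # desktop guest to SQL Server
        ("10.0.2.10", "10.0.3.9",  "tcp",  445, 300_000),   # file share traffic
    ]

    WELL_KNOWN = {3389: "RDP", 1433: "MS SQL", 445: "SMB"}

    bytes_by_app = defaultdict(int)
    for src, dst, proto, port, nbytes in flows:
        bytes_by_app[WELL_KNOWN.get(port, f"{proto}/{port}")] += nbytes

    for app, nbytes in sorted(bytes_by_app.items(), key=lambda kv: -kv[1]):
        print(f"{app:8s} {nbytes / 1024:8.0f} KB")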

Another unique feature of Xangati is the product’s ability to provide DVR-like recording and playback of activity on the network. This capability can be combined with a user initiated trouble ticket, allowing a recording of the activity on the network to accompany the report of the problem to the Help Desk.

The best part of the Xangati solution is the comprehensive, near real time, and continuous instrumentation of the network through NetFlow, combined with the DVR-like recording and the integration of these recordings into user initiated trouble tickets. The only issue with the Xangati solution is its dependence upon NetFlow as the source of data. This creates some limitations on what Xangati can see. For example, if there is an issue in how two guests on one host are communicating with each other, it is not clear that these interactions will be seen by the physical switches and routers that provide the NetFlow data to the Xangati appliance. Addressing these issues is on the product roadmap for the company, so they will be resolved over time.

Xangati recently held a webinar with The Virtualization Practice. A recorded version of the webinar is available on the Xangati web site.

Aternity FPI

Aternity is one of two vendors (Knoa is the other one) that takes the approach of putting an agent on the desktop OS (whether physical or virtual), where this agent is focused upon gathering not just asset, configuration, and resource utilization data, but, far more importantly, the actual response times that users are experiencing for key transactions within their applications. Understanding the actual end user experience on a per transaction basis within an application, measured at the user’s pane of glass, had been something of an unattainable holy grail until Aternity and Knoa came upon the scene. The Aternity agent performs its instrumentation via cartridges, with a cartridge available for each broad class of applications. There are cartridges for just about every class of Windows based desktop application, including HTTP applications, Rich Internet Applications (AJAX and JavaScript), COM applications (Microsoft Office), and client side Java applications. This cartridge approach covers applications from software vendors as well as custom developed applications.

There are some manual steps required in order to achieve this level of transaction response time monitoring. Upon installation, the Aternity agent will see all of the transactions within each of the monitored applications. However, it will not know which transactions are of interest. Configuring transactions of interest consists of taking a capture of the process the user goes through when they execute the transaction from beginning to end, and then playing that capture back in a tool that shows the events captured by the Aternity agent while the transaction was being executed by the user. This is a process that Aternity trains the customer to do for themselves, so that once the first few transactions are defined the customer can add to the monitored set at their own pace.
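
Conceptually, a defined transaction of interest boils down to a pair of start and end events in the captured event stream, with the response time being the elapsed time between them. The sketch below illustrates that idea only; the event names and the capture format are invented, not Aternity’s actual event model:

    # Hypothetical illustration of timing a "transaction of interest" from a stream of
    # UI events, in the spirit of the capture/playback workflow described above.
    # Event names and the event stream are invented.

    def transaction_time(events, start_event, end_event):
        """Return seconds between the start and end events of one transaction, or None."""
        start = end = None
        for name, timestamp in events:
            if name == start_event and start is None:
                start = timestamp
            elif name == end_event and start is not None:
                end = timestamp
                break
        return (end - start) if start is not None and end is not None else None

    captured = [("button_click:SubmitOrder", 10.00),
                ("window_busy", 10.02),
                ("render_complete:OrderConfirmation", 12.35)]

    elapsed = transaction_time(captured, "button_click:SubmitOrder",
                               "render_complete:OrderConfirmation")
    print(f"Submit Order transaction took {elapsed:.2f} s")   # 2.35 s at the user's screen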

The best part of the Aternity solution is that it gives you the data that tells you exactly what the end user is experiencing on the screen of their workstation, along with all of the other data about what was occurring on that workstation at the time of a response time degradation. The Aternity and Knoa products are the only two solutions on the market that provide this precise and highly desirable data.

The issue with Aternity is that it is all about the end user, and not the back end infrastructure that supports an HVD environment. This means that Aternity itself is blind to what is happening back in the HVD infrastructure. However, Aternity’s product is open and its data is available for integration with back end monitoring systems. The company has already done integrations with vendors like Compuware and CA, which bring Aternity’s end user perspective into a comprehensive back end management system.

Knoa EPM and GEM

Knoa is like Aternity in that the Knoa solutions are based upon an agent that is installed on the workstation of the end user. There are two Knoa products. The Knoa Experience and Performance Manager (EPM) product has been shipping for several years. This product collects the most comprehensive end user experience data for a selected set of business critical applications. Most of the important vendor applications are covered, including SAP, Microsoft Dynamics, Microsoft Outlook, Microsoft SharePoint, Oracle E-Business Suite, Oracle Siebel CRM, and PeopleSoft. EPM collects transaction response time data for every transaction in these applications. EPM also collects how the users are using the applications (their workflow within and between the applications) as well as all environmental and error data. In summary, the EPM product provides hands down the most comprehensive and complete picture of everything that is occurring within one of the applications it supports. The instrumentation of SAP R/3 is in fact so deep that SAP resells the Knoa solution to its customers to allow them to improve the effectiveness with which they use SAP.

In August of 2009, Knoa announced its Global End-User Monitoring Solution (GEM). GEM will enable organizations to monitor end-user experience and interaction for all desktop and web-based applications running on users’ desktops, without application specific profiles. Whereas the EPM product is known for deep transaction coverage of a selected set of applications, GEM will be focused upon providing a realistic and accurate response time number for every application on the user’s desktop, automatically and with no configuration required. Once GEM is delivered with these global response time metrics across both web and Win32 applications, Knoa will be in the unique position of being able to offer customers a choice of in-depth transaction monitoring for a few applications (EPM), and/or less detailed but fully automatic monitoring of all applications (GEM).

The best thing about Knoa is that if you have one of the applications supported by EPM, there is no other solution on the market that provides that level of out of the box instrumentation for the application. Knoa has done all of the work to instrument every transaction for the applications supported by the EPM product. The data provided by EPM is useful for a variety of tasks beyond response time management, including user training, fostering greater application adoption, and fostering greater user productivity. Once GEM is fully available with web and Win32 response time monitoring for every application, IT organizations will for the first time have a tool that provides true application response time data for every user application out of the box.

The issues with the Knoa solutions are the same as those with Aternity. Knoa’s solutions do a great job of monitoring the “user tier” of an application system and, like Aternity, are blind to the back end components that make up the system. However, like Aternity, Knoa’s data is available via web services interfaces for integration with back end monitoring solutions.

Lakeside Software SysTrack VMP

Lakeside Software is a vendor with a long history and deep experience in the monitoring of centralized application delivery solutions. Back in the early days of Citrix, Lakeside was the first vendor with a comprehensive approach to resource monitoring for Citrix MetaFrame  (as XenApp was known back then). Citrix even licensed the core technology for inclusion in its products which became the RMS feature in MetaFrame.

Lakeside offers several products. Its Systrack Suite has been used for years to monitor resource utilization, event logs, and services, to track inventory and assets, to do capacity planning, to track and manage licenses, and even to do resource utilization based chargeback for many different types of Windows servers. The desktop virtualization focused product is called Systrack Virtual Machine Planner (VMP), and this solution leverages much of the rich base of technology that Lakeside has built over time. Systrack VMP is, like Liquidware Labs Stratusphere, a solution designed to allow VARs and end user customers to assess their physical desktops and determine from this assessment which desktops should and should not be moved into a hosted virtual desktop environment.

Systrack VMP uses the same Windows based agent that the rest of the Systrack products are based on. This agent collects an extremely extensive set of data about the workload on each workstation, including what applications are run and how those applications behave. For example, applications that make intensive use of high motion, high resolution graphics are automatically identified, as these can be problematic in HVD environments. The agent is part of a distributed data collection network that scales up to the largest desktop installations.

All of this data is collected and processed, and is then distilled down to a summary report that identifies which workstations should and should not be moved into an HVD environment, as well as which desktops should be placed together on the same host (because of their load profiles, and because their common applications would benefit from VMware Page Sharing).
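
The co-placement reasoning can be illustrated with a simple sketch: desktops whose application sets overlap heavily are good candidates to share a host, since their common applications give the hypervisor more identical memory pages to share. We do not know Lakeside’s actual placement logic; the similarity threshold and the data below are invented:

    # Hypothetical sketch of co-placement reasoning: group desktops whose application
    # sets overlap heavily. The threshold and sample data are invented.

    def jaccard(a, b):
        """Overlap between two application sets (1.0 = identical, 0.0 = disjoint)."""
        return len(a & b) / len(a | b)

    desktops = {
        "finance-01": {"excel", "outlook", "sap_gui"},
        "finance-02": {"excel", "outlook", "sap_gui", "chrome"},
        "cad-01":     {"autocad", "outlook"},
    }

    THRESHOLD = 0.5
    pairs = [(d1, d2) for i, d1 in enumerate(desktops)
             for d2 in list(desktops)[i + 1:]
             if jaccard(desktops[d1], desktops[d2]) >= THRESHOLD]

    print("Co-placement candidates:", pairs)   # finance-01 and finance-02 group together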

Since the data collection agent for Systrack VMP is the same as the agent for the Systrack Suite (the production monitoring solution from Lakeside), Lakeside offers a smooth transition from planning the migration to HVD to the monitoring of the production HVD environment.

The best thing about Systrack VMP is that it is based on the extremely mature and extremely comprehensive data collection agent that is the basis of the venerable Systrack Suite. Therefore the same product family spans both the assessment and the production monitoring aspects of an HVD project.

The issue with Systrack is rooted in its strength. Data collection for Systrack is done by the Systrack agent. This means that the agent must be installed on every physical desktop, and in every virtual desktop. This also means that if the information is not accessible via an agent on the desktop, then Systrack is not going to be able to access it.

Summary

Hosted Virtual Desktop (aka VDI) environments are sufficiently complex and different from either physical desktops or virtualized servers to warrant a dedicated approach to planning and assessing the migration from physical desktops to virtual desktops, and a dedicated approach to monitoring the resulting HVD environment in production.
