CIOs see selecting the right technology provider for their desktop virtualization strategy as a “significant risk”, according to research firm Ovum. Ovum found that simplifying desktop management to reduce costs and increasing business agility were the top two reasons for implementing desktop virtualization. An often overlooked aspect, however, is the need to shift thinking from a device-centric perspective to a user-centric one.

A user-centric view can in part be delivered by considering adjacent solutions from the likes of AppSense, RES and TriCerat. Yet one of the things that continues to crop up when people discuss moving Desktop Virtualization (VDI) projects out of pilot and into production is how to measure and ensure that the users of applications delivered via virtualized desktops are experiencing acceptable application performance. While this is a “new” problem for those coming to centralized desktop services for the first time through Desktop Virtualization (Citrix XenDesktop, Quest vWorkspace, VMware View), it is an “old” problem for practitioners of presentation virtualization (Citrix XenApp, Microsoft Remote Desktop Services).

Selecting the right technology provider is quite rightly considered “a significant risk”: a move from a distributed service to a centralized one is a fundamental change in technology and service provision. How the performance of that service is measured is a key part of understanding when that service is delivering as expected, and whether that service should be kept in-house or sourced externally.

The first question that we have to ask is, “Is the old problem of measuring applications performance and end user experience for Citrix XenApp (MetaFrame/Presentation Server), and Microsoft Remote Desktop Services/Terminal Services related to the new problem of measuring applications performance and end user experience for desktop virtualization solutions from vendors such as Citrix, Ericom, Quest and VMware”?

While Desktop Virtualization and Presentation Virtualization differ in many ways, they share the following core elements:

  • The user’s application(s) no longer execute on the local desktop workstation or laptop. Rather, they run either on a shared server (in the case of Presentation Virtualization) or in a dedicated copy of a desktop operating system (in the case of VDI) running in the back-end data center.
  • The user interface of the application is not presented locally to the end user’s device by the application. The application presents its user interface to what it thinks is the device driver for the end user’s display. That device driver in fact runs not on the end user’s device, but on either a back-end server shared by many users or a back-end instance of an operating system dedicated to one user.
  • The actual user interface to the application is remoted from the back-end server or desktop OS across a network to the end user. This is done via a protocol that captures what is changing on the session’s screen and sends just those changes over the wire to the end user’s device (a dramatic oversimplification of what these remoting protocols do, but correct as a concept).
  • The network sitting between users and their applications is the key technical challenge, and it creates many of the application performance and user experience problems with these approaches. Instead of a simple application like Microsoft Word presenting its user interface to a local display driver that puts it on a monitor, a TCP/IP network is inserted between the application’s user interface and the end user’s actual device.
  • The protocols that are used to remote the user interface to the end user across Presentation Virtualization and Desktop Virtualization are largely the same. Citrix ICA/HDX is used for both. Microsoft RDP is used for both. The only reason that VMware PCoIP is not used for both is that VMware does not have a Presentation Virtualization solution to offer its customers.
  • You can’t “have your cake and eat it too”. IT wants everything standardized (to maximize IT’s productivity), while users want to customize everything and install their own applications (to maximize their own productivity). This relates to performance management because an infinitely variable centralized environment is not likely to be any easier to manage for performance than infinitely variable fat-client desktops and laptops.
  • The services only work while the user is connected to the server delivering their application sessions. Absent a reliable, low-latency, high-bandwidth network connection between the user and the centralized desktops, applications, and data, these solutions can experience performance issues.
  • It is not uncommon for users to blame any performance problem (real or merely perceived) on either the Presentation Virtualization or the Desktop Virtualization infrastructure. This puts the IT team that engineers and supports these environments in a “guilty until proven innocent” position and creates a problem that must be solved with an appropriate performance management solution. The business impact is wasted effort and resources for the IT team, and lost productivity for the user community.
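The “send just the changes over the wire” idea from the bullets above can be sketched in a few lines. This is a hypothetical illustration of the concept only, not any vendor’s actual protocol; real remoting protocols (ICA/HDX, RDP, PCoIP) add compression, caching, encryption, and far more:

```python
# Hypothetical sketch of the screen-diff idea behind remoting protocols.
# Frames are modeled as lists of pixel rows; only changed tiles are "sent".

def changed_tiles(previous, current, tile_size=2):
    """Compare two frames tile by tile and return only the tiles
    that differ, keyed by their (x, y) position."""
    updates = {}
    for y in range(0, len(current), tile_size):
        for x in range(0, len(current[0]), tile_size):
            tile = [row[x:x + tile_size] for row in current[y:y + tile_size]]
            old = [row[x:x + tile_size] for row in previous[y:y + tile_size]]
            if tile != old:
                updates[(x, y)] = tile
    return updates

frame1 = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
frame2 = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]

# Only the one changed tile crosses the "wire", not the whole frame.
updates = changed_tiles(frame1, frame2)
print(updates)  # {(2, 2): [[1, 1], [1, 1]]}
```

Note what this stream does not contain: no application requests, no responses, no transactions. That absence is exactly what makes application response time so hard to observe from the outside.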

The bottom line is that if your user community cannot live with a standardized set of applications that can only be accessed when connectivity is present, Desktop Virtualization and Presentation Virtualization solutions will struggle to deliver an effective operational environment. While these solutions have some important technical differences, they largely address the same use case, use the same basic approach to running applications, and therefore present very similar performance management challenges.

Recent Dynamics and Changes

While Microsoft Remote Desktop Services (formerly Terminal Services), and solutions built upon it (such as Citrix XenApp) have been around for some time, Desktop Virtualization (VDI) is the new kid on the block. Desktop Virtualization has introduced an important new set of dynamics and requirements into the mix:

  • With Desktop Virtualization every user gets their own virtualized Desktop OS with their own set of applications running on that OS. This is different from Presentation Virtualization where there is a unique instance of an application for each user but the underlying OS is a shared server OS, shared by all of the applications and all of the users.
  • The one-to-one mapping of users to applications and operating system instances has given rise to the concept of Desktop as a Service, implemented either by forward thinking IT departments, or by cloud providers like Desktone.
  • The benefits of centralizing servers have led to an effort to get the same benefits from centralizing desktops.
  • The desire to get the benefits of centralization of servers applied to the centralization of desktops has put tremendous pressure on both internal and external service providers to be able to provide a “Desktop as a Service” experience that is acceptable to end users, and that can be aggressively rolled out to capture a greater slice of the business PC market.

Environmental Complexity

As mentioned above, both Desktop Virtualization and Presentation Virtualization replace a relatively simple environment (a PC with a local hard disk, memory, CPU, operating system, and applications) with a very complex shared back-end infrastructure. We’ve discussed the Desktop Virtualization Iceberg before, of course. Yet if we look at a service as a part of a whole, there is even greater complexity, which is captured (in part) by the diagrams below:

 

A Typical Citrix XenApp Architecture

 

A Typical VMware View Architecture

 

The Desktop Virtualization and Presentation Services Performance Management Issue
For the reasons listed above, many Desktop Virtualization projects (and some Presentation Virtualization projects as well) become mired in pilot because there is no good way to first measure the performance of the system in the eyes of its end users, and then ensure that acceptable performance is delivered consistently and that issues are highlighted proactively. Given that server-based computing vendors such as Citrix have been delivering solutions for some time, surely this is an easily solved problem? Indeed, back in 2006 Citrix acquired Reflectent and brought the EdgeSight product to market positioned specifically to address such issues.

Does Citrix EdgeSight Address These Issues or Not?

Citrix has been developing and marketing its EdgeSight product line with the intent of addressing these issues for quite some time. Ignoring the version of EdgeSight for pre-production testing and sizing, there are three editions of EdgeSight:

  • EdgeSight for Endpoints – Continuous and comprehensive visibility into application performance at the desktop, enabling IT professionals to identify issues before business is disrupted and restore service in the shortest time possible.
  • EdgeSight for XenApp – Session-level application and system performance monitoring for Citrix XenApp infrastructures provides real-time performance information for proactive problem solving and infrastructure optimization.
  • EdgeSight for NetScaler – Browser-level web application performance monitoring, identifying performance issues for users at any location and providing critical visibility into problem origin.
Let’s drill into each of these and figure out where the value is and where the holes are:
  • EdgeSight for Endpoints relies upon a Windows-specific client-side agent. That agent does a pretty good job of characterizing applications from a resource utilization perspective, but it does nothing to help you understand application response time. Therefore, even if your endpoint devices are Windows PCs (instead of thin-client terminals, smartphones, or tablets), EdgeSight for Endpoints provides little real-time visibility into application response time or end user experience.
  • EdgeSight for XenApp relies upon an agent that runs on the XenApp server(s). This agent collects a deep and comprehensive set of statistics about the resource utilization of the XenApp server. The only true “performance” statistic that EdgeSight for XenApp collects is how long logons take, which is one critical Key Performance Indicator for a XenApp farm, but one that has nothing to do with application performance or end user experience for XenApp-delivered applications.
  • EdgeSight for NetScaler appears to be the only part of the product line that brings true application performance information (response times) to the table. Since the NetScaler sits at the edge of the web server farm, it sees requests made to URLs on web servers and the application’s responses to those requests. Since this is based upon HTTP instrumentation, it is obviously limited to web applications. While this is a valuable capability, it is not unique, and similar (and better) capabilities exist in the APM solutions from BMC (Coradiant), Compuware (dynaTrace), Quest (Foglight), New Relic, AppDynamics, BlueStripe, ExtraHop, and AppFirst.
So the conclusion is that for all of Citrix’s claims that EdgeSight is “Bridging the Gap Between System Performance and User Experience”, other than for web applications (whether or not they are delivered over Presentation Virtualization or Desktop Virtualization), EdgeSight does little to measure and ensure either application performance (response time) or end user experience for HDX-delivered applications accessed via XenApp or XenDesktop.
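The reason web applications are the easy case is that HTTP pairs every request with a response, so an observer in the path (a NetScaler, for example) can simply time the interval between the two. A minimal sketch of that idea follows; the handler and back end are invented stand-ins, not any real EdgeSight or NetScaler API:

```python
import time

def timed_request(handler, request):
    """Time a request/response pair. Because HTTP has an explicit
    transaction boundary, response time is just the interval between
    the request going in and the response coming back. (Illustrative
    only; real edge devices do this passively on the wire.)"""
    start = time.perf_counter()
    response = handler(request)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return response, elapsed_ms

def slow_backend(request):
    """Invented stand-in for a web application back end."""
    time.sleep(0.05)  # simulate 50 ms of server-side work
    return "200 OK"

response, elapsed_ms = timed_request(slow_backend, "GET /index.html")
print(f"{response} in {elapsed_ms:.1f} ms")
```

No equivalent passive timing is possible for an ICA, RDP, or PCoIP stream, because those protocols carry screen updates rather than request/response transactions, which is precisely the problem described above.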

Lest this be perceived as unduly harsh on Citrix, it must also be pointed out that none of the other vendors of Presentation Virtualization or Desktop Virtualization application delivery platforms offers a solution that helps solve this problem. This is true for Microsoft, Quest, Red Hat and VMware, as well as for emerging vendors like Ericom and Virtual Bridges. Solving this problem is therefore left to the vendors who specialize in next-generation performance management issues, not to the platform vendors.

Requirements for Applications Performance and End User Experience Management in Presentation Services and Desktop Virtualization Environments

So what should we look for in a solution that can address these needs:
  1. Performance metrics need to be focused on the user experience. Performance is less about the CPU/memory of the back-end desktop host and more about the response time that applications delivered over PV or VDI infrastructures deliver to their end users, or, if one chooses to look at it from the users’ perspective, the actual experience of the end user. Note that end users neither know nor care how many resources are consumed between their click on the screen and the answer; they just care that it happens quickly and consistently.
  2. We need to recognize that it is an inherent feature of the remoting protocols (ICA, RDP, PCoIP, Spice, et al.) that the request/response notion of a transaction that exists in HTTP is purposely obliterated. Remoting protocols do not send “real” data over the wire (which can also be a benefit: it is how they meet data security requirements in sensitive industries like health care and financial services). So the difficulty in solving the application performance and end user experience problem stems from the fact that Desktop Virtualization and Presentation Virtualization solutions were designed to solve a different problem, and created this problem as a result.
  3. The end-to-end complexity of these environments needs to be considered. The bottom line is that when you deploy either a Presentation Virtualization or a VDI solution, you are replacing a very simple system (a PC with a hard disk, a CPU, and some memory) with a very complex back-end infrastructure, often consisting of shared storage, server farms, layers of supporting infrastructure servers, and finally a network over which it all has to be delivered to the end user.
  4. Interpretation of data that allows decisions to be made quickly is key. Commodity data will produce commodity results. Any solution that just pulls data from WMI, the Citrix APIs, SNMP, SMI-S, or any other public interface is going to regurgitate the obvious in a pretty report or dashboard that does nothing to actually solve the problem. This is a hard problem, and it will only be meaningfully attacked by vendors that go out of their way to get real response time or latency data that is not available via some standard and relatively unhelpful API.
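As a concrete illustration of requirement 1, user-focused metrics are summaries of observed click-to-response times, not back-end counters. The sketch below uses invented sample data to show why the tail of the latency distribution matters: the median can look healthy while the 95th percentile captures the slow responses users actually complain about:

```python
import math

# Hypothetical sketch: summarize user-perceived response times rather
# than back-end CPU/memory counters. The sample data is invented.

def latency_summary(samples_ms):
    """Return the median and 95th-percentile response time.
    Consistency (the tail) matters to users as much as the average."""
    ordered = sorted(samples_ms)

    def percentile(p):
        # Nearest-rank percentile on the sorted samples.
        rank = max(1, math.ceil(p / 100.0 * len(ordered)))
        return ordered[rank - 1]

    return {"p50": percentile(50), "p95": percentile(95)}

# Click-to-screen-update times (ms) observed for one user's session.
samples = [80, 85, 90, 95, 100, 110, 120, 450, 90, 85]
print(latency_summary(samples))  # {'p50': 90, 'p95': 450}
```

The median here (90 ms) looks fine, but the single 450 ms response dominates the 95th percentile, and that is exactly the kind of signal a commodity resource dashboard would never surface.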
Solution Evaluation

Recognizing that there is no “perfect” answer to this problem, and therefore no one product that fully and completely solves it, here are some next-generation solutions that can help you make a good dent in the issues and further the penetration of desktop virtualization and presentation virtualization solutions into your environment.

Each product is summarized below by supported platforms and data collection method (the original comparison table also charted each product’s support for application response time and network latency measurement):

  • Citrix EdgeSight – Supported platforms: Citrix XenApp only. Data collection method: agents on XenApp servers and Windows client devices.
  • Xangati – Supported platforms: VMware View, Citrix XenApp/XenDesktop. Data collection method: NetFlow data from virtual and physical switches.
  • ExtraHop – Supported platforms: every vendor with a TCP/IP-based core transport. Data collection method: physical and virtual appliances on physical and virtual networks and mirror/span ports.
  • Liquidware Labs Stratusphere – Supported platforms: VMware View, Citrix XenApp/XenDesktop. Data collection method: Windows/Linux agents on clients, virtual appliance on the VMware vSwitch.
  • Lakeside Software – Supported platforms: VMware View, Citrix XenApp/XenDesktop. Data collection method: agents on XenApp servers and Windows client devices.

Summary

Ovum’s research found that desktop virtualization currently represents approximately 15% of the business PC market. However, this figure is dominated by the Presentation Virtualization model (12%), which has typically been used in call-centre-type environments for the last 10 years.

If PV/terminal services are excluded, the next generation of solutions aimed at CIOs, from the likes of Citrix, Quest and VMware, holds less than 3% of the market, showing that many CIOs are holding back from taking the plunge.

Citrix has been in the business of centralizing the execution of end user applications in back-end data centers since WinFrame was launched in 1995. Two things have always held back further penetration of the centralized approach to end user computing. The first was the inability of the user to work offline and to fully customize their environment. The second was that tools never emerged for this method of delivering functionality to end users that could measure and assure application response time and end user experience.

Desktop Virtualization has created new hope for broadening the appeal of the centralized approach, but it now falls prey to the exact issues that have always held back Server Based Computing: delivering and maintaining a ‘personal’ service has been difficult in a shared environment. Desktop Virtualization could push past 15% penetration of enterprise desktops, given that technologies are now available to address offline working (client hypervisors such as MokaFive and Virtual Computer’s NxTop) and user-driven customization (AppSense, RES, Scense, TriCerat).

However, unless the ability arises to assure application performance and end user experience for actual end users, it is unlikely that desktop virtualization will hit the heady targets other analysts predict beyond 2015. Moreover, the move from a distributed desktop model requires not only a technology shift, but also a service mindset change. Defining a strategy centered on the user is the first step many should take, and it will involve understanding performance from the user’s point of view. The performance management bar is being raised in particular by the fact that tools now exist that allow a level of application performance assurance for applications delivered to end users via every method except presentation virtualization and desktop virtualization.

Bernd Harzog (332 Posts)

Bernd Harzog is the Analyst at The Virtualization Practice for Performance and Capacity Management and IT as a Service (Private Cloud).

Bernd is also the CEO and founder of APM Experts a company that provides strategic marketing services to vendors in the virtualization performance management, and application performance management markets.

Prior to these two companies, Bernd was the CEO of RTO Software, the VP Products at Netuitive, a General Manager at Xcellenet, and Research Director for Systems Software at Gartner Group. Bernd has an MBA in Marketing from the University of Chicago.


5 comments for “Performance Management for Desktop Virtualization (VDI) and Presentation Virtualization (SBC)”

  1. Christoph Wegener
    August 2, 2011 at 5:24 AM

    Hi Bernd,

    I see there is a red ‘X’ for Citrix EdgeSight in the ‘Application Response Time’ column. What about the Active Application Monitoring feature of EdgeSight? Doesn’t that provide application response time values, even if the response times are measured for synthetic transactions?
    Also, EdgeSight tracks response times of real user applications running on XenApp servers and generates alerts when an application takes too long to respond. I believe the EdgeSight for Endpoints agent might have this feature as well, but I’m not 100% sure.
    Also, EdgeSight for Endpoints licenses can be used to deploy agents on your backend servers (AD DS, SQL, Exchange, etc.)

  2. Bharzog
    August 2, 2011 at 7:53 AM

    Hi Chris,

    Just the fact that it involves synthetic transactions is enough to make it a red X. I do not go into it in this article, but there are many other articles on the site where I explain (yes, it is my opinion) that synthetic transactions are a worthless way to measure application response time in production on an ongoing basis. The short version of why is that with synthetic transactions you cannot cover all of the (sometimes stupid) things that users do in an application, and you have to put effort into measuring each application, and each version of each application, individually. This makes synthetic transactions useful for making sure that a small number of things work before the users show up for the day, but useless for monitoring a suite of applications that is constantly changing in production.

    The “takes too long to respond” feature is really just looking at an unresponsive process (when a Windows app becomes unresponsive, you get a message in its window and ultimately a choice as to whether to kill it or not). All EdgeSight is doing is hooking this feature of Windows and capturing applications in this state. This has nothing to do with application response time; it is really an application availability feature.

  3. March 4, 2013 at 6:39 AM

    Hi Bernd
    I find your article very interesting and very valuable to me as a Service Delivery Manager.
    However I have a few questions to some of the products you mention in the article.
    But first let me try to explain what my challenge is.
    Today we run EdgeSight and must also conclude that it has not been able to measure application response times.
    Management are very interested in knowing the average end-user response times for all our users that run Citrix/Desktop on XenApp Servers.

    We currently have 120 XenApp servers and around 3000 users worldwide that do their daily business in Citrix.
    All of our applications are published through XenApp, and therefore we are only able to measure the ICA latency from the XenApp server to the user.
    But now we are searching for a tool that is able to measure the back-end response times from the back end (server or DB) toward the XenApp servers.
    We have been looking at ExtraHop and it seems very interesting, but could you please explain whether Xangati or Liquidware Labs Stratusphere could also do the job?

  4. Bharzog
    June 8, 2014 at 11:40 AM

    Hi Shannon,

    I went and read your post on your site. I think you did a great job of laying out the challenges. My post took the position that adding an application delivery layer to the architecture created such a unique problem that a unique performance management solution was called for. Frankly, I think that solutions that can measure application response time across that entire stack of virtualized architectures are essential.
