CIOs see selecting the right technology provider for their desktop virtualization strategy as a “significant risk”, according to research firm Ovum. Ovum found that simplifying the management of desktops to reduce costs and increasing business agility were the top two reasons for implementing desktop virtualization. However, an often-overlooked aspect is the need to shift thinking from a device-centric perspective to a user-centric one.
A user-centric view can in part be delivered by considering adjacent solutions from the likes of AppSense, RES and TriCerat. Yet one of the things that continues to crop up when people discuss moving Desktop Virtualization (VDI) projects out of pilot and into production is how to measure and ensure that users of applications delivered via virtualized desktops are experiencing acceptable application performance. While this is a “new” problem for those coming to centralized desktop services for the first time through Desktop Virtualization (Citrix XenDesktop, Quest vWorkspace, VMware View), it is an “old” problem for veterans of presentation virtualization (Citrix XenApp, Microsoft Remote Desktop Services).
Selecting the right technology provider is quite rightly considered “a significant risk”: a move from a distributed service to a centralized one is a fundamental change in technology and service provision. How the performance of that service is measured is a key part of understanding when that service is delivering as expected, and whether that service should be kept in-house or moved to an external provider.
The first question that we have to ask is: is the old problem of measuring application performance and end user experience for Citrix XenApp (MetaFrame/Presentation Server) and Microsoft Remote Desktop Services/Terminal Services related to the new problem of measuring application performance and end user experience for desktop virtualization solutions from vendors such as Citrix, Ericom, Quest and VMware?
While Desktop Virtualization and Presentation Virtualization differ in many ways, they share the following core elements:
- The user’s application(s) no longer execute on their local desktop workstation or laptop. Rather, they run either on a shared server (in the case of Presentation Virtualization) or in a dedicated copy of a desktop operating system (in the case of VDI) running in the back end data center.
- The user interface of the application is not presented locally to the end user’s device by the application. The application presents its user interface to what it thinks is the device driver for the end user’s display. This device driver is in fact running not on the end user’s device, but rather on either a back end server shared by many users or on a back end instance of an operating system dedicated to a user.
- The actual user interface to the application is remoted from the back end server or desktop OS across a network to the end user. This is done via a protocol that captures what is changing on the session’s screen and sends just those changes over the wire to the end user’s device (this is a dramatic oversimplification of what these remoting protocols do, but as a concept it is correct).
- The presence of the network in between users and their applications is the key technical challenge, and it creates many of the application performance and user experience problems with these approaches. Instead of a simple application like Microsoft Word presenting its user interface to a local display driver on a workstation, which in turn puts that interface on a monitor, a TCP/IP network is inserted between the user interface of the application and the actual device the end user is working on.
- The protocols that are used to remote the user interface to the end user across Presentation Virtualization and Desktop Virtualization are largely the same. Citrix ICA/HDX is used for both. Microsoft RDP is used for both. The only reason that VMware PCoIP is not used for both is that VMware does not have a Presentation Virtualization solution to offer its customers.
- You can’t “have your cake and eat it too”. IT wants everything to be standardized (to maximize their productivity) and users want to be able to customize everything and install their own applications (to maximize their productivity). This problem relates to performance management in that an infinitely variable centralized environment is not likely to be any easier to performance manage than infinitely variable fat client desktops and laptops.
- The services only work when the user is connected to the server that is serving their application sessions out to them. Absent a reliable, low latency, and fast network connection between the user and the centralized desktop hosting their applications and data, these solutions can experience performance issues.
- It is not uncommon for users to take any performance problem (whether real or merely perceived) and blame either the Presentation Virtualization infrastructure or the Desktop Virtualization infrastructure. This puts the IT team that engineers and supports these environments in a “guilty until proven innocent” position and creates a problem that has to be solved with an appropriate performance management solution. This can impact the business in terms of wasted effort and resources on the part of the IT team, and lost productivity for the user community.
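The “send just the changes over the wire” idea behind these remoting protocols can be illustrated with a toy sketch. This is an illustration of the concept only, not how ICA/HDX, RDP, or PCoIP are actually implemented; the tile size and frame representation are hypothetical:

```python
# Toy illustration of change-only screen remoting: the framebuffer is
# divided into tiles, and only tiles that differ from the previous frame
# are "sent" over the wire. Real protocols add caching, compression,
# and codecs, but the core concept is the same.

TILE = 4  # tile edge length in pixels (hypothetical)

def changed_tiles(prev, curr):
    """Return top-left coordinates of tiles that differ between frames.

    Frames are 2D lists of pixel values with identical dimensions.
    """
    rows, cols = len(curr), len(curr[0])
    dirty = []
    for ty in range(0, rows, TILE):
        for tx in range(0, cols, TILE):
            tile_prev = [row[tx:tx + TILE] for row in prev[ty:ty + TILE]]
            tile_curr = [row[tx:tx + TILE] for row in curr[ty:ty + TILE]]
            if tile_prev != tile_curr:
                dirty.append((ty, tx))
    return dirty

# An 8x8 "screen" where only one pixel changes: just one tile is resent.
frame1 = [[0] * 8 for _ in range(8)]
frame2 = [row[:] for row in frame1]
frame2[5][6] = 255  # the user types a character, one glyph repaints

print(changed_tiles(frame1, frame2))  # → [(4, 4)], the tile containing (5, 6)
```

When the user types one character, only a tiny fraction of the screen needs to cross the network, which is why these protocols can be so bandwidth-efficient, and also why their traffic reveals so little about the application transactions underneath.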
The bottom line is that if your user community cannot live with a standardized set of applications that can only be accessed when connectivity is present, Desktop Virtualization and Presentation Virtualization solutions will struggle to deliver an effective operational environment. While these solutions have some important technical differences, they largely address the same use case, use the same basic approach to running applications, and therefore represent very similar performance management challenges.
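The effect of inserting a network between the user and the application can be made concrete with a back-of-envelope model. All of the figures below are illustrative assumptions, not measurements of any particular protocol:

```python
def perceived_response_ms(server_ms, round_trips, rtt_ms):
    """Rough model of what the user perceives for one interaction:
    time spent doing the work on the back end, plus every network
    round trip the remoting protocol needs to deliver the resulting
    screen updates back to the user's device.
    """
    return server_ms + round_trips * rtt_ms

# Same back-end work (50 ms), same protocol chattiness (3 round trips),
# very different user experience depending on the link:
lan = perceived_response_ms(50, 3, 1)   # 1 ms RTT on a LAN
wan = perceived_response_ms(50, 3, 80)  # 80 ms RTT on a long WAN link

print(lan, wan)  # 53 vs 290 ms: the network, not the server, dominates
```

The point of the model is that a centralized desktop can be perfectly healthy by every server-side resource metric while the user still experiences a sluggish application, which is exactly why server-side monitoring alone cannot answer the end user experience question.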
Recent Dynamics and Changes
While Microsoft Remote Desktop Services (formerly Terminal Services), and solutions built upon it (such as Citrix XenApp) have been around for some time, Desktop Virtualization (VDI) is the new kid on the block. Desktop Virtualization has introduced an important new set of dynamics and requirements into the mix:
- With Desktop Virtualization every user gets their own virtualized Desktop OS with their own set of applications running on that OS. This is different from Presentation Virtualization where there is a unique instance of an application for each user but the underlying OS is a shared server OS, shared by all of the applications and all of the users.
- The one-to-one mapping of users to applications and operating system instances has given rise to the concept of Desktop as a Service, implemented either by forward thinking IT departments, or by cloud providers like Desktone.
- The benefits of centralizing servers have led to an effort to get the same benefits from the centralization of desktops.
- The desire to apply the benefits of server centralization to the centralization of desktops has put tremendous pressure on both internal and external service providers to deliver a “Desktop as a Service” experience that is acceptable to end users, and that can be aggressively rolled out to capture a greater slice of the business PC market.
As mentioned above, both Desktop Virtualization and Presentation Virtualization replace relatively simple environments (a PC with local hard disk, memory, CPU, and Operating System and applications), with a very complex shared back end infrastructure. We’ve discussed the Desktop Virtualization Iceberg before of course. Yet if we look at a service as a part of a whole there is even greater complexity, which is captured (in part) by the diagrams below:
Does Citrix EdgeSight Address These Issues or Not?
Citrix has been developing and marketing its EdgeSight product line with the intent of addressing these issues for quite some time. Ignoring the version of EdgeSight for pre-production testing and sizing, there are three editions of EdgeSight:
- EdgeSight for Endpoints – Continuous and comprehensive visibility into application performance at the desktop, enabling IT professionals to identify issues before business is disrupted and restore service in the shortest time possible.
- EdgeSight for XenApp – Session level application and system performance monitoring for Citrix XenApp infrastructures provides real time performance information for proactive problem solving and infrastructure optimization.
- EdgeSight for NetScaler – Browser level web application performance monitoring, identifying performance issues for users at any location and providing critical visibility into problem origin.
- EdgeSight for Endpoints relies upon a client side agent that is Windows specific. That agent does a pretty good job of characterizing application performance from a resource utilization perspective, but it does nothing to help you understand application response times. Therefore even if your end point devices are Windows PCs (instead of thin client terminals, smartphones, or tablets), EdgeSight for Endpoints provides you with little real-time visibility into application response time or end user experience.
- EdgeSight for XenApp relies upon an agent that runs on the XenApp server(s). This agent collects a deep and comprehensive set of statistics about the resource utilization of the XenApp server. The only true “performance” statistic that EdgeSight for XenApp collects is how long logons take, which is one critical Key Performance Indicator for a XenApp farm, but one that says nothing about application performance or end user experience for XenApp-delivered applications.
- EdgeSight for NetScaler appears to be the only part of the product line that can bring true application performance information (response times) to the table. Since the NetScaler lives on the edge of the web server farm, it sees requests made to URLs living on web servers and the responses the application system returns to those requests. Since this is based upon HTTP instrumentation, it is obviously limited to web applications. While this is a valuable capability, it is not at all unique, and similar (and better) capabilities exist in the APM solutions from BMC (Coradiant), Compuware (dynaTrace), Quest (Foglight), New Relic, AppDynamics, BlueStripe, ExtraHop, and AppFirst.
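Even the one real performance number that a XenApp agent does give you, logon duration, only becomes useful once it is summarized into a KPI that can drive decisions. A minimal sketch of that summarization step; the sample durations and the 30-second “slow” threshold are hypothetical:

```python
# Sketch: turning raw logon durations into an actionable farm KPI.
# The data below is made up for illustration; a real feed would come
# from whatever agent or event log records session logons.

def logon_kpis(durations_s, slow_threshold_s=30.0):
    """Summarize logon times: median, approximate 95th percentile,
    and percentage of logons exceeding the slow threshold."""
    ordered = sorted(durations_s)
    n = len(ordered)
    median = ordered[n // 2]
    p95 = ordered[min(n - 1, int(0.95 * n))]
    pct_slow = 100.0 * sum(1 for d in ordered if d > slow_threshold_s) / n
    return {"median_s": median, "p95_s": p95, "pct_slow": pct_slow}

sample = [12.0, 14.5, 13.2, 45.0, 11.8, 16.3, 52.1, 12.9, 13.7, 15.0]
print(logon_kpis(sample))
```

A median of around 14 seconds with a 95th percentile near 52 seconds tells a very different story than a single average would, which is the difference between commodity reporting and something an administrator can actually act on.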
Lest this be perceived as unduly harsh upon Citrix, it must also be pointed out that none of the other vendors of a Presentation Virtualization or Desktop Virtualization application delivery platform offer a solution that helps solve this problem. This is true for Microsoft, Quest, Red Hat and VMware as well as emerging vendors like Ericom and Virtual Bridges. Solving this problem is therefore left to the vendors who specialize in addressing next generation performance management issues, not vendors of platform solutions.
Requirements for Applications Performance and End User Experience Management in Presentation Services and Desktop Virtualization Environments
- Performance metrics need to be focused on the user experience. Performance is less about the CPU/memory of the back-end desktop host and more about the response time that applications delivered over PV or VDI infrastructures provide to the end users of those applications, or, viewed from the users’ perspective, the actual experience of the end user. Note that the end user neither knows nor cares how many resources are consumed between their click on the screen and the answer; the user just cares that it happens quickly and consistently.
- We need to recognize that it is a deliberate feature of the remoting protocols (ICA, RDP, PCoIP, SPICE, et al.) that the request/response notion of a transaction that exists in HTTP is purposely obliterated. Remoting protocols do not send “real” data over the wire (which can also be a benefit: it is how they meet the requirements for data security in sensitive situations like health care and financial services). So the difficulty in solving the application performance and end user experience problem stems from the fact that Desktop Virtualization and Presentation Virtualization solutions have been designed to solve a different problem, and have created this problem as a result.
- The end-to-end complexity of these environments needs to be considered. The bottom line is that when you deploy either a Presentation Virtualization or VDI solution you are replacing a very simple system (a PC with a hard disk, a CPU and some memory) with a very complex back end infrastructure often consisting of shared storage, server farms, layers of supporting infrastructure servers, and finally a network over which it all has to be delivered to the end user.
- Interpretation of data to allow decisions to be made quickly is key. Commodity data will equal commodity results. What this means is that any solution that just pulls data from WMI, the Citrix APIs, SNMP, SMI-S, or any other public interface is just going to regurgitate the obvious in a pretty report or dashboard that will do nothing to actually solve the problem. This is a hard problem, and it will only be meaningfully attacked by vendors that go out of their way to get real response time or latency data that is not available via some standard and relatively useless API.
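Because the protocol hides the request/response boundary, tools that measure real user experience in these environments generally fall back on the synthetic-transaction approach: inject an input event, then time how long until the expected region of the screen changes. The sketch below simulates that idea; `send_keystroke` and `grab_region` are hypothetical stand-ins for whatever protocol- or OS-specific hooks a real tool would use:

```python
# Sketch of click-to-pixel response time measurement: inject input,
# then poll the screen region until it changes. The session here is
# simulated; real tools hook the display pipeline or the protocol.
import time

def measure_response_ms(send_keystroke, grab_region,
                        timeout_s=5.0, poll_s=0.01):
    """Time from injected input to the first observed screen change."""
    before = grab_region()
    start = time.monotonic()
    send_keystroke()
    while time.monotonic() - start < timeout_s:
        if grab_region() != before:
            return (time.monotonic() - start) * 1000.0
        time.sleep(poll_s)
    return None  # no change observed: flag as a failed transaction

# Simulated session: the "screen" updates ~40 ms after the keystroke.
state = {"sent_at": None}

def send_keystroke():
    state["sent_at"] = time.monotonic()

def grab_region():
    if state["sent_at"] is None:
        return "idle"
    return "updated" if time.monotonic() - state["sent_at"] > 0.04 else "idle"

elapsed = measure_response_ms(send_keystroke, grab_region)
print(round(elapsed))  # a little over 40 ms in this simulation
```

Note that this measures what the user actually sees rather than what any server-side counter reports, which is precisely the kind of non-commodity data the bullet above argues for.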
Recognizing that there is no “perfect” answer to this problem and therefore no one product that fully and completely solves the problem, here are some next generation solutions that can help you make a good dent in the issues and can help you further the penetration of desktop virtualization and presentation virtualization solutions into your environment.
Ovum’s research found that desktop virtualization currently represents approximately 15% of the business PC market. However, this figure is dominated by the Presentation Virtualization model (12%), typically used in call centre-type environments, and has been for the last 10 years.
If PV/terminal services are excluded, the next generation of solutions aimed at CIOs, from the likes of Citrix, Quest and VMware, holds less than 3% of the market, showing that many CIOs are holding back from taking the plunge.
Citrix has been in the business of centralizing the execution of end user applications in back end data centers since WinFrame was launched in 1995. Two things have always held back further penetration of the centralized approach to end user computing. The first was the inability of the user to work offline and to fully customize their environment. The second was that tools for measuring and assuring application response time and end user experience never came into being for this method of delivering functionality to end users.
Desktop Virtualization has created new hope for broadening the appeal of the centralized approach, but now falls prey to the exact issues that have always held back Server Based Computing: delivering and maintaining a ‘personal’ service has been difficult in a shared environment. Desktop Virtualization could push past 15% penetration of enterprise desktops now that technologies are available to address offline working (client hypervisors such as MokaFive and Virtual Computer’s NxTop) and user driven customization (AppSense, RES, Scense, TriCerat).
However, unless the ability arises to assure application performance and end user experience for the actual end users, it is unlikely desktop virtualization will hit the heady targets other analysts predict beyond 2015. Moreover, the move away from a distributed desktop model needs not only a technology shift, but a service mindset change. Defining a strategy centered on the user is the first step many should take, and this will involve understanding performance from a user’s point of view. The performance management bar is being raised in particular by the fact that tools now exist that allow a level of application performance assurance for applications delivered to end users via every method except presentation virtualization and desktop virtualization.