Virtualising Citrix XenApp is a Waste of Time and Effort

What is the point in virtualizing your Citrix XenApp server? Consider that the goal of server virtualization is twofold: to make best use of idle computing resources, and to provide standardization and automation so as to reduce the time to build and deliver new servers, or to recover and restore broken ones. Is that a desirable and achievable goal for a Presentation Virtualization (PV) server such as Citrix XenApp? Of course: but it’s likely done already. Why add another expensive layer of software?

PV’s benefit is its capacity for high user density and its ease of management. With a PV server, users share the operating system environment, but each has their own independent session. A PV server could support 50%-100% more sessions than a hosted desktop solution: can that be improved by an underlying virtualization layer?
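As a back-of-the-envelope illustration of that density claim (all figures below are hypothetical for the sake of the arithmetic, not vendor benchmarks), the session gain works out as follows:

```python
# Illustrative density comparison: shared PV sessions vs. one-VM-per-user
# hosted desktops. Both figures are assumptions, not measured results.
hosted_desktops_per_server = 40   # assumed hosted desktop density on a host
pv_density_gain = 0.75            # mid-point of the 50%-100% range above

pv_sessions_per_server = int(hosted_desktops_per_server * (1 + pv_density_gain))
print(f"Hosted desktops: {hosted_desktops_per_server}, "
      f"PV sessions: {pv_sessions_per_server}")  # 40 vs. 70
```

The question the rest of the article asks is whether a hypervisor underneath adds anything to that already-favourable ratio.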

PV servers, such as Citrix XenApp, are often cited as being ‘unvirtualizable’. They typically run with high utilization (sometimes too high) of CPU and memory resources, possibly even disk. As PV farms often need to support a high number of users, core server builds tend to be standardized, and application deployment to those servers automated. If you already have a standardized and automated environment, and already have high hardware utilization – why go to the bother and cost of adding in a service that essentially does the same thing?

What could virtualization of your Citrix XenApp environment possibly do for you?

…well Reg, there’s Hardware Abstraction for a start…
Virtualization’s goal of standardization and automation is achieved by abstracting the server operating system from the device hardware. Show me a XenApp farm where all the servers are exactly the same, and I’ll show you a very new (or very small) XenApp farm: or an enviably persuasive IT manager. As a farm changes over time it is not unusual to have a range of server hardware in use. This leads to the problem of managing and maintaining builds for different hardware platforms, different drivers, different patching requirements. A new server type necessitates a new server image. With hardware abstraction, the build management process is simpler: it can be one build across devices. The introduction of new hardware does not require major changes, and isn’t another image to be added to the image management system.

…and availability, don’t forget availability ..

A server farm has a number of servers in it. Redundancy for a PV server is provided by having more than one server configured with user applications. If a server fails, users’ sessions will be terminated – but they can connect again and carry on working. While the users can ‘carry on’, they have been disrupted. With virtualization, it is possible to move a server instance to different hardware in case of failure, or to isolate a server instance that is behaving erratically. It is not possible to move individual user sessions between servers, but you do have the ability to move servers hosting multiple users.

…there’s consolidation as well..
Citrix XenApp farm design (as with Presentation Server/MetaFrame before it) often called for servers to be placed into “silos”. This was for a number of reasons:

1. Allowed resource-demanding applications to be isolated from the main ‘standard’ desktop application delivery servers.
2. Allowed applications that may conflict when installed on the same server (e.g. different versions of Microsoft Office, or Internet Explorer) to be hosted.
3. Allowed non-standard applications to be installed and configured on servers outside of the standard build.
4. Allowed servers to be managed by different groups (e.g. third-party to manage an application, or to have development and test environments).

Silos lead to servers being under-utilized. There are ways to reduce the number of servers that need to be put into silos: application virtualization such as Citrix Application Streaming, Microsoft App-V or Symantec’s Workspace Virtualization, or performance management tools such as AppSense’s Performance Manager or triCerat’s Simplify Stability. And to this you can add virtualisation – consolidating specific-use PV servers together to reduce the need for ever-expanding hardware.
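To see why consolidating silos is attractive, here is a minimal sketch of the arithmetic, assuming a hypothetical silo inventory and utilization figures (the 70% target per host is a common rule of thumb for leaving headroom, not a Citrix recommendation):

```python
import math

# Hypothetical silo inventory: silo name -> (server count, average utilization)
silos = {
    "standard desktop apps": (10, 0.60),
    "Office version silo":   (4, 0.20),
    "non-standard app silo": (3, 0.15),
    "dev/test silo":         (2, 0.10),
}

# Useful work, expressed in fully-utilised server equivalents
work = sum(count * util for count, util in silos.values())

# Virtualization hosts needed, targeting 70% utilization per host
target_util = 0.70
hosts_needed = math.ceil(work / target_util)

total_servers = sum(count for count, _ in silos.values())
print(f"{total_servers} siloed servers -> {hosts_needed} virtualization hosts")
```

With these assumed figures, nineteen lightly-loaded siloed servers collapse to eleven hosts while keeping each silo’s isolation at the VM boundary.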

.. and you’ve got performance to think about

“Performance?!”, you may splutter. Adding a hypervisor reduces capacity for a server running Terminal Services, doesn’t it? You could argue that native hardware interaction is going to deliver the best performance: and you would be right. However, while the difference in performance used to be large for virtualised Terminal Services servers, this has improved. Project VRC’s analysis of Terminal Services workloads running on the latest generation of hardware and hypervisors shows that while a bare-metal server still has the edge, the performance of virtualised servers is comparable as long as resources aren’t over-committed on the host.

.. and managing capacity .. remember what it was like when we maxed out on 30 users per server

Typically, Terminal Server farms have expanded out because of memory limitations. With the introduction of Windows 2008 R2’s x64 environment such limitations are no longer a concern. Yet there are still a high number of PV servers running x32 Windows 2003. With Windows 2003 there is an opportunity to increase user density and capacity by upgrading to Enterprise Edition and increasing the physical memory. But this is an expensive option for each and every server: yet perhaps more practical than adding additional servers of the same specification, or migrating to x64. An additional option is to virtualise those servers: new hardware can be used to consolidate older servers. Running Windows 2003 Standard x32 Terminal Server workloads without virtualization no longer makes sense.
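A rough memory-based sketch of that consolidation (the host size and hypervisor overhead are assumptions for illustration; validate any real design against workload monitoring):

```python
# Rough memory-based consolidation estimate for x32 Windows 2003 Terminal
# Servers. Figures are illustrative assumptions, not a sizing guide.
guest_ram_gb = 4            # 32-bit Windows 2003 Standard addresses at most 4 GB
host_ram_gb = 64            # assumed new x64 virtualization host
hypervisor_overhead_gb = 4  # assumed reservation for the hypervisor itself

guests_per_host = (host_ram_gb - hypervisor_overhead_gb) // guest_ram_gb
print(f"~{guests_per_host} consolidated x32 guests per host")
```

Even on memory alone, one modern x64 host absorbs a rack’s worth of 4 GB-capped x32 servers – which is the point: the hardware ceiling that forced the farm to sprawl is gone.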

A Waste of Time and Effort?
In spite of the hardware abstraction allowing easier image management and OS upgrades; in spite of options for higher availability and faster recovery, even fail-over; in spite of enabling silo consolidation; in spite of easier management of user capacity on servers – especially for x32 environments … what could virtualization of your Presentation Virtualization environment possibly do for you?

Andrew Wood

Andrew is a Director of Gilwood CS Ltd, based in the North East of England, which specialises in delivering and optimising server and application virtualisation solutions. With 12 years of experience in developing architectures that deliver server based computing implementations from small-medium size business to global enterprise solutions, his role involves examining emerging technology trends, vendor strategies, development and integration issues, and management best practices.


6 Responses to Virtualising Citrix XenApp is a Waste of Time and Effort

  1. Mr. X
    November 8, 2010 at 11:34 AM

    I strongly recommend reading the “Virtual Reality Check – Phase II version 2.0” white paper you mention at http://www.projectvrc.nl/index.php?option=com_docman&task=cat_view&gid=39&Itemid= (free registration required). It points out a number of things regarding the virtualization of XenApp:

    - It definitely makes sense to virtualize Terminal Server/XenApp, but you will gain more by virtualizing the 32-bit versions than the 64-bit versions.

    - The Intel Nehalem processor architecture is a game-changer regarding the processing power it brings to virtualization.

    - On an Intel Nehalem host with 16 logical processors the optimal configuration is now 4 VMs with 4 vCPUs, instead of 8 VMs with 2 vCPUs.

    I definitely recommend reading the paper. vSphere, XenServer, and Hyper-V are all discussed.

  2. Mark
    November 19, 2010 at 5:59 AM

    ” it is possible to move a server instance to different hardware in case of failure”

    The ability to move guests around in the event of failures is often touted as an advantage of virtualisation. However in reality this is not always the case. If a host has gone down, how do you get the guests moved off? They’ve gone down with the host.

  3. November 19, 2010 at 7:28 AM

    Mark,

    Sure, it’s not magic – if you’ve suffered a catastrophic failure and all the lights are out, you’re in the same boat as with a physical server. There are failures – be it with components or process usage – where being virtualised affords you the ability to move instances. But you’ve a valid point.

  4. November 19, 2010 at 8:12 AM

    Mark,

    The ability to move guests around requires the use of shared storage. Because I am using shared storage amongst a cluster of my virtualization hosts, other hosts have access to the same storage. There is no need to ‘move’ the guests off the shared storage but to automatically boot the workloads on a new host (High Availability solutions). If you can ‘predict’ a host failure due to hardware monitoring then you can also vMotion/LiveMigrate the VMs before the failure.

    It is quite easy to “get the guests moved off”.

    Edward L. Haletky

  5. January 8, 2013 at 4:39 AM

    Hi Mark,

    Stumbled upon this article while searching for something else. I think you’re missing one additional point when going for a PV server: you also add tons of flexibility. If you’re working in an organization where use cases are shifting, implying a different use of resources, a virtualization layer also provides a lot of flexibility to vary the use of your resources, or to scale up/down when needed. For instance, you could assign blades to XenApp, but quickly repurpose them for VDI or something else. Project VRC has some nice takes on this perspective indeed; maybe a good time to rethink the article?

  6. January 8, 2013 at 10:47 AM

    Michael,

    Thanks for taking the time to contribute. You are indeed correct – virtualisation gives the option of flexibility (e.g. moving underutilised servers to a common host). Obviously you’d want some control in there, as reference architectures typically advise against sharing RDS/TS loads: but that in itself is controllable. XenApp has expanded its features and services since 2010 in ways that can make better use of this flexibility, and likely will do so even more as part of Avalon.

    Indeed, Mark discusses limitations here around shared storage, which used to be true, but modern hypervisors can give live migration capabilities *without* shared storage.

    You’ve also (perhaps) the consideration of utilising cloud based services from amazon/azure/rackspace et al.

    2010.. yes indeed maybe time for an update :)

