Open Sorcerer, XenServer Ambassador—Tim Mackey Speaks from the Citrix Mother Ship

Recently I had the pleasure of speaking with Tim Mackey on the User Experience Podcast, Episode 8.

Tim has been with Citrix for ten years and has worked exclusively with XenServer since 2009. Our conversation revolves around XenServer's density capabilities. There has been an ongoing conflict between the density stories Citrix and VMware tell. The way Citrix defines it centers on how many VMs Citrix “supports” vs. how much VM density can actually be achieved; these are two very separate things in the Citrite dictionary. Tim goes into the actual numbers in our conversation, so if you are interested in hearing it straight from the horse’s mouth, I encourage you to listen to the podcast.

Most hypervisor companies use a tool called Login VSI to automate their testing. It works along the same lines as WinRunner and LoadRunner, which many Citrix admins are used to for XenApp automated load testing.

XenServer has event channels associated with every device: vDisk, mounted CD-ROM drive, and so on. Each channel reveals the limits and bottlenecks around a device, what could be done to limit its drag on CPU usage, and how those issues might be mitigated.

In our talk, Tim and I walk through the performance benchmarks for XenServer 6.2. We discuss clustering in terms of data centers vs. cloud and consider what is happening around consumption, I/O, resource pools, and more. Citrix seems chiefly interested in template management in multi-tenant scenarios. The message seems clear, at least to me: for Citrix, XenServer is never the limiting factor with regard to density; the limiting factor is the hardware XenServer runs on.

Provisioning servers were brought into the conversation in the context of migrating VMs from other hypervisors to XenServer. Supporting this migration, of course, forces an annoying degree of driver compatibility testing. In most senior Citrix engineers’ heads (or at least mine), it seems like a whole lot more trouble than just creating a new master VM in XenServer and moving on with your life (Tim, incidentally, agrees).

We then chat a little about the methods of delivering applications in the context of VDI, but that is not really germane to Tim Mackey’s focus.

Kewl Factor Alert
NVIDIA GRID Graphics on XenServer (as opposed to competitors)

What matters is how well the NVIDIA cards are supported by the specific driver; this is the big difference when using these cards in a XenServer environment vs. VMware or Hyper-V. The basic requirements for using these cards are:

  • XenDesktop 7.1
  • XenServer 6.2 Service Pack 1
  • NVIDIA GRID K1 or K2

Companies like Autodesk and other graphically intense application partners do not have to worry about how their applications will work with the NVIDIA GRID card in a XenServer environment, because XenServer is essentially removed from the equation. There is no Citrix driver in the path, so XenServer is effectively invisible thanks to its split driver model. XenServer includes the host driver through a unique channel; to the application, it looks like a straight NVIDIA driver, period. That is how Citrix has been able to achieve VM density in vGPU environments that others cannot.

To be clear, whether for server VMs or VDI VMs, you can assign the GPU to a single VM exclusively if you have the use case, or you can split the vGPU assignment across multiple VMs (which is typical in VDI scenarios). In other hypervisors, by contrast, the application does not interact with the NVIDIA driver but with a VMware or Hyper-V driver, vs. the direct pass-through you get with XenServer. NVIDIA hardware is not required on the endpoint; Tim gives a great example of what mobile capability actually looks like in a real-world scenario.

According to Tim, the future of XenServer is about defining new capabilities and growth markets, such as desktop virtualization, among others.

I asked what Citrix is doing around Desktop as a Service à la VMware’s Desktone positioning, and Tim mentioned Project Merlin, which is a DaaS for enterprises, and that is all I got on that subject—apparently it’s still a project in motion.

Tim is part of Citrix’s Open Source Business Office. He is heavily involved in the Xen Project hypervisor (for practical purposes, this is the same as Citrix’s core XenServer code, just out in an open source repository), OpenDaylight and Apache CloudStack (there are Apache committers inside Citrix), and the Xen Cloud Project, which exists to explore different ways XenServer can be used in open environments.

When asked why Citrix is not more open, Tim says that, like most things, “It’s a process.” The obvious point here is that if you download the bits from the repository, you cannot expect the support you would get if you downloaded them from Citrix with a support agreement in place. The upshot is that because the bits are identical, you can still buy support, regardless of where you get the source software.

There is a caveat, however: Citrix uploads nightly snapshots to the open source repository. Seriously, people: if you are a production shop, do not put nightly builds of code that has not been fully tested into production. Do you want to cause an outage? Production, by definition, does not include bleeding-edge code unless you’re asking to be walked off your job. However, if you want to put the nightly snapshots into your lab or test environments and work with them there, Citrix is working on a more streamlined way for you to submit your feedback.

You can follow Tim Mackey on Twitter @XenServerArmy. He also has a SlideShare account and a blog, where he releases scripts on a regular basis.

Turns out, Tim Mackey has become one of my favorite people to interview. I sincerely hope you enjoy our conversation and find it informative.

Posted in End User Computing, IT as a Service, SDDC & Hybrid Cloud