In March 2013, Citrix announced that GPU sharing was working and available for XenApp (multi-user/RDS). In December 2013, they announced it was available for XenDesktop (multi-OS/VDI). The lack of this capability had been a major barrier to adoption for many companies that need to deliver a high-end multimedia experience to their end users in order to gain acceptance.
The primary drivers for adoption are, and have been, centralization and security, or cloud. This also ties into Citrix’s mobile strategy, which I discussed with Chris Fleck, VP of Mobile Strategy and Alliances, in a previous article/podcast. The most important tie-in is that an actual GPU card is not required on the endpoint device, because the GPU lives on the server side and all of the heavy lifting is done in the data center. The endpoint needs only minimal resources, which can be found on virtually every device on Citrix’s compatibility list. That is because Citrix embeds codecs, among other things, in Citrix Receiver (the name of their client, at least for today), allowing the device to decompress the data delivered via their proprietary HDX protocol.
So I reached out to Derek Thorslund, Sr. Director of Product Management for the high-definition experience (HDX) at Citrix, and asked him if he would spend a few moments speaking with me about the vGPU progress, because I believe this news is not reverberating the way it should. This is a paradigm shift, folks. The future of server-based computing, and where it lives under the hood of cloud for business use in particular, is evolving to the next level, and it seems like everyone is sleeping through it.
The HDX team at Citrix is responsible for the rich application multimedia experience. They have been dealing with the challenge of multimedia since the beginning of Citrix back in 1988. In 2006, they started a project with Boeing, and ultimately NVIDIA, that became vGPU (GPU sharing). While NVIDIA will not be the only GPU manufacturer Citrix partners with, it is the first, and it offers features that are unique to it. One is GPU sharing for VDI workloads that goes directly to the NVIDIA card without any hand-off through an intermediary API (application programming interface). Another barrier Citrix has helped break down is that, historically, server manufacturers have been reluctant to change their form factor design (the server hardware design) to allow a video card to be installed. Recently, all of the Tier 1 manufacturers (HP, Dell, IBM, etc.) and some of the Tier 2 manufacturers have announced servers that will support the NVIDIA GRID K1 and K2 GPU cards.
This is a paradigm shift, because historically the only real solution for high-definition graphics was tied to HP Blade PCs or HP Workstation Blades, which required a one-to-one hardware connection: one entire blade per user. Blade PCs and Workstation Blades live in the data center, with GPU cards dedicated to the assigned user. The blades ran a Microsoft client OS, and the endpoint devices did as well, because of the lack of codec support in Linux. Citrix has now been able to address both the vGPU requirements and the codec requirements for Linux. This is a big deal, specifically because of the growth of Linux mobile devices in the ecosystem. It’s pretty common knowledge that Linux advances at Citrix have been stunted by their historically close relationship with Microsoft. Our conversation goes into much more depth; I just wanted to hit the high points here. Please take a listen to our conversation, Citrixgurls Virtualization EUC Podcast Episode 3.