Who or what is EUC? In an industry plagued by TLAs (three-letter acronyms), EUC, or end-user computing, is the new nomenclature for VDI, or virtual desktop infrastructure. This is not just the emperor’s new clothes: it is a redefinition of the paradigm, adopting a more inclusive view of the software, hardware, and processes that shore up the client side of corporate infrastructure.
Traditionally, VDI centered on just the infrastructure concerned with deploying virtual machines and the remote protocols utilized to access them. That definition has become tired and shortsighted in today’s brave new world of tablets, phablets, smartphones, and all the other modes of access.
EUC is much more inclusive in that it covers the gamut of access devices, methods of access, delivery platforms, and delivery methods, including traditional VDI, SBC (server-based computing), DaaS (Desktop as a Service), application virtualization, and user virtualization. It also covers MDM (mobile device management) and the ancillary tasks associated with allowing BYOD (bring your own device). BYOD used to mean “bring your own desktop,” but as we all know, the market moves on.
The term “EUC” is obviously a lot more inclusive in this brave new world of Martini computing (any device, anytime, anywhere), and rightly so. How did we get here, and what has prompted the name change?
The term “VDI” was first coined by VMware in 2005, and every year since has been heralded as the year of VDI, in which VDI has been expected to become the desktop delivery method of choice. However, when a solution can cost three to four times as much as an equivalent physical desktop solution, you are just not going to get it past the bean counters, no matter what the promised OPEX savings. (At least, not any more; it was possible prior to 2008, when money was free.)
But most analysts do not seem to understand one thing: the world is not Silicon Valley. Just because a market is forecast to be worth X dollars a year by year Y does not make it so. It is only smoke and mirrors. Early visions did not take into account the fact that a user’s access device is usually in service for several years. In my other role, as an IT consultant advising my clients on their strategies, I regularly visit sites where companies’ users are still running desktop devices on Windows XP, never mind Windows 7. And don’t forget that only last week (on November 13, 2015), Paris’s Orly Airport was closed due to the failure of a critical application running on the Windows 3.1 operating system, or to be more precise, MS-DOS with a Windows interface. This is the real world.

Since the 2008 financial crash, there has been a change in companies’ purchasing plans. They will sweat an asset for much longer than they did previously. The three-year purchase cycle has, on the whole, become five years; in some companies, I have seen a seven-year purchase cycle. This situation is commonplace, especially with desktop devices, once you venture outside the ivory towers of Silicon Valley and the Fortune 500. Windows 7 migration programs, let alone Windows 10, may not even be a twinkle in these companies’ eyes yet. Yet it is these same companies that are now starting to look seriously at EUC as a valid option for their next-generation method of access to corporate applications.
Advancements in storage technologies, like cheaper and more reliable flash devices and larger DRAM modules, have made server-side acceleration products, such as PernixData’s FVP, Atlantis Computing’s USX, and even VMware’s own Flash Read Cache, viable options for increasing the actual performance of virtual machines without the need to spend millions of dollars on state-of-the-art enterprise SAN arrays. New entrants into the storage arena have lowered the cost of entry to enterprise-level functionality, and companies like SimpliVity and Nutanix have lowered the bar further by reducing the complexity of the required infrastructure.
Desktop brokerage products have improved greatly, and the number of target use cases for remote desktops has increased. The technologies enabling these developments include virtual GPUs and the offloading of graphics-intensive tasks to dedicated GPU cards. Remoting protocols such as PCoIP (VMware) and HDX (Citrix) have also seen major advancements.
Thanks to these improvements, the cost of entry is not as prohibitive as it used to be, especially if a company redeploys its existing desktop devices as endpoints for its virtual desktops.
This does not even take into account the advantages of a centralized model with regard to data management and data locality, increased security resulting from not having data on mobile devices such as notebooks, and the like.
Further advances in desktop deployment methodologies have now made flexible device deployment possible. Flexible device deployment rests on a series of technologies and tools, ranging from remote desktop deployment systems (VMware linked clones, VMware App Volumes, Citrix PVS), to application virtualization (VMware ThinApp, Liquidware Labs FlexApp, Microsoft App-V), to user personalization and profile management (AppSense Environment Manager, RES Workspace Manager, Citrix UPM, Liquidware Labs ProfileUnity, VMware User Environment Manager). Application layering technologies, like Unidesk and even VMware’s Mirage product line, have added to the value proposition.
Now, I am not saying that 2016 is the year of the desktop, but I have most definitely seen an uptick in proofs of concept, and those POCs are actually moving into production. Further, those production environments are delivering true value to businesses and are being accepted by their users as better than the users’ previous desktop devices. It may finally be the year that the worm turns.