So, you’ve run through the application analysis and vendor engagement phases. Ideally, these two phases will have provided you with (a) pertinent information about your environment, and (b) the solution or solutions that may work best to deliver the applications in your environment. The next logical stage of the project is to move toward a PoC (proof of concept) and pilot phase.
People often use these two terms interchangeably, but I consider them to be quite different beasts. (Thanks are due to Andy Wood, a former analyst at The Virtualization Practice, for his joint BriForum presentation with Barry Coombs, which led me to consider this important delineation.) The differences between the two are outlined below.
Before starting any PoC or pilot, it is very important that the relevant application analysis phase has been done. I’ve seen many people jump into running what is essentially a full pilot without understanding just what it is they are trying to deliver. Delivering a small, common, and often simple subset of applications can be a false dawn, as users and management get excited about the potential of new infrastructure without seeing it tested against the challenging applications that exist in most, if not all, enterprises. I get the feeling I am becoming like the proverbial broken record recently, but I will say it again: application analysis is key.
The vendor engagement phase should also have been completed, and should have shown which sets of technologies (or combinations of sets) might work for your environment. The PoC (not the pilot) will decide which of these is most suitable, and this information can then be fed into the pilot. It’s up to you how many different technologies you pass into the PoC phase, but you will be driven by the output of the previous phase and also by factors such as budgetary constraints, procurement rules, and infrastructure capacity, among many others.
In the interests of keeping any PoC simple, I would generally pick no more than three distinct “solution groups” for this phase. By “solution group,” I mean the entire suite of technologies required to deliver the project—you can mix and match these as necessary. For instance, I worked on a project recently for which it became quite clear in the vendor engagement phase that the application virtualization requirements would be met by Numecent Application Jukebox. The PoC then became a case of assessing what other suitable technologies fit in around that. We ended up looking at App Jukebox + Horizon View, App Jukebox + XenApp, and App Jukebox + XenApp + AppSense. Each solution group varied in terms of the requirements it met, and the PoC was intended to discover just which of these represented the best value to the business in terms of requirements fulfilled versus cost and complexity. Once you’ve discovered which solution group or groups are the best (there may be more than one, depending on the exact goals you are working toward), these can then be fed into the pilot. This leads us nicely into discussing exactly what the functional differences between the PoC and pilot are.
PoCs are usually environments that function more as proving grounds for validating that the components you selected in the vendor engagement phase can actually work together. You could run PoCs in an old-fashioned lab or in segregated PoC environments, which I’ve seen referred to as “model offices,” “technology centers,” and a whole host of other names. You could even run them as some sort of roadshow if you’ve got many distributed offices. This will help ensure that you get the right validation from various areas of the business, although you will probably need to limit the number of users involved in the PoC. You don’t have to cover all of your applications in this phase, but I’d suggest you pick some of the business-critical ones, to demonstrate that the key applications will be served by the new solution you’re demonstrating. Depending on what comes out of the PoC, you may find that the technical design needs to be revised or otherwise revisited. The PoC will feed into the pilot, possibly narrowing down the field of technologies that you’re considering (although not necessarily!), and it will be removed once the pilot is initiated.
The pilot should be different from the PoC in that it should involve an end-to-end solution covering all of the components that make up the project. Delivered to a limited number of users right across the business, it should be done in stages and encompass extensive testing of the solution. Data from the application analysis phase should help you identify the users who will be most useful for participating in the pilot. Another interesting takeaway from Andy and Barry’s session was the mention of “technology champions”: individuals from each business unit who are engaged to actively participate in testing and “champion” the cause of the new solution into their particular business areas. I have to agree with this approach, because on many projects, particularly tech-savvy users can become vocal critics of any changes being made. As Henry Hill said in Goodfellas, “make ’em partners.” Get them on board as part of the pilot test group and allow them to feed their observations into the project from an early stage. If you can satisfy the most demanding users and applications from the start, then everything else should fall nicely into place.
You may need to extend the pilot (or the scope of it) if testing reveals issues that necessitate changes to the overall design. If the pilot fails, it should be decommissioned (I’ve seen failed pilots simply live on forever, creating vast swaths of abandoned zombie infrastructure); if it succeeds, it can be scaled up over a period of time to become the “live” environment.
PoCs and pilots, in my experience, fail mainly because people don’t select the right applications and users to participate in them. I’ve seen XenDesktop PoCs that looked fantastic because they were simply running Office, IE, and Adobe Reader, but as soon as heavier applications were introduced, the performance became unbearably poor. I’ve seen pilots that ticked every box and reported very few issues, but as soon as other, more demanding users came on board during live implementation, reports of problems to the help desk escalated rapidly.
It’s also very important to do proper monitoring and data collection during the PoC and pilot phases. This is often neglected because of a perception that monitoring is only required for live environments. But the whole intent of having a PoC and pilot is to produce a solution that, assuming all goes well, will one day become the new live environment. Why would you not monitor performance and measure the user experience from day one?
The PoC/pilot phase is vitally important, because this is the stage where real users start to interact with the solution that you’re intending to deliver. It’s where the vision meets reality, often accompanied by the feeling of crashing back down to earth. Make sure you do it properly, and run the most challenging applications and users through this phase to ensure you’re testing your solution to its limits. Don’t be afraid to revisit the design and extend the phase because of unforeseen issues. And be realistic about what you’re trying to do. Stick to your defined goals, and ensure that no single small problem is allowed to derail the project. We can never design the perfect solution, but if you approach the PoC/pilot from the perspective of “hardest first,” you may get a lot closer to it than if you go about it the opposite way.
Anatomy of a Desktop Virtualization Project #3: PoCs and Pilots, by James Rankin, July 8, 2015