Articles Tagged with Intel

DesktopVirtualization

Client Hypervisors: Intelligent Desktop Virtualization too clever for its own good?

In 2011, we asked if Client Hypervisors will drive the Next Generation Desktop. Yet other desktop virtualization industry experts, such as Ron Oglesby, decided the technology was a dead man walking, writing off Type 1 client hypervisors.

Fight? Fight? Fight?

While VMware moved away from client hypervisors, even they had to agree that an end-user compute device strategy must encompass more than VDI. Their Mirage technology can be considered desktop virtualization, but it is not a client hypervisor. Client hypervisor vendors such as Citrix (which subsumed Virtual Computer’s NxTop), MokaFive, Parallels, and Virtual Bridges have been joined by Zirtu. Organisations like WorldView look to innovate on desktop virtualization through containers rather than full virtualization.

Tablets. Touch-screen-capable laptops. Hybrid devices with detachable screens. The netbook might be dead, or it could just be resting. The presence of tablets has undeniably shaken the netbook market, but businesses still need powerful, capable laptops.

Bring Your Own Pencil aside, there is still a need to manage “stuff”: there are still large and small organisations that need to manage the delivery of IT, including the end device. The question remains: how are devices, and the all-important data and applications on them, managed? Hosted and session-based desktops have their place, but offline-capable device requirements will remain. Is Intelligent Desktop Virtualization the same as client hypervisors?

Read More

VirtualizationSecurity

Bromium unveils micro-virtualization trustworthy security vision

One year after announcing that he and XenSource co-founder Ian Pratt were leaving Citrix to launch Bromium with former Phoenix Technologies CTO Gaurav Banga, Simon Crosby was back at the GigaOM Structure conference in San Francisco today to unveil Bromium’s micro-virtualization technology, together with its plans to transform enterprise endpoint security. Despite the occasional blog post calling into question the security limitations of current desktop virtualization solutions, and despite today’s announcement of the Bromium Microvisor, Bromium has very little to do with desktop virtualization. Desktop virtualization, whether it be VDI, IDV, or anything in between, is a management technology: a means of getting an appropriately specified endpoint configuration in front of the user. Bromium has set itself a bigger challenge, one that is applicable to every endpoint and every operating system: the extension of the precepts of trustworthy computing to mainstream operating systems.

Read More

DesktopVirtualization

And then there were three – NxTop Enterprise morphs to XenClient Enterprise

Like waking from a dream of a lovely walk to find yourself standing outside your now-locked hotel room wearing nothing but your underwear, NxTop customers and resellers may well view the purchase of Virtual Computer by Citrix with a chill, heart-quickening “right then, what next?”

Virtual Computer’s free offerings are no longer available, and the NxTop Enterprise edition gets a modest per-user price increase. Support is still available, although any road-map will likely take a wobble. What is now XenClient Enterprise is one of three client hypervisor versions offered by the application delivery leader who was, up until Friday, ‘the investing competition’.

Virtual Computer was a leader in the Type 1 client hypervisor delivery platform, although to be fair it wasn’t a big race card. In comparison to its cousin XenClient, at a technical level it had better instance-management options, a pre-packaged virtual machine instance with Chrome and Citrix Receiver, far wider hardware support, and integrated systray tools within Microsoft Windows VMs. The latest release, 4.0.6, which shipped earlier this month, continued a steady improvement in management options for configurations. More importantly for the enterprise, Virtual Computer had better links with hardware manufacturers, with a strategy to integrate new hardware releases in weeks rather than months. Perhaps most interestingly, NxTop was highlighted as a solution that strongly aligned with Intel’s Intelligent Desktop Virtualization (IDV).

VDI too expensive? VDI too remote? Have you considered IDV – manage centrally, run locally?

Yet despite innovation awards, the client-side hypervisor leader found it hard to gain momentum. Talk to CIOs and CTOs about the technology and you come across a number of obstacles in new accounts. Where does it fit with a BYOD strategy? What advantage does it offer over solutions such as LANDesk, Dell’s KACE, or Microsoft’s SCCM? Will it run on a Mac? How does it deliver to my tablet?

The integration time for XenClient Enterprise is likely to be 12-18 months. If you’re running NxTop now, how will that impact your roll-out or continued delivery? If you dismissed XenClient and went with XenDesktop, should you stop? How could Citrix accommodate a product that can be pitched directly against XenDesktop and VDI-in-a-Box? And will Citrix embrace IDV, and why?

Read More

The See-Saw Effect: To Scale-up or Scale-out

They say history tends to repeat itself; I am going to take that statement in another direction and apply it to technology. Virtualization practices and tendencies tend to flip-flop over time. That in itself is a pretty general statement, but I saw a video on YouTube, “16 Core Processor: Upgrade from AMD Opteron 6100 Series to Upcoming ‘Interlagos’”, and it really got me thinking about one of the very first questions presented to virtualization architects when planning and designing a new deployment, for as long as I have been working with virtualization technology. To scale up or to scale out? That is the question, and the philosophy has flip-flopped back and forth as the technology itself has improved and functionality has increased.

When I first started in virtualization, processors were only single-core and vCenter was not even an option yet to manage and/or control the virtual infrastructure. At the start, any server that was on the HCL (hardware compatibility list) would be great to get started with, and then VMware came out with Symmetric Multiprocessing (SMP) virtual machines, with single or dual virtual CPUs. This was great news, and it changed the design thought process with the new idea of getting the biggest host server with as many processors and as much memory as you could get and/or afford.

Technology then advanced with the introduction of multi-core processors, and now you could buy smaller boxes that still had the processing power of the bigger hosts, but in a much smaller and cheaper package. As the technology changed, the idea of scaling out seemed to overtake the idea of scaling up, at least until the next advancement from VMware and/or the CPU manufacturers, creating a see-saw effect back and forth between the two approaches.

The see-saw will go back and forth over the years, and if we fast-forward to today, we have a lot of exciting technologies added to the mix. The introduction of blade servers a few years back was one of those key technology moments that helped redefine the future of server computing. Now, blade technology has taken another big step with the release of Cisco’s Unified Computing System (UCS). UCS has turned blade technology into the first completely stateless computing platform; it currently holds more memory than any other blade system and gives you the ability to run two quad-core processors in the half-height blades and four quad-core processors in the full-height blade. Intel has invested time and money in the UCS platform, and its processors remain the only ones available in the UCS chassis, but as much as things have flip-flopped on the scale-up versus scale-out question, the competition between AMD and Intel has been an exciting race with several back-and-forths between the two companies. With the video of AMD’s sixteen-core processor making its way around the internet, it is a safe bet that Intel’s equivalent, or even better, might not be far behind.
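To make the trade-off concrete, here is a minimal back-of-the-envelope sketch in Python; the VM densities and workload size are illustrative assumptions, not vendor sizing figures:

    # Illustrative scale-up vs. scale-out comparison. The densities below
    # are assumptions for the example, not vendor sizing figures.
    import math

    workload_vms = 200  # total VMs to place (assumed)

    designs = {
        "scale-up  (4 x quad-core, 48 VMs/host)": 48,
        "scale-out (2 x quad-core, 24 VMs/host)": 24,
    }

    for name, vms_per_host in designs.items():
        hosts = math.ceil(workload_vms / vms_per_host)
        # Every host failure takes its whole VM population down at once.
        print(f"{name}: {hosts} hosts, failure domain = {vms_per_host} VMs")

The bigger host needs half as many chassis, but each failure takes twice as many virtual machines with it; that failure domain is exactly what drives the High Availability argument below.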

Where do you think we are on the scale-up versus scale-out question? In my opinion, scale-out is the best way to go. As virtualization has been accepted as the way forward in the data center, and more and more mission-critical and beefier servers are virtualized, having 32 or 64 cores available per host becomes more and more important, so that resources are available for the next advancement that comes into play. Also in support of the scale-out opinion, it is worth considering VMware High Availability (HA) when deciding the number of virtual machines per host. In my years of designing systems, given the choice, I would want HA to be able to recover from a host failure in less than five minutes from the time the host goes down until all the virtual machines running on that host have been restarted and fully booted. When you have too many virtual machines per host, the recovery time during a host failure, and the boot storm that comes with it, tends to be dramatic and extreme.
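A minimal sketch of that five-minute test, assuming an average boot time and a cap on how many VMs can boot in parallel before the boot storm saturates shared storage (both figures are assumptions, not measurements):

    # Back-of-the-envelope HA recovery model. avg_boot_min and
    # concurrent_boots are illustrative assumptions; measure your own
    # environment before trusting numbers like these.
    import math

    def ha_recovery_minutes(vms_per_host, avg_boot_min=1.5, concurrent_boots=8):
        """Estimate time to restart and fully boot every VM from a failed
        host, booting them in waves of `concurrent_boots` at a time."""
        waves = math.ceil(vms_per_host / concurrent_boots)
        return waves * avg_boot_min

    for vms in (10, 20, 40, 80):
        print(f"{vms:>3} VMs per host -> ~{ha_recovery_minutes(vms):.1f} min to recover")

Under these assumptions a 20-VM host recovers in about four and a half minutes, while an 80-VM host needs a quarter of an hour: the denser the host, the harder it is to stay inside the five-minute window.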

Those are my thoughts on the scale-up versus scale-out question; now let’s hear your ideas to share with the class.

TPM/TXT Redux

On the third Virtualization Security Podcast of 2011, we were joined by Charlton Barreto of Intel to further discuss the possibility of using TPM/TXT to enhance security within virtual and cloud environments. We are not there yet, but we discussed in depth the issues with bringing hardware-based integrity and confidentiality up further into the virtualized layers of the cloud. TPM and TXT currently provide the following per-host security:

Read More