Tag Archives: Intel

And then there were three – NxTop Enterprise morphs to XenClient Enterprise

Like waking up from a scene in a night’s dream where you were on a lovely walk, to find yourself standing outside your now-locked hotel room wearing nothing but your underwear, NxTop customers and resellers may well view the purchase of Virtual Computer by Citrix with a chill, heart-quickening “right then, what next?”

Virtual Computer’s free offerings are no longer available, and the NxTop Enterprise edition gets a modest per-user price increase. Support is still available, though it is likely any road-map will take a wobble. What is now XenClient Enterprise is one of three client hypervisor versions offered by the application delivery leader that was, up until Friday, ‘the investing competition’.

Virtual Computer was a leader in the Type 1 client hypervisor delivery platform, although to be fair, it wasn’t a big race card. In comparison to its cousin XenClient, at a technical level it had better instance management options, a pre-packaged virtual machine instance with Chrome and Citrix Receiver, far wider hardware support, and integrated systray tools within Microsoft Windows VMs. The latest release, 4.0.6, which shipped earlier this month, continued a steady improvement in configuration management options. More importantly for the enterprise, Virtual Computer had the better links with hardware manufacturers, with a strategy to integrate new hardware releases in weeks rather than months. Perhaps most interestingly, NxTop was highlighted as a solution that strongly aligned with Intel’s Intelligent Desktop Virtualisation (IDV).

VDI too expensive? VDI too remote? Have you considered IDV – manage centrally, run locally?

Yet despite innovation awards, the client-side hypervisor leader found it hard to gain momentum. Talk to CIOs and CTOs about the technology and you come across a number of obstacles in new accounts. Where does it fit with a BYOD strategy? What advantage does it offer over solutions such as LANDesk, Dell’s KACE, or Microsoft’s SCCM? Will it run on a Mac? How does it deliver to my tablet?

The integration time for XenClient Enterprise is likely to be 12–18 months. If you’re running NxTop now, how will that impact your roll-out or continued delivery? If you dismissed XenClient and went with XenDesktop, should you stop? How could Citrix accommodate a product that can be pitched directly against XenDesktop and VDI-in-a-Box? Will Citrix embrace IDV, and why?

Continue reading And then there were three – NxTop Enterprise morphs to XenClient Enterprise

The See-Saw Effect: To Scale-up or Scale-out

They say history tends to repeat itself; I am going to take that statement in another direction and apply it to technology. Virtualization practices and tendencies tend to flip-flop over time. That in itself is a pretty general statement, but I saw a video on YouTube, 16 Core Processor: Upgrade from AMD Opteron 6100 Series to Upcoming “Interlagos”, and it got me thinking about one of the very first questions presented to virtualization architects when planning and designing a new deployment, for as long as I have been working with virtualization technology. To scale up or to scale out: that is the question, and the philosophy has flip-flopped back and forth as the technology itself has improved and functionality increased.

When I first started in virtualization, processors were only single-core and vCenter was not yet an option for managing and/or controlling the virtual infrastructure. At the start, any server on the HCL was a great place to begin; then VMware came out with Symmetric Multiprocessing (SMP) virtual machines, with single or dual virtual CPUs. This was great news and changed the design thought process: the new idea was to get the biggest host server, with as many processors and as much memory as you could get and/or afford.

Technology then advanced with the introduction of multi-core processors, and you could now buy smaller boxes that still had the processing power of the bigger hosts, but in a much smaller and cheaper package. As the technology changed, the idea of scaling out seemed to overtake the idea of scaling up, at least until the next advancement from VMware and/or the CPU manufacturers, creating a see-saw effect back and forth between the two approaches.

The see-saw will go back and forth over the years, and if we fast-forward to today, we have a lot of exciting technologies in the mix. The introduction of blade servers a few years back was one of those key technology moments that helped redefine the future of server computing. Now, blade technology has taken another big step with the release of Cisco’s Unified Computing System (UCS). UCS has turned blade technology into the first completely stateless computing platform, which can currently hold more memory than any other blade system and gives you the ability to run two quad-core processors in the half-height blades and four quad-core processors in the full-height blades. Intel has invested time and money in the UCS platform, and its processors will remain the only ones available in the UCS chassis; but as much as things have flip-flopped on the scale-up versus scale-out question, the competition between AMD and Intel has been an exciting race, with several back-and-forths between the two companies. With the video of AMD’s sixteen-core processor making its way around the internet, it is a safe bet that Intel’s equivalent, or something even better, might not be far behind.

Where do you think we are on the scale-up versus scale-out question? In my opinion, scale-out is the best way to go. Virtualization has been accepted as the way forward in the data center, and as more mission-critical and beefier servers are virtualized, having 32 or 64 cores available per host becomes ever more tempting as headroom for whatever advancement comes next. But in support of the scale-out position, it is worth considering VMware’s High Availability (HA) when deciding the number of virtual machines per host. In my years of designing systems, given the choice, I would want HA to recover from a host failure in less than five minutes, measured from the time the host goes down to the time all the virtual machines it was running have been restarted and fully booted. When you have too many virtual machines per host, the recovery time during a host failure, and the boot storm that comes with it, tends to be dramatic and extreme.
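As a rough illustration of the boot-storm argument, here is a back-of-envelope sketch of how VM density per host stretches HA recovery time. All of the numbers (boot time, concurrency, detection delay) are hypothetical assumptions for illustration, not VMware figures.

```python
# Back-of-envelope model of HA recovery after a host failure.
# All parameter values are illustrative assumptions, not VMware data.
import math

def ha_recovery_minutes(vms_per_host, avg_boot_min=1.5,
                        concurrent_boots=8, detect_restart_min=1.0):
    """Estimate minutes from host failure until every VM it ran
    has been restarted elsewhere and finished booting.

    detect_restart_min: time for HA to detect the failure and issue restarts.
    concurrent_boots:   how many VMs can boot at once before the boot storm
                        saturates shared storage (assumed fixed here).
    """
    boot_waves = math.ceil(vms_per_host / concurrent_boots)
    return detect_restart_min + boot_waves * avg_boot_min

# Lightly loaded (scale-out) hosts come in under a five-minute target;
# dense (scale-up) hosts blow well past it as the boot waves pile up.
for density in (15, 30, 60):
    print(density, ha_recovery_minutes(density))
```

Under these assumptions, 15 VMs per host recovers in about 4 minutes, while 60 VMs per host takes over ten; the point is simply that recovery time grows with density once the boot storm saturates storage.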

Those are my thoughts on the scale-up versus scale-out question; now let’s hear your ideas to share with the class.

TPM/TXT Redux

On the third Virtualization Security Podcast of 2011, we were joined by Charlton Barreto of Intel to further discuss the possibility of using TPM/TXT to enhance security within virtual and cloud environments. We are not there yet, but we discussed in depth the issues with bringing hardware-based integrity and confidentiality up further into the virtualized layers of the cloud. TPM and TXT currently provide the following per-host security: Continue reading TPM/TXT Redux

Intel buys McAfee “for security in the cloud”

In case you missed it, Intel has bought McAfee, a security company best known for virus scanning and other malware detection software, for $7.68Bn (on revenues of about $2Bn). This is a tidy multiple in any marketplace, particularly as McAfee is not the dominant player. It is the largest deal Intel has ever done, and the largest pure-play security deal ever. Plus, the deal was in cash.

Add to this Intel’s plan to purchase the Wireless Solutions unit of Infineon (for $1.4Bn), and you now have the direction in which Intel plans to go: more security in the hardware.

The technical rationale behind the deal seems to be that security should be going into hardware, and that in newer cloud access devices (Android, iPad, etc.) it won’t be a bolt-on extra like it is at the moment; it’ll be something that OEMs can buy from Intel. The same argument applies to the clouds themselves: servers would come with embedded security. We’ve been discussing this stack/hardware boundary a little at The Virtualization Practice; it features on our recent podcast, Virtual Thoughts: Is the Hypervisor moving into Hardware?. However, our perception had been that the stack/hardware boundary was being driven by the VCE coalition (VMware, Cisco, EMC) and potentially by HP and even Dell, but not by the semiconductor manufacturers. Continue reading Intel buys McAfee “for security in the cloud”

Virtual Thoughts: Is the Hypervisor moving into Hardware?

During the Virtual Thoughts podcast on 6/29/2010, the analysts discussed various hardware aspects of virtualization, trying to determine whether the hypervisor is moving into hardware, and if so, how much of it, whose hypervisor, and whether such a move is part of any business model.

Virtual Thoughts is a monthly podcast that looks at the entire scope of virtualization to discuss new trends and thoughts within the virtualization and cloud communities. Continue reading Virtual Thoughts: Is the Hypervisor moving into Hardware?

A Rising Tide Lifts all Virtual Boats

John F. Kennedy, the 35th President of the United States, once said that “a rising tide lifts all boats”. This was in reference to strong economic growth, which benefits most if not all participants in an economy. While the entire economy has not been exceptionally strong of late, companies in the virtualization technology sector have been chalking up increasingly strong results. For Q1/2010, here are some recent revenue and earnings reports: Continue reading A Rising Tide Lifts all Virtual Boats