What is User Virtualization and is it worth $70 million?

70 million individual dollars can buy you a lot of things. A 64-metre-long super yacht. The services of an NFL lineman for two years. For $70 million you could entice an English Premier League striker to play for you, but not necessarily score goals. $70 million is 113,000 Apple iPads. If you spent $100 a day, it would take you nearly 1,950 years to fritter it away. Yet despite all these glittering prizes and goals, Goldman Sachs chose to invest their $70 million in a chunk of AppSense.

Of all the things they could have invested in, why did they choose AppSense? If the future is going to be full of cloud services, virtualised desktops, and mobile devices, why spend a not inconsiderable sum on something that sounds like the stuff of science fiction?

What is User Virtualization, and is it worth a $70 million investment? Why would you need user virtualization? And indeed, what makes AppSense stand out?


Latest SMT design requires Complete Visibility by Admin! Yikes!

I just finished reading yet another multi-tenancy design/overview that claims to be secure or trusted. While I will agree that this particular design does cover Availability and some GRC (Governance, Risk, and Compliance), it is severely lacking in Integrity and Confidentiality. The design even went as far as saying the cloud/virtual administrator requires “COMPLETE VISIBILITY.” I was really taken aback by those words. Why does an administrator need ‘COMPLETE VISIBILITY’? Which leads me to the question: are Integrity and Confidentiality possible within any cloud or virtual environment, or is it purely based on TRUST?

If so, this is an appalling state of virtual and cloud environment security.

Constructing a Best of Breed Alternative to VMware vCenter Operations Enterprise

When VMware announced the three editions of vCenter Operations, it sent three very clear messages about how VMware feels monitoring solutions for vSphere should be constructed. The first message was that VMware views Performance Management and Capacity Management as two sides of the same coin. The second message was that Configuration Management is an essential part of a performance and capacity management solution, since so many of the problems are in fact configuration related. The last message was that, given the complexity and rate of change in virtualized environments, the interpretation of monitoring data has to be automated with self-learning analytics.

Carrier Grade Cloud Providers – The Benefits/Issues

In July 2009 I wrote an article entitled Cloud Computing Providers — are they content providers or carriers?, and in January 2011 Chuck Hollis wrote an article, Verizon To Acquire Terremark — You Shouldn’t Be Surprised. Now, with the Terremark acquisition almost complete and RSA Conference 2011 over, where I talked to Terremark about the benefits of belonging to Verizon, a picture is starting to emerge. Yes, my predictions from 2009 make sense and still hold true today, but is there more of an impact than we realize?

The See-Saw Effect: To Scale-up or Scale-out

They say history tends to repeat itself; I am going to take that statement in another direction and apply it to technology. Virtualization technology practices and tendencies tend to flip-flop over time. That in itself is a pretty general statement, but I saw a video on YouTube, 16 Core Processor: Upgrade from AMD Opteron 6100 Series to Upcoming “Interlagos”, and it really got me thinking about one of the very first questions presented to virtualization architects when planning and designing a new deployment, for as long as I have been working with virtualization technology. To scale up or scale out, that is the question, and the philosophy has flip-flopped back and forth as the technology itself has improved and functionality has increased.

When I first started in virtualization, processors were only single core and vCenter was not even an option yet to manage and/or control the virtual infrastructure. At the start, any server that was on the HCL (Hardware Compatibility List) was a fine place to begin, and then VMware came out with Symmetric Multiprocessing (SMP) virtual machines, with single or dual virtual CPUs. This was great news, and it changed the design thought process to a new idea: get the biggest host server with as many processors and as much memory as you could get and/or afford.

Technology then advanced with the introduction of multi-core processors, and now you could buy smaller boxes that still had the processing power of the bigger hosts, but in a much cheaper package. As the technology changed, the idea of scaling out seemed to overtake the idea of scaling up, at least until the next advancement from VMware and/or the CPU manufacturers, creating a see-saw effect back and forth between the two approaches.

The see-saw will go back and forth over the years, and if we fast-forward to today, we have a lot of exciting technologies added to the mix. The introduction of blade servers a few years back was one of those key technology moments that helped redefine the future of server computing. Now, blade technology has taken another big step with the release of Cisco’s Unified Computing System (UCS). UCS has taken blade technology and turned it into the first completely stateless computing platform, one which currently holds more memory than any other blade system and gives you the ability to run two quad-core processors in the half-height blades and four quad-core processors in the full-height blade. Intel has invested time and money in the UCS platform and will remain the only processor available in the UCS chassis, but as much as things have flip-flopped on the scale-up and scale-out question, the competition between AMD and Intel has been an exciting race, with several back-and-forths between the two companies. With the video of AMD’s sixteen-core processor making its way around the internet, it is a safe bet that Intel’s equivalent, or even better, might not be far behind.

Where do you think we are in the scale-up and scale-out question? In my opinion, the scale-out option is the best way to go. As virtualization has been accepted as the way forward in the data center, and as more and more mission-critical and beefier servers are virtualized, the need for 32 or 64 cores available per host becomes more and more prevalent, so that resources are available for the next advancement that comes into play. Also in support of the scale-out opinion, it is worth considering VMware High Availability (HA) when deciding the number of virtual machines per host. In my years of designing systems, given the choice, I would want HA to be able to recover from a host failure in less than five minutes, measured from the time the host goes down until all the virtual machines that were running on that host have been restarted and fully booted. When you have too many virtual machines per host, the recovery time during a host failure, and the boot storm that comes with it, tends to be dramatic and extreme.
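To make that five-minute target a little more concrete, here is a rough back-of-the-envelope sketch, not VMware’s actual HA restart logic, that estimates recovery time from the number of virtual machines that were on the failed host. The batch size, per-VM boot time, and boot-storm penalty are all illustrative assumptions, not measured values.

# Back-of-the-envelope estimate of HA recovery time for a failed host.
# Assumes VMs are restarted in batches on the surviving hosts and that each
# batch slows down a little as it gets larger (the boot storm effect).
# All numbers below are illustrative assumptions, not VMware defaults.

def estimated_recovery_minutes(vms_on_failed_host: int,
                               restart_batch_size: int = 8,
                               base_boot_minutes: float = 1.0,
                               boot_storm_penalty: float = 0.05) -> float:
    """Estimate minutes from host failure until all of its VMs are booted.

    restart_batch_size  -- VMs restarted concurrently on surviving hosts
    base_boot_minutes   -- boot time for one VM with no contention
    boot_storm_penalty  -- extra fraction of boot time per concurrent VM
    """
    batches = -(-vms_on_failed_host // restart_batch_size)  # ceiling division
    per_batch_minutes = base_boot_minutes * (1 + boot_storm_penalty * restart_batch_size)
    return batches * per_batch_minutes

# Compare a lightly loaded scale-out host against a heavily loaded scale-up host.
for vms in (15, 40):
    print(f"{vms:>3} VMs on the failed host -> "
          f"~{estimated_recovery_minutes(vms):.1f} minutes to recover")

Under these assumed numbers, the smaller host comes in well under the five-minute target while the larger one does not; plug in your own measured boot times and restart concurrency before drawing any conclusions for a real design.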

Those are my opinions and thoughts on the scale-up and scale-out question, so now let’s hear your thoughts and ideas to share with the class.
