Teradici, the developer of the PCoIP® protocol, has announced two updates to its hardware acceleration products, geared toward optimizing protocol bandwidth and improving the end-user experience.
Virtualizing presentation-virtualization workloads is increasingly seen as beneficial and acceptable. As Citrix XenApp customers move into 2013, it is likely they'll move more physical instances to virtual. To enhance RDS VM workloads with shared storage, Atlantis has released ILIO for XenApp, the first solution designed specifically to accelerate provisioning, boot time, and application response time.
The speed at which technology changes is absolutely amazing: as soon as you buy something, the next faster, bigger model comes out. I think back to the start of my career and remember a workstation with a 200 MHz processor; I was thrilled when I got it bumped up to 64 MB of RAM. Although the hardware was changing at blazing speed, you used to know you had a three- to five-year run before you had to worry about upgrading and refreshing the operating system. VMware has been changing the rules the last few years, with major releases now coming around every two years.
I had a fun day resolving a licensing issue for a client, and this one was a little different than anything I had seen in the past. The cluster in question is an eight-node cluster running ESX 3.5. The error message I received when trying to perform a vMotion was: “Unable to migrate from HostA to HostB: Virtual machine has 2 virtual CPUs, but the host only supports 1. The number of virtual CPUs may be limited by the guest OS selected for the virtual machine or by the licensing for the host.”
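The constraint behind that error can be reduced to a simple eligibility check: a VM can only migrate to a host whose license (or guest-OS setting) permits at least the VM's vCPU count. A minimal sketch, with host names and per-host limits invented for illustration rather than read from any VMware API:

```python
# Sketch of the constraint behind the vMotion error above: a VM may only
# migrate to a host licensed for at least its vCPU count.
# Hostnames and limits are hypothetical, for illustration only.

vm = {"name": "app01", "vcpus": 2}
hosts = {"HostA": 4, "HostB": 1, "HostC": 4}  # host -> max vCPUs supported

# Keep only hosts whose limit covers the VM's vCPU count.
eligible = [h for h, max_vcpu in hosts.items() if max_vcpu >= vm["vcpus"]]
print(eligible)  # HostB is excluded, matching the error message
```

In this sketch HostB drops out of the eligible list, which is exactly the situation the error message describes.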
I saw a question posted on Twitter that intrigued me. The question itself was straightforward: “How many virtual machines should I be able to run on a host?” That is a fair question, but what I find intriguing is that this is the first question he asks. Is this really the first thing administrators think to ask when designing their environment? After all, there is no set formula for how many virtual machines you can run on a host. You can be a little more exact with VDI, because for the most part all the virtual machines are set up the same way and the numbers are more predictable. That is not the case with server virtualization: your servers will all have different configurations and amounts of resources provisioned to the virtual machines. This variation is what changes your slot count and the number of virtual machines you can run on a host.
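While there is no set formula, a rough back-of-the-envelope estimate is possible: capacity is bounded by whichever resource runs out first. All of the numbers below (host specs, average VM size, overcommit ratios) are hypothetical placeholders, not recommendations:

```python
# Rough consolidation-ratio estimate. Every number here is hypothetical;
# the only point is that capacity is bounded by the scarcest resource.

def max_vms(host_ghz, host_gb, avg_vm_ghz, avg_vm_gb,
            cpu_overcommit=4.0, mem_overcommit=1.25):
    """Estimate how many 'average' VMs fit on one host."""
    by_cpu = (host_ghz * cpu_overcommit) / avg_vm_ghz
    by_mem = (host_gb * mem_overcommit) / avg_vm_gb
    return int(min(by_cpu, by_mem))  # the scarcest resource wins

# Example: 2 sockets x 8 cores x 2.4 GHz and 96 GB RAM,
# with an "average" VM sized at 2 GHz and 4 GB.
print(max_vms(host_ghz=2 * 8 * 2.4, host_gb=96,
              avg_vm_ghz=2.0, avg_vm_gb=4.0))  # -> 30, memory-bound
```

The interesting part is which branch of the `min()` wins: in this made-up example memory is the limit long before CPU, which is a common pattern in server consolidation.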
There has been a lot of noise about negotiations between VMware and Novell; rumor has it they concern the purchase of the SUSE division. First, everything that follows is pure supposition on my part; I have no insider knowledge. Mike has put forward one argument for why a VMware purchase of Novell's SUSE assets makes very good corporate sense. However, I'll put another idea into the fray.
When I first started with virtualization, the only option you had at the time was single-core processors in the hosts. Scale up or scale out was the hot debate when designing your infrastructure. On one side of the coin, the idea was to scale up: buy a few of the biggest servers you could find and load them with as much memory and as many processors as would fit in the box. The end result was some very expensive servers, each able to run a lot of virtual machines for its time. The other side of the coin held that it was better to scale out, building the cluster from more, smaller servers. I have worked in both types of environments over the years, and personally I aligned myself with the scale-out philosophy. The simple reason: host failure.
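The host-failure argument comes down to simple arithmetic: the fraction of the cluster you lose when one host dies is 1/N. A small sketch with hypothetical cluster sizes, assuming the same total workload in both designs:

```python
# The arithmetic behind the scale-out preference. Cluster sizes are
# hypothetical; the point is the share of capacity lost per host failure.

def failure_impact(n_hosts, vms_per_host):
    """Fraction of cluster capacity (and VMs) lost when one host dies."""
    total_vms = n_hosts * vms_per_host
    return vms_per_host / total_vms  # simplifies to 1 / n_hosts

# The same 120-VM workload, two designs:
scale_up = failure_impact(n_hosts=4, vms_per_host=30)    # 30 VMs to restart
scale_out = failure_impact(n_hosts=12, vms_per_host=10)  # 10 VMs to restart
print(f"{scale_up:.1%} vs {scale_out:.1%}")  # -> 25.0% vs 8.3%
```

Losing a quarter of the cluster versus a twelfth is also why HA admission control has to reserve much more spare capacity, per host, in a small scale-up cluster.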
VMworld is clearly the largest dedicated virtualization conference, and yet from an Open Source perspective it is slightly disappointing: the VMware ecosystem naturally attracts proprietary software vendors, and some of the more interesting Open Source activity happens through multi-vendor foundations, which do not have the same marketing budgets as the vendors themselves.
Nevertheless, there are a number of key Open Source players, and some interesting smaller players, represented at VMworld.
Have you ever considered the best way to plan, design, and work with VMware Update Manager (VUM)? In the early days of VMware 3.x, when VUM was first released, I would end up installing VUM on the vCenter server itself; after all, that was VMware's recommendation at the time. I propose that this is no longer the case, and I would like to present a list of best practices for working with VMware Update Manager. The list came from VMware, but should be treated only as a guide. Each environment is different, and your mileage may (and will) vary.