So you are a loyal VMware customer. You have licenses for vSphere 4 and you are about 40% virtualized. Based upon the revised vRAM entitlements in the vSphere 5 licensing, you think you will be OK as you progress through the more demanding business-critical applications, both purchased and custom developed, that lie ahead of you.
In the past, virtualization architects and administrators were told that the best way forward was to buy as much fast memory as they could afford and to standardize on one set of boxes with as many CPUs as they dared use. With vRAM pool licensing, this type of open-ended RAM architecture changes: I now have to consider vRAM pools when I architect new cloud and virtual environments. So let's look at this first from the perspective of existing virtual environments, and then move on to new virtual and cloud environments. How much of a change will this be to how I architect things today, and how much of a change is there to my existing virtual environments? Is it a better decision to stay at vSphere 4, or to switch hypervisors entirely?
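To make the architectural shift concrete, here is a minimal sketch of the vRAM pool arithmetic. The per-license entitlement figures are the revised (August 2011) vSphere 5 values as I understand them, and the environment is entirely hypothetical; verify entitlements against your own licensing before relying on this.

```python
# Sketch: estimating vRAM pool headroom under vSphere 5 licensing.
# Entitlement figures are the revised (August 2011) per-license values;
# verify them against your actual license agreement.

VRAM_ENTITLEMENT_GB = {
    "Standard": 32,
    "Enterprise": 64,
    "Enterprise Plus": 96,
}

def pool_capacity_gb(licenses):
    """Total vRAM pool from a dict of {edition: license_count}."""
    return sum(VRAM_ENTITLEMENT_GB[edition] * count
               for edition, count in licenses.items())

def pool_usage_gb(powered_on_vm_ram):
    """vRAM consumed: sum of configured RAM across powered-on VMs."""
    return sum(powered_on_vm_ram)

# Hypothetical environment: 8 Enterprise Plus licenses
# (e.g., four dual-socket hosts).
licenses = {"Enterprise Plus": 8}
vms = [8] * 40 + [16] * 10   # forty 8 GB VMs and ten 16 GB VMs

capacity = pool_capacity_gb(licenses)   # 8 * 96 = 768 GB
usage = pool_usage_gb(vms)              # 320 + 160 = 480 GB
print(f"Pool: {capacity} GB, in use: {usage} GB, "
      f"headroom: {capacity - usage} GB")
```

The point of the exercise: unlike the old "buy as much RAM as you can afford" approach, the configured RAM of every powered-on VM now counts against a finite pool, so headroom becomes a licensing calculation, not just a hardware one.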
Yesterday, in vSphere 5 – Did VMware Misjudge its Licensing Changes?, Simon Bramfit requested a VDI-only version of vSphere, and VMware has responded with vSphere Desktop, which removes the vRAM entitlement barrier for VDI. I see this as progress and a sign that VMware is listening. Unfortunately, it applies only to new purchases: you cannot convert existing vSphere licenses into vSphere Desktop licenses.
Existing Virtual Environments
As a delegate for Tech Field Day 6 in Boston, I was introduced to many third-party management tools. In the past I have also been given briefings on various VMware, Hyper-V, and Citrix Xen management tools. Many of these tools are marketed directly at the administrator, but they can be used by more than the administrator. These tools should be marketed to management and administrators, as well as to the network operations center (NOC). The NOC, you say? Why should they see the details of my environment? They should not, but they should be able to tell when systems are in failure states outside of the hardware. Only a few tools can be used this way today. The sooner administrators get word of a problem, the sooner it can be fixed, and the NOC is the one place that centralizes all monitoring, whether for the security or the health of your virtual and cloud environments.
As a delegate for Tech Field Day 6 in Boston, I was introduced to several virtualization and performance management tools from vKernel, NetApp, Solarwinds, Embotics, and a company still in stealth mode. Across all of these tools and products, I noticed that none was integrated into the roles and permissions of the underlying hypervisor management servers, such as VMware vCenter, Citrix XenConsole, or Microsoft System Center. This lack of integration means that a user with one set of authorizations need only switch tools to gain a greater (or even lesser) set of authorizations. This is not a good security posture, and it could in fact reduce any security to nothing.
One of the basic tenets of virtualization security is to protect the management components of your virtualization hosts by placing these all-important components on a separate network. These components often include management servers such as SCOM, vCenter, XenCenter, VirtManager, etc., as well as the management appliances of your virtualization hosts. In essence, a properly configured, firewalled, and monitored virtualization management network is the simplest and most effective security measure that can be made today within any virtual environment, a message shared by Citrix, VMware, myself, and many others.
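As a small illustration of the principle, here is a sketch of an audit that flags management endpoints sitting outside the dedicated management network. The subnet and host names are hypothetical examples, not a prescription for any particular environment.

```python
# Sketch: flag management endpoints that are not on the dedicated
# management network. Subnet and endpoint names are hypothetical.
import ipaddress

MGMT_NET = ipaddress.ip_network("10.10.0.0/24")  # assumed management subnet

mgmt_endpoints = {
    "vcenter": "10.10.0.5",
    "esxi-host-1": "10.10.0.11",
    "scom": "192.168.1.20",   # misplaced: sits on the general server LAN
}

def audit(endpoints, net):
    """Return the endpoints whose address falls outside the given network."""
    return {name: ip for name, ip in endpoints.items()
            if ipaddress.ip_address(ip) not in net}

for name, ip in audit(mgmt_endpoints, MGMT_NET).items():
    print(f"WARNING: {name} ({ip}) is not on {MGMT_NET}")
```

A check like this is no substitute for firewalling and monitoring the management network itself, but it makes the segregation requirement something you can verify rather than merely assert.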
The problem is that not everything is as black and white as security folks would like. If we implement performance and other management tools, we often need to expose part of our all-important virtualization management network to others. But how do we do this safely and securely, with minimal impact on usability? Why we need to do this at all is another question; you need only take one look at the Virtualization ASsessment TOolkit (Vasto) to realize the importance of this security requirement. But the question still stands: how do you implement other necessary tools within your virtual environment without impacting usability? We discussed this on the May 5th Virtualization Security Podcast.
Monitoring computing infrastructure and applications for capacity, availability, and performance is a business that has been around for a long time, in fact for just about as long as computers have been used for business-critical applications (since the mainframe-led era of the 1960s). Since that time, several waves of change have swept through the computer industry, and each wave has brought new computing architectures, new applications, new monitoring requirements, and new monitoring approaches. Those waves have included minicomputers, personal computers, LAN-based file sharing, client/server computing, Internet (browser) based computing, and N-tier SOA-based applications, and now include agile development, virtualization, cloud computing, and the proliferation of mobile applications.