Both Microsoft and VMware have revamped their product suites, and therefore their licensing, once more. As always, how you buy will dictate how you license. It has taken a bit of time for all the revamped information to percolate through to each corporate site and for all the issues to be addressed. As we did before, let us look at licensing. We will look first at the old model: Hyper-V vs. VMware vSphere vs. Citrix Xen vs. Red Hat KVM. Then, in a follow-on article, we will look at the new cloud suite models.
Articles Tagged with vSphere
VMworld 2012 San Francisco is over, and I have some time to reflect on my virtualization thoughts in general before getting ready for VMworld Barcelona. One thing I noticed is the recent announcements about VMware vSphere 5.1 and Microsoft Hyper-V 2012: Microsoft and VMware each released a specific new feature to its respective platform at basically the same time. Is this a sign that Microsoft is really closing the gap on VMware? I think we are getting there, but I have also made some other personal observations on how I see virtualization evolving, and I foresee a completely different method and mindset emerging between these two companies.
In open source it’s impossible to keep a secret (and in any case, antitrust laws make it very risky). And despite the imminence of VMworld, the governance processes of OpenStack run to their own timetable, so some interesting news about VMware was made public on Sunday, August 26th – the day before VMworld: VMware is joining OpenStack.
While not a major version release (we will have to wait for 6.0 next year for that), the new 5.1 version of the VMware products contains some significant new functionality, in addition to the packaging of all of the components into the vCloud Suite.
New Features in vSphere 5.1
- User Access – There is no longer a dependency on a shared root account. Local users assigned administrative privileges automatically get full shell access.
- Auditing – All host activity from both the shell and the Direct Console User Interface is now logged under the account of the logged-in user.
- Monitoring – Support is added for SNMPv3. The SNMP agent has been unbundled from the VMkernel and can now be independently updated.
- vMotion – A vMotion and a Storage vMotion can now be combined into one operation. This allows a VM to be moved between two hosts or clusters that do not have any shared storage.
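As a sketch of what this combined operation looks like from automation, PowerCLI’s Move-VM cmdlet can take both a destination host and a destination datastore in one call. The VM, host, and datastore names below are hypothetical, and in the initial 5.1 release the shared-nothing migration was surfaced through the vSphere Web Client, so treat this as a sketch rather than a guaranteed 5.1 workflow:

```powershell
# Hypothetical inventory names; requires VMware PowerCLI connected to vCenter.
$vm = Get-VM -Name "web01"

# Supplying both a destination host and a destination datastore in a single
# Move-VM call requests the combined vMotion + Storage vMotion, so no shared
# storage between the source and destination hosts is needed.
Move-VM -VM $vm -Destination (Get-VMHost "esx02.lab.local") -Datastore (Get-Datastore "local-ds-02")
```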
- New Windows Support – Support for both the Desktop and Server Editions of Windows 8/2012
- Hardware Accelerated 3D Graphics – Teaming up with NVIDIA, vSphere can now map a vGPU to each VM on a system. Not only does this feature accelerate 3D graphics, but it also provides a GPU for high-performance computing needs.
- Improvements in Virtual Hardware Virtualization Support – This brings Intel VT/AMD RVI features further into the virtual machine, which will improve virtualization within virtualization. In addition, more low-level CPU counters are exposed, which can be used for high-performance computing and real-time-style applications.
- Agentless Antivirus and Antimalware – vShield Endpoint is now included in vSphere 5.1 and offloads antivirus and antimalware processing from virtual machines to a secure, dedicated virtual appliance delivered by VMware partners. This change lowers the cost of entry for agentless antivirus and antimalware.
- New 64-vCPU Support – Virtual machines running on a vSphere 5.1 host can be configured with up to 64 virtual CPUs and 1TB of RAM.
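As a rough sketch, these new maximums show up as ordinary virtual machine settings. A .vmx fragment for a hardware version 9 VM might look like the following; the exact keys are shown as an illustration, and memSize is specified in MB (1,048,576 MB = 1TB):

```
virtualHW.version = "9"
numvcpus = "64"
memSize = "1048576"
```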
- Auto-Deploy – Auto-Deploy is extended with two new modes, “stateless caching” and “stateful installs”. In addition, the number of concurrent reboots per Auto-Deploy host has been increased to 80.
- SR-IOV Support – Single Root I/O Virtualization allows certain Intel NICs to transfer data directly into the memory space of a virtual machine without any involvement from the hypervisor. See this Intel video.
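As an illustrative sketch only: enabling virtual functions for SR-IOV on an ESXi host in this era typically meant setting the NIC driver’s max_vfs module parameter and rebooting. The driver name and VF counts below are assumptions for a dual-port Intel 10GbE card; check your own hardware and the HCL first:

```
# Assumed driver (ixgbe) and per-port VF counts; adjust for your hardware,
# then reboot the host for the parameter to take effect.
esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"
esxcli system module parameters list -m ixgbe
```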
- Space-Reclaiming Thin Provisioned Disks – These disks add the ability to reclaim deleted blocks from existing thin provisioned disks while the VM is running. Reclaiming space is a two-part function: first wiping the disk to mark unused blocks as free, and then shrinking the disk. These two features have been part of VMware Tools for a number of years but now work differently for thin provisioned disks. The underlying hardware is not initially a part of the reclamation process. Instead, the vSCSI layer within ESX reorganizes unused blocks to keep the used part of the thin provisioned disk contiguous. Once the unused parts are at the end of the thin provisioned disk, the hardware is involved.
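The two-part wipe-then-shrink behavior can be modeled with a toy example (this is an illustration of the idea, not VMware’s implementation): deleted blocks are first marked free, the used blocks are kept contiguous at the front of the disk, and only the trailing free region is then handed back to the array.

```python
def reclaim_thin_disk(blocks):
    """Toy model of thin-disk space reclamation (illustration only).

    `blocks` represents a thin-provisioned disk: a string for a used
    block, None for a block whose contents were deleted.  The "wipe"
    step has already marked deleted blocks as None; the "shrink" step
    compacts used blocks to the front so the used region stays
    contiguous, then truncates the trailing free region.
    """
    used = [b for b in blocks if b is not None]   # keep used blocks contiguous
    freed = len(blocks) - len(used)               # trailing blocks to reclaim
    return used, freed

# A five-block disk with two deleted blocks shrinks to three blocks.
disk, freed = reclaim_thin_disk(["a", None, "b", None, "c"])
```

Here `disk` ends up as `["a", "b", "c"]` and `freed` is `2`: only once the free blocks sit at the end of the disk can the trailing region be returned to the array in one piece.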
- Tunable Block Size – Normally, thin provisioned disks use a fixed 4KB block size; however, this block size can now be tuned indirectly, as it is based on the requirements of the underlying storage array. There is no method to tune it by hand.
- All Paths Down Improvements – When an all paths down (APD) situation occurred, the vSphere management service would hang waiting on disk IO, which would cause the vSphere host to inadvertently disconnect from vCenter and, in effect, become unmanageable. APD handling has been improved so that transient APD events no longer cause the vSphere management service to hang waiting on disk IO, vSphere HA can move workloads to other hosts if APD detection finds a permanent device loss (PDL) situation, and there is now a way to detect PDL for iSCSI arrays that present only one LUN.
- Storage Hardware/Software Improvements – These improvements include the ability to boot from software FCoE, the addition of jumbo frame support for all iSCSI adapters (software or hardware), and support for 16Gb FC.
- VAAI Improvements – VAAI has added support to allow vCloud Director fast-provisioned vApps to make use of VAAI enabled NAS array-based snapshots.
- vSphere S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) Implementation – vSphere has implemented SMART reporting via the esxcli commands so that SSDs and other disks can report back on their status. In addition, esxcli has been upgraded to include ways to reset specific FC adapters directly, as well as methods to retrieve cached events such as link-up and link-down.
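As a quick sketch of the new esxcli surface, SMART data for a device can be pulled from the storage core namespace. The device identifier below is a placeholder, so list your devices first:

```
# List devices to find a real identifier, then query its SMART attributes.
esxcli storage core device list
esxcli storage core device smart get -d naa.<device-id>
```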
- Storage IO Control Statistics and Settings Improvements – Finding the proper value for SIOC has been problematic; now it is possible to set a percentage instead of a millisecond value to determine when SIOC should fire. In addition, SIOC reports statistics immediately instead of waiting, which gives Storage DRS statistics right away and improves its decision process. Finally, the observed latency of a VM (a new metric) is available within the vSphere Client performance charts. Observed latency includes latency within the host, not just latency after storage packets leave the host.
- Storage DRS Improvements – Storage DRS has been improved for workloads using vCloud Director. Linked clones can now be migrated between datastores if either the base disk or a shadow copy of the base disk exists. Storage DRS is also now used for initial placement of workloads when using vCloud Director.
- Improvements in Datastore Correlation for Non-VASA-Enabled Arrays – For storage devices that do not support VASA, it is difficult to correlate datastores against disk spindles on an array. The datastore correlation has been improved such that vSphere can now detect whether spindles are shared by datastores on the array, regardless of VASA support.
By exposing hardware virtualization (Intel VT/AMD RVI) as well as more CPU counters and components, VMware has made more capability available than ever before. Tie this to virtual graphics processing units, and we now have the ability to implement virtualized high-performance and real-time computing environments. Add the storage improvements, and large-scale big data applications as well as high-performance computing environments can be virtualized; both require low-latency networking and storage.
Virtualization has long been the bane of high-performance applications, whether 3D graphics, high-performance computing, big data, or real-time applications. vSphere 5.1 provides a possible solution to these use cases while improving integration with the VMware vCloud Suite.
While looking around the web for anything new in virtualization, I kept seeing more and more posts and articles about a new type of hypervisor: Type 0. Now, this sounds interesting, and I found these definitions for each type of hypervisor.
At VMworld 2011, VMware presented a chart that showed their progress in terms of virtualizing various workloads in their own customer base. The chart (shown below) demonstrated that VMware had made some really good progress with some really hard workloads, and mostly excellent progress with easy workloads (the low-hanging fruit). The interesting question is how best to proceed from here.