In Open Source it’s impossible to keep a secret (and in any case anti-trust laws make it very risky). And despite the imminence of VMworld, the governance processes of OpenStack run to their own timetable, so some interesting news was made public on Sunday 26th August, the day before VMworld: VMware is joining OpenStack.
While not a major version release (we will have to wait for 6.0 next year for that), the new 5.1 version of the VMware products contains some significant new functionality, in addition to the packaging of all of the components into the vCloud Suite.
New Features in vSphere 5.1
- User Access – There is no longer a dependency on a shared root account. Local users assigned administrative privileges automatically get full shell access.
- Auditing – All host activity from both the shell and the Direct Console User Interface is now logged under the account of the logged-in user.
- Monitoring – Support has been added for SNMPv3. The SNMP agent has been unbundled from the VMkernel and can now be updated independently.
- vMotion – A vMotion and a Storage vMotion can now be combined into one operation, which allows a VM to be moved between two hosts or clusters that do not share any storage.
- New Windows Support – Support for both the Desktop and Server Editions of Windows 8/2012
- Hardware Accelerated 3D Graphics – Teaming up with NVIDIA, vSphere can now map a vGPU to each VM on a system. Not only does this feature accelerate 3D graphics, but it also provides a GPU for high-performance computing needs.
- Improved Virtual Hardware Virtualization Support – This brings Intel VT/AMD RVI features further into the virtual machine, which improves virtualization within virtualization. In addition, more low-level CPU counters are exposed, which can be used for high-performance computing and real-time-style applications.
- Agentless Antivirus and Antimalware – vShield Endpoint is now included in vSphere 5.1 and offloads antivirus and antimalware processing from inside virtual machines to a secure, dedicated virtual appliance delivered by VMware partners. This change lowers the cost of entry for agentless antivirus and antimalware.
- New 64-vCPU Support – Virtual machines running on a vSphere 5.1 host can be configured with up to 64 virtual CPUs and 1TB of RAM.
- Auto-Deploy – Auto-Deploy is extended with two new modes, “stateless caching” and “stateful installs”. In addition, the number of concurrent reboots per Auto-Deploy host has been increased to 80.
- SR-IOV Support – Single Root I/O Virtualization allows certain Intel NICs to transfer data directly into the memory space of a virtual machine without any involvement from the hypervisor. See this Intel video.
- Space-Reclaiming Thin Provisioned Disks – These disks add the ability to reclaim deleted blocks from existing thin provisioned disks while the VM is running. Reclaiming space is a two-part process: first the disk is wiped, marking unused blocks as free, and then the disk is shrunk. Both features have been part of VMware Tools for a number of years, but they now work differently for thin provisioned disks. The underlying hardware is not initially a part of the reclamation process. Instead, the vSCSI layer within ESX reorganizes unused blocks to keep the used part of the thin provisioned disk contiguous. Once the unused parts are at the end of the thin provisioned disk, the hardware is involved.
- Tunable Block Size – Thin provisioned disks normally use a fixed 4KB block size. This block size can now be tuned indirectly, as it is based on the requirements of the underlying storage array; there is no method to tune it by hand.
- All Paths Down Improvements – In an all paths down (APD) situation, the vSphere management service would hang waiting on disk IO, which would cause the vSphere host to disconnect from vCenter and in effect become unmanageable. APD handling has been improved in three ways: transient APD events no longer cause the vSphere management service to hang waiting on disk IO; vSphere HA can move workloads to other hosts if APD detection finds a permanent device loss (PDL) situation; and there is now a way to detect PDL for iSCSI arrays that present only one LUN.
- Storage Hardware/Software Improvements – These include the ability to boot from software FCoE, the addition of jumbo frame support for all iSCSI adapters (software or hardware), and support for 16Gb FC.
- VAAI Improvements – VAAI has added support to allow vCloud Director fast-provisioned vApps to make use of VAAI enabled NAS array-based snapshots.
- vSphere S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) Implementation – vSphere has implemented SMART reporting via the esxcli commands so that SSDs and other disks can report back on their status. In addition, esxcli has been upgraded to include ways to reset specific FC adapters directly, as well as methods to retrieve cached event information such as link-up and link-down events.
- Storage IO Control Statistics and Settings Improvements – Finding the proper value for SIOC has been problematic; it is now possible to set a percentage instead of a millisecond value to determine when SIOC should fire. In addition, SIOC reports stats immediately instead of waiting, which gives Storage DRS statistics right away and improves its decision process. Finally, the observed latency of a VM (a new metric) is available within the vSphere Client performance charts; observed latency is latency within the host, not just latency after storage packets leave the host.
- Storage DRS Improvements – Storage DRS has been improved for workloads using vCloud Director. Linked clones can now be migrated between datastores as long as the target holds either the base disk or a shadow copy of the base disk. Storage DRS is also now used for initial placement of workloads when using vCloud Director.
- Improvements in Datastore Correlation for Non-VASA-Enabled Arrays – For storage devices that do not support VASA, it is difficult to correlate datastores against disk spindles on an array. Datastore correlation has been improved such that vSphere can now detect whether spindles are shared by datastores on the array, regardless of VASA support.
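The two-step reclamation described in the thin provisioned disk item above (wipe, then shrink) can be pictured with a toy model: unused blocks are first marked free, then used blocks are kept contiguous at the front so the free tail can be returned to the array. This is an illustrative simulation under assumed data structures, not VMware code:

```python
# Toy model of space reclamation on a thin provisioned disk.
# NOT VMware code: a hypothetical sketch of the idea that the vSCSI
# layer keeps used blocks contiguous so the unused tail can be
# handed back to the storage hardware.

def wipe(blocks, deleted):
    """Step 1 ('wipe'): mark blocks the guest has deleted as free (None)."""
    return [None if i in deleted else b for i, b in enumerate(blocks)]

def shrink(blocks):
    """Step 2 ('shrink'): compact used blocks to the front and drop the free tail."""
    used = [b for b in blocks if b is not None]
    return used  # everything past len(used) is reclaimed by the array

disk = ["a", "b", "c", "d", "e"]
disk = wipe(disk, deleted={1, 3})  # guest deleted blocks 1 and 3
disk = shrink(disk)                # disk now occupies only 3 blocks
print(disk)                        # ['a', 'c', 'e']
```

The point of the compaction step is that the array only needs to truncate a contiguous tail, rather than punch holes at arbitrary offsets.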
By exposing hardware virtualization (Intel VT/AMD RVI) features as well as more CPU counters and components, VMware has exposed more capability than ever before. Tie this to virtual graphics processing units, and we now have the ability to implement virtualized high-performance and real-time computing environments. Add the storage improvements, and large-scale big data applications as well as high-performance computing environments can be virtualized; both require low-latency networking and storage.
Virtualization has long been the bane of high-performance applications, whether 3D graphics, high-performance computing, big data, or real-time applications. vSphere 5.1 provides a possible solution to these use cases while improving integration with the VMware vCloud Suite.
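The percentage-based SIOC setting mentioned in the storage list above can be pictured as deriving the trigger latency from the point at which a datastore reaches a given fraction of its peak throughput. This is only one plausible reading of the feature, sketched with hypothetical measurements:

```python
# Illustrative sketch only: deriving a SIOC-style latency trigger
# from a percentage of peak throughput. The sweep numbers below are
# hypothetical, not real datastore measurements.

def latency_threshold(samples, pct=0.9):
    """samples: (latency_ms, throughput_iops) pairs from a load sweep.
    Return the lowest latency at which throughput reaches pct of peak."""
    peak = max(iops for _, iops in samples)
    for latency, iops in sorted(samples):
        if iops >= pct * peak:
            return latency

sweep = [(5, 1000), (10, 4000), (20, 8000), (30, 9500), (40, 10000)]
print(latency_threshold(sweep))  # 30 -> fire SIOC at 30 ms for this datastore
```

The appeal of a percentage is that it adapts to the array: a fast all-flash datastore and a slow SATA datastore get different millisecond triggers from the same setting.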
As a key part of the new vCloud Suite, vCloud Director 5.1 gets a bunch of new features, and takes on a new role. The new role is that vCD is built into the suite, and is the layer where the cross-cluster capabilities are implemented. Therefore vCD becomes much less of a Cloud Management solution, and much more of a key part of the platform which implements Virtual Data Centers (VDCs) for customers.
New vCloud Director Functionality
vCloud Director is where a significant part of the new functionality in the vCloud Suite is implemented. The most important feature is VXLAN, which allows for the creation of Virtual Data Centers that span clusters; VXLAN also allows for the vMotion of a VM and its associated storage from one cluster to another.
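VXLAN makes this possible by encapsulating Layer 2 frames in UDP, so a logical network (and hence a VDC) can span clusters without stretching VLANs. A minimal sketch of the 8-byte VXLAN header with its 24-bit network identifier (VNI), as defined in the VXLAN draft (later RFC 7348); this is illustrative, not VMware's implementation:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port (some early stacks used 8472)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte (0x08 = VNI valid),
    3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit value")
    return struct.pack("!B3xI", 0x08, vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Parse the VNI back out of a VXLAN header."""
    flags, word = struct.unpack("!B3xI", header)
    assert flags & 0x08, "VNI-valid flag not set"
    return word >> 8

frame = vxlan_header(5000)
print(len(frame), vxlan_vni(frame))  # 8 5000
```

The 24-bit VNI is the key design choice: it allows roughly 16 million isolated segments, compared with the 4096-segment ceiling of VLAN IDs.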
vCloud Director 5.1
vCloud Director 5.1 Enhanced Networking and Security Features
vCloud Director 5.1 is also where some dramatically enhanced networking and security features are implemented. Many of the vShield security components which one used to have to purchase separately are now included in vCD.
The depth of the new networking functionality is not to be underestimated. It is clear that VMware embarked upon the Software Defined Networking path long before they acquired Nicira. Some other important details include:
- Integrated Profile Driven Storage
- Integrated Storage DRS
- Integrated Snapshot and Revert
- The aforementioned integration of VXLAN
- The ability for VDCs to span clusters
- The bundling of the vShield Security components
vCloud Director contains many of the features that make the vCloud Suite compelling. This will likely force (or entice) many VMware customers and prospects to adopt vCD, which will then simply serve to justify the price of the entire vCloud Suite.
There has been quite a lot of twitter traffic about the FrankenCloud recently: a cloud with more than one type of hypervisor underneath it. One example is to build a cloud using Hyper-V 3 and vSphere, both managed through Microsoft System Center. Another example is to build a cloud using Hyper-V, KVM, and vSphere, all managed through HotLink. But is this a desirable cloud topology?
Microsoft threw down the gauntlet today, right at the feet of Amazon’s AWS, launching a revamped PaaS offering, a brand new IaaS offering (run whatever you want in an Azure-hosted image), and significant partnerships with ecosystem vendors that will add value to Azure and round out its value for Microsoft Azure customers.
We, here at The Virtualization Practice, are getting ready to have a cloud presence. Since we ‘eat our own dog food’ with a 100% virtual environment, we are gearing up to move some of those workloads into a hybrid cloud. We already use some cloud resources, but now is the time to look at other workloads. Our reasons for moving to the cloud are threefold: how can we write about the various aspects of being a tenant in the cloud if we are not one; a recent power outage at the grid level; and an upcoming data center move. The latter two reasons are about business continuity; the first is about what we do. While we already have a cloud running within our own environment, it is time to branch out.