There are a few vSphere features I really found myself taking for granted until enhancements were added to their underlying technology. How about you? Are there any features you simply don’t think about anymore? You know, the ones that just work and have been part of best practices for a good while now? For me, those features are vMotion and High Availability (HA). Both have been enhanced in vSphere 6.0.
During VMware’s online launch event, the company announced the latest release of its flagship product, vSphere 6.0. This release has a lot of great features and enhancements. In this article, I zero in on one specific enhancement: the evolution of vMotion into long-distance vMotion.
On October 9, 2014, EMC announced the release of the first fully software-defined data center, built from products across the EMC Federation of companies:
After months of beta testing with more than 12,000 beta testers, VMware has finally thrown its hat into the hyperconverged virtualization space with the GA release of VMware vSphere 5.5 Update 1. Among numerous bug fixes to vSphere, this release contains the GA code for VMware VSAN. VSAN is a shared-nothing clustered storage technology embedded in the vSphere hypervisor, ESXi, that pools local, direct-attached host storage to provide reliable, high-performing storage to a vSphere cluster. It relies on both traditional rotating disk media and flash/solid-state drive (SSD) storage, forming clusters of up to 32 ESXi hosts connected over 1 Gbps or, preferably, 10 Gbps networks.
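For readers who like to script these things, here is a minimal pyVmomi sketch of what enabling VSAN on an existing cluster can look like through the public vSphere API. The vCenter address, credentials, and cluster name are placeholders, and this is only an illustrative outline under those assumptions, not a step-by-step from VMware:

```python
# Illustrative sketch: enable VSAN on a cluster via pyVmomi.
# vCenter host, credentials, and cluster name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only: skip cert checks
si = SmartConnect(host="vcenter.example.com",     # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Find the target cluster by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")  # placeholder
    view.DestroyView()

    # Turn VSAN on and let hosts auto-claim eligible local disks.
    vsan_cfg = vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=True))
    spec = vim.cluster.ConfigSpecEx(vsanConfig=vsan_cfg)

    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, True))
finally:
    Disconnect(si)
```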
In the world of virtualization storage, it seems all we talk about lately is flash and SSDs. There is a good reason for that. Traditionally, storage capacity and storage performance were directly linked. Sure, you could choose different disk capacities, but in general you had to add capacity in order to add performance, because each disk, each “spindle,” could only support a certain number of I/Os per second, or IOPS. That limit was governed by the mechanical nature of the drives themselves: wait for the actuator arm to seek to a different place on the disk, wait for it to settle after the move, wait for the desired sector to rotate underneath the read/write head, and so on. There’s only so much of that a drive can do in a second, and to do more of it you had to add more drives. Of course, that has drawbacks: increased power draw, more parts and therefore more chances of failure, and higher licensing costs, since many storage vendors charge based on capacity.
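A little back-of-the-envelope math shows how hard that ceiling is. The seek time and RPM below are assumed, ballpark figures for a 15K RPM drive, not measurements:

```python
# Rough per-spindle IOPS ceiling for a rotating disk: each random I/O pays
# an average seek plus, on average, half a rotation of latency.

def hdd_iops(avg_seek_ms: float, rpm: int) -> float:
    """Approximate random IOPS a single drive can sustain."""
    rotational_latency_ms = (60_000 / rpm) / 2     # time for half a rotation
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

def spindles_needed(target_iops: int, per_disk_iops: float) -> int:
    """How many drives you must buy just to hit a performance target."""
    return -(-target_iops // int(per_disk_iops))   # ceiling division

# Assumed figures for a 15K RPM SAS drive:
per_disk = hdd_iops(avg_seek_ms=3.5, rpm=15_000)
print(f"~{per_disk:.0f} IOPS per 15K spindle")
print(f"{spindles_needed(20_000, per_disk)} spindles to reach 20,000 IOPS")
```

With those numbers you land somewhere around 180 IOPS per drive, which is why hitting tens of thousands of IOPS meant buying shelves full of spindles whether or not you needed the capacity.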
Flash memory takes most of what we know about the physics of storage and throws it away. Because there are no moving parts, a seek on a solid-state disk is a purely logical operation. There are no heads, no sectors, no rotational speeds. It’s all the speed of light and however fast the controller can go. As a result, flash can deliver enormous numbers of IOPS, and if implemented well, it decouples storage performance from storage capacity. You save power, you save data center space, you save money on licensing, and your workloads run faster.
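To put that decoupling in concrete terms, here is a tiny comparison in the same spirit as the spindle math above. The per-device IOPS and capacity figures are assumed round numbers for illustration, not benchmarks of any particular product:

```python
# Illustrative (assumed) device figures, not benchmarks:
HDD = {"iops": 180, "capacity_tb": 0.6}      # e.g. a 15K RPM 600 GB SAS drive
SSD = {"iops": 40_000, "capacity_tb": 0.8}   # e.g. an enterprise SSD

def devices_for(target_iops: int, dev: dict) -> int:
    """Devices required to meet an IOPS target (ceiling division)."""
    return -(-target_iops // dev["iops"])

target = 50_000
for name, dev in (("HDD", HDD), ("SSD", SSD)):
    n = devices_for(target, dev)
    print(f"{name}: {n} devices, {n * dev['capacity_tb']:.1f} TB "
          f"purchased to get {target} IOPS")
```

With spinning disks you end up buying hundreds of terabytes you may never fill just to get the performance; with flash, a couple of devices meet the same target, so capacity and performance become separate purchasing decisions.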