I was tinkering with XenServer the other day. I can hear you asking, "Is that still a thing?" Well, it is, but that's not what I want to talk about today. Instead, a quick tangent: I went looking for a third-party virtual switch for XenServer, but it seems XenServer is a third-rate citizen in this space. There is no Cisco Nexus 1000V available for XenServer, even though Cisco previewed one at Citrix Synergy Barcelona in 2012.
There are a few vSphere features that I found myself taking for granted until new enhancements were built on top of them. How about you? Are there any features you simply don't think about anymore, ones that just work, have been around for a good while, and are baked into everyday best practices? For me, those features are vMotion and High Availability (HA). Both have been enhanced in vSphere 6.0.
During VMware’s online launch event, the company announced the latest release of its flagship product, vSphere 6.0. This release has a lot of great features and enhancements. In this article, I zero in on one specific enhancement: the evolution of vMotion into Long-Distance vMotion.
At Storage Field Day 5, PernixData announced new features for its flagship product, PernixData FVP: support for NFS datastores and in-memory (RAM) acceleration. FVP greatly accelerates storage performance in VMware vSphere environments by leveraging SSD and flash technologies to do read and write caching in a protected, clustered way.
After months of beta testing with more than 12,000 participants, VMware has finally thrown its hat into the hyperconverged virtualization space with the GA release of VMware vSphere 5.5 Update 1. Alongside numerous bug fixes, this release contains the GA code for VMware VSAN. VSAN is a shared-nothing clustered storage technology embedded in the vSphere hypervisor, ESXi, that pools local, direct-attached host storage into reliable, high-performing shared storage for a vSphere cluster. It relies on both traditional rotating disk media and modern flash and solid-state drive (SSD) storage, forming clusters of up to 32 ESXi hosts connected over 1 Gbps or, preferably, 10 Gbps networks.
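One practical consequence of VSAN's replication-based reliability is capacity overhead, a detail worth keeping in mind when sizing those local disks. With a failures-to-tolerate (FTT) policy of n, VSAN keeps n + 1 full mirrors of each object, so raw capacity consumed is roughly the usable figure multiplied by n + 1 (witness components and slack space add a bit more). The function below is a rough back-of-the-envelope sketch, not an official sizing tool; the function name and the default FTT value of 1 are my own illustrative choices.

```python
def raw_capacity_needed(usable_gb, failures_to_tolerate=1):
    """Rough raw-capacity estimate for VSAN's mirrored storage.

    With an FTT policy of n, VSAN stores n + 1 replicas of each
    object, so raw consumption is roughly usable * (n + 1). This
    ignores witness components and recommended slack space, which
    add further overhead on top of this estimate.
    """
    return usable_gb * (failures_to_tolerate + 1)

# 2 TB of usable VM storage under the default FTT=1 policy needs
# roughly 4 TB of raw disk spread across the cluster's hosts.
print(raw_capacity_needed(2048))  # 4096
```

In other words, a cluster that looks generously provisioned on paper can fill up quickly once mirroring is factored in.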