Storage Networking

Storage Networking focuses on virtualizing the storage and the SAN while collapsing and simplifying the management and performance of storage. This includes the role of storage virtualization in the Software Defined Data Center, and the comparison of pooled local storage options with traditional network attached storage (NAS) and Fibre Channel attached storage. Covered vendors include VMware, EMC, IBM, Dell, HP, Cisco, Tintri, and Nutanix.

Hard Disk Drives (HDD) for virtual environments (Part I)

StorageNetworkingBy Greg Schulz, Server and StorageIO @storageio

Unless you are one of the few who have gone all solid-state devices (SSDs) for your virtual environment, hard disk drives (HDDs) still have a role. That role might be primary storage for your VMs and/or their data, a destination target for backups, snapshots or archiving, or a work and scratch area. Or perhaps you have some HDDs as part of a virtual storage appliance (VSA), storage virtualization, virtual storage or storage hypervisor configuration. Even if you have gone all SSD for your primary storage, you might be using disk as a target for backups, complementing or replacing tape and clouds. On the other hand, maybe you have a mix of HDD and SSD for production; what are you doing with your test, development or lab systems, both at work and at home?

Despite the myth of being dead or having been replaced by SSDs (granted their role is changing), HDD as a technology continues to evolve in many areas.

General storage characteristics include:

  • Internal or external to a server or, dedicated or shared with others
  • Performance in bandwidth, activity, or IOPS and response time or latency
  • Availability and reliability, including data protection and redundant components
  • Capacity or space for saving data on a storage medium
  • Energy and economic attributes for a given configuration
  • Functionality and additional capabilities beyond read/write or storing data

Capacity is increasing in terms of areal density (the amount of data stored in a given amount of space on HDD platters), as well as the number of platters stacked into a given form factor. Today there are two primary form factors for HDDs as well as SSDs (excluding PCIe cards): 3.5” and 2.5” small form factor (SFF) widths, available in various heights.

Mix of Hard Disk Drives
Mix of HDDs size, types and form factors

On the left is a 2.5” 1.5TB Seagate Freeplay HDD with a USB or eSATA connection that I use for removable media. On the right are a couple of 3.5” 7,200 RPM HDDs of various capacities; in the center back is an older, early generation Seagate Barracuda. In the middle is a stack of HDD, HHDD and SSD 2.5” devices, including thin 7mm and 9mm and thick 15mm heights. Note that thick and thin refer to the height of the device, as opposed to thin or thick provisioned.

Hard Disk Drive Sizes
Top thin 7mm, middle 9mm, and bottom 15mm (thick)

In addition to form factor, capacity increases and cost reductions, other improvements include reliability in terms of mean time between failure (MTBF) and annual failure rate (AFR). There have also been some performance enhancements across the various types of HDDs, along with energy efficiency and effectiveness improvements. Functionality has also been enhanced with features such as self-encrypting disks (SEDs) or full disk encryption (FDE).

Data is accessed on the disk storage device by a physical and a logical address, sometimes known as a physical block number (PBN) and a logical block number (LBN). The file system or an application performing direct (raw) I/O keeps track of what storage is mapped to which logical blocks on what storage volumes. Within the storage controller and disk drive, a mapping table is maintained to associate logical blocks with physical block locations on the disk or other medium such as tape.
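The logical-to-physical mapping described above can be pictured as a simple lookup table. Here is a minimal, hypothetical Python sketch (all names invented for illustration, not any drive's actual firmware interface):

```python
# Hypothetical sketch of the mapping table a drive or storage controller
# maintains between logical block numbers (LBNs) and physical block
# numbers (PBNs) on the medium.

class BlockMap:
    """Maps logical block numbers (LBNs) to physical block numbers (PBNs)."""

    def __init__(self):
        self._map = {}          # LBN -> PBN

    def write(self, lbn, pbn):
        # Associate a logical block with a physical location on the medium.
        self._map[lbn] = pbn

    def resolve(self, lbn):
        # Translate a logical address into the physical address to access,
        # or None if the logical block has never been mapped.
        return self._map.get(lbn)

bm = BlockMap()
bm.write(lbn=0, pbn=2048)   # file system block 0 lives at physical block 2048
print(bm.resolve(0))        # -> 2048
```

The file system or application only ever sees the logical side; remapping the physical side (for example, around a bad block) is invisible above this table.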

Hard disk drive storage organization
Hard disk drive storage organization

When data is written to disk, regardless of whether it is an object, file, Web page, database, or video, the lowest common denominator is a block of storage. Blocks of storage have traditionally been organized into 512 bytes, which aligned with memory page sizes. While 512-byte blocks and memory page sizes are still common, given larger-capacity disk drives as well as larger storage systems, 4KB (e.g., 8 × 512 bytes, or 4,096 bytes) block sizes are appearing, known as Advanced Format (AF). The transition to 4KB AF is occurring over time, with some HDDs and SSDs supporting it now while emulating 512-byte sectors. As part of the migration to AF, some drives have the ability to do alignment work in the background, off-loading the server or external software from that requirement. Also related to HDD block size are optional format sizes, such as the 528-byte sectors used by some operating systems or storage systems.
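The alignment issue mentioned above is easy to check arithmetically: a partition that starts on a 512-byte sector boundary that is not also a 4KB boundary forces an AF drive into read-modify-write cycles. A small sketch:

```python
SECTOR = 512          # logical sector size emulated by many AF drives (512e)
AF_BLOCK = 4096       # physical block size on an Advanced Format drive

def is_aligned(start_sector: int) -> bool:
    """True if a partition starting at this 512-byte logical sector
    lands on a 4KB physical block boundary."""
    return (start_sector * SECTOR) % AF_BLOCK == 0

print(is_aligned(63))    # legacy DOS partition offset -> False (misaligned)
print(is_aligned(2048))  # common modern 1MiB offset   -> True (aligned)
```

A misaligned start (sector 63, a common legacy default) straddles physical 4KB blocks, which is exactly the case that background alignment features or host-side alignment tools exist to fix.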

Larger block sizes enable more data to be managed or tracked in the same footprint by requiring fewer pointers or directory entries. For example, with a 4KB block size, operating systems or storage controllers can keep track of eight times as much data in the same footprint. Another benefit is that, with data access patterns changing along with larger I/O operations, a single 4KB operation is more efficient than the equivalent 8 × 512-byte operations for the same amount of data to be moved.
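The eight-times figure falls straight out of the arithmetic. As a worked example (using a 4TB drive, a capacity mentioned later in this post):

```python
capacity = 4 * 10**12             # a 4TB drive, in bytes

blocks_512 = capacity // 512      # blocks to track with 512-byte sectors
blocks_4k  = capacity // 4096     # blocks to track with 4KB (AF) sectors

print(blocks_512)                 # 7,812,500,000 entries at 512 bytes
print(blocks_4k)                  # 976,562,500 entries at 4KB
print(blocks_512 // blocks_4k)    # -> 8, i.e. one eighth the pointers/entries
```

Nearly eight billion tracking entries shrink to under a billion, which is the metadata footprint reduction the paragraph above describes.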

At another detailed layer, the disk drive or flash solid-state device also handles bad block vectoring or replacement transparently to the storage controller or operating system. Note that this form or level of bad block repair is independent of upper-level data protection and availability features, including RAID, backup/restore, replication, snapshots, or continuous data protection (CDP), among others.

There are also features to optimize HDDs for working with RAID systems, or for doing for file copies such as for use with cloud and object storage systems. Some HDDs are optimized for start/stop operations found in laptops along with vibration damping, while others support continuous operation modes. Other features include energy management with spin down to conserve power, along with intelligent power management (IPM) to vary the performance and amount of energy used.

In addition to drive capacities that range up to 4TB on larger 3.5” form factor HDDs, there are also different sizes of DRAM buffers (measured in MBytes) available on HDDs. Hybrid HDDs (HHDDs), in addition to having DRAM buffers, also have SLC or MLC nand flash (measured in GBytes) for even larger buffers, either read or read/write. For example, the HHDDs that I have in some of my laptops as well as VMware ESXi servers have 4GB of SLC for a 500GB 7,200 RPM device (Seagate Momentus XT I), or 750GB with 8GB of SLC (Seagate Momentus XT II), and are optimized for reads. In the case of an HHDD in my ESXi server, I used a trick I learned from Duncan Epping to make a Momentus XT appear to VMware as an SSD. Other performance optimization options include native command queuing, and target mode addressing, which in turn gets mapped into, for example, VMware device names (e.g. vmhba0:C0:T1:L0).
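That VMware runtime name encodes the addressing just mentioned: adapter, channel, target and LUN. A small, hypothetical parser (the helper name is invented; the path format itself is VMware's standard vmhbaA:Cc:Tt:Ll convention) makes the structure explicit:

```python
def parse_vmware_path(path: str) -> dict:
    """Split a VMware runtime device name such as 'vmhba0:C0:T1:L0'
    into its adapter, channel, target and LUN components."""
    adapter, channel, target, lun = path.split(":")
    return {
        "adapter": adapter,          # host bus adapter, e.g. vmhba0
        "channel": int(channel[1:]), # C<n>: channel number
        "target": int(target[1:]),   # T<n>: target (the device's address)
        "lun": int(lun[1:]),         # L<n>: logical unit number
    }

print(parse_vmware_path("vmhba0:C0:T1:L0"))
# {'adapter': 'vmhba0', 'channel': 0, 'target': 1, 'lun': 0}
```

So the Momentus XT in the example above would show up as target 1, LUN 0 on the first channel of adapter vmhba0.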

Stack of Hard Disk Drives
A stack of 2.5” HDDs, HHDDs and SSDs.

Other options for HDDs include rotational speed, with 5,400 (5.4K) revolutions per minute (RPM) at the low end and 15,000 (15K) RPM at the high end, with 7.2K and 10K speeds also available. Interfaces for HDDs include SAS, SATA and Fibre Channel (FC) operating at various speeds. If you look or shop around, you might find some parallel ATA or PATA devices still available should you need them for use or nostalgia. FC HDDs operate at 4Gb, while SAS and SATA devices can operate at up to 6Gb with 3Gb and 1.5Gb backward compatibility. Note that, if supported by applicable adapters, controllers and enclosures, SAS can also operate in wide modes. Check out SAS SANs for Dummies to learn more about SAS, which also supports attachment of SATA devices.
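A note on those gigabit interface speeds: the usable data rate is lower than the line rate, because these 1.5/3/6Gb interfaces use 8b/10b encoding (10 bits on the wire per 8 bits of data). A rough back-of-envelope sketch, ignoring protocol overhead:

```python
def usable_mb_per_s(gbit_line_rate: float) -> float:
    """Rough usable throughput in MB/s for a link using 8b/10b encoding:
    strip the 10-bits-per-byte encoding overhead, ignore protocol framing."""
    bits_per_second = gbit_line_rate * 1e9
    bytes_per_second = bits_per_second / 10   # 10 encoded bits carry one byte
    return bytes_per_second / 1e6

print(round(usable_mb_per_s(6.0)))   # 6Gb SAS/SATA -> ~600 MB/s
print(round(usable_mb_per_s(3.0)))   # 3Gb          -> ~300 MB/s
print(round(usable_mb_per_s(1.5)))   # 1.5Gb        -> ~150 MB/s
```

This is why a 6Gb SAS or SATA link is commonly quoted as roughly 600 MB/s rather than 750 MB/s; real-world numbers are lower still once command and framing overhead are counted.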

Ok, did you catch that I did not mention USB or iSCSI HDDs? Nope, that was not an oversight: while you can get packaged HDDs or SSDs with USB, iSCSI, FireWire or Thunderbolt attachments, they utilize either a SAS or SATA HDD. Inside the packaging will be a bridge or gateway card or adapter that converts from, for example, SATA to USB. In addition to packaging, converters are also available as docking stations, enclosures or cables. For example, I have some Seagate GoFlex USB-to-SATA and eSATA-to-SATA cables for attaching different devices as needed to various systems.

Hard Disk Drive Cables
Top eSATA to SATA and bottom USB to SATA cable

Besides drive size (form factor), space capacity, interface and speed, along with features, there are other differences: enterprise class (both high performance and high capacity) vs. desktop and laptop, and internal vs. external use. These drives can be available via OEMs (server and storage vendors) or systems integrators with their own special firmware, or as generic devices. What this means is that not all SATA or SAS HDDs are the same, from enterprise to desktop, across both 2.5” and 3.5” form factors. Even the HDDs that you can buy, for example, from Amazon will vary based on the above and other factors.

So which HDD is best for your needs?

That will depend on what you need or want to do among other criteria that we will look at in a follow-up post.

Ok, nuff said for now.

VMware’s Fourth Quarter and 2012 Financial Results

VMware100x30VMware has announced financial results for the fourth quarter of 2012 and for the entire year of 2012. Fourth quarter revenue came in at $1.29B growing 22% over the fourth quarter of 2011. Full year revenue came in at $4.61B also growing 22% over 2011. The full results are detailed in the table below:

Continue reading VMware’s Fourth Quarter and 2012 Financial Results

Are you using or considering implementation of a storage hypervisor?

StorageNetworkingBy Greg Schulz, Server and StorageIO @storageio

Depending upon what your or somebody else’s definition of a storage hypervisor is, you may or may not be using one or realize it.

If your view of a storage hypervisor is a storage IO optimization technology to address performance and other issues with virtual machines (VMs) and their hypervisors, such as Virsto or ScaleIO along with others, you might be calling those storage hypervisors as opposed to middleware, management tools, drivers, plug-in, shims, accelerators, or optimizers.

On the other hand, if you are using a storage solution that supports multi-tenant capabilities, such as those from EMC, HP 3PAR, IBM or NetApp (MultiStore), you may or may not be using a storage hypervisor. If your view of storage hypervisors and storage virtualization is based on hardware that can support third-party storage arrays or JBOD shelves, then solutions from EMC (VMAX and ATMOS), HDS, IBM and NetApp among others are storage hypervisors, given their virtualization abstraction capabilities.

On the other hand, if you are using tin wrapped software on an appliance-based technology that abstracts underlying storage, you might be using a storage hypervisor. Tin wrapped software refers to software that is architecturally independent of the underlying hardware but is made available as an appliance on a physical server with storage. One benefit is that, for organizations who have different purchasing or acquisition requirements for software vs. hardware, software sold as an appliance may count as hardware. Another benefit of tin wrapped software is the pre-configuration and setup, and that somebody else does the testing or systems integration.

Some tin wrapped software or appliances can architecturally support different vendors' underlying storage technology; however, for sales and marketing purposes they may be limited to a single vendor's solution, while others can be more open. Storage options can include vendor-specific or third-party storage systems or JBOD attached via iSCSI, SAS, SCSI_FCP (e.g. Fibre Channel or FC), FCoE, InfiniBand SRP (SCSI RDMA Protocol) or NFS and CIFS NAS. There are many examples of tin wrapped software, more commonly known as appliances, including Actifio, EMC (DLm, Vplex, ATMOS and RecoverPoint), FalconStor, InMage, and NetApp StorageGRID among others. Let us not forget about physical server or virtual machine based volume managers that can also virtualize or abstract underlying storage resources.

What if you are using a virtual storage appliance (VSA) like the VMware vSphere Storage Appliance, HP (LeftHand) or NetApp Edge (Ontap) among many others? Alternatively, how about storage systems that also host virtual machines, such as Pivot3, SimpliVity or Nutanix among others? Then you too may be using a storage hypervisor.

How about software defined storage (SDS), the up-and-coming buzzword bandwagon description playing off software-defined networking (SDN); would those solutions or services be a storage hypervisor?

Granted, storage hypervisor is a newer, trendier, cooler name and term for what some might see as old, tired and boring (e.g. storage virtualization or virtual storage). On the other hand, if storage hypervisor is not new enough, then feel free to jump on the software defined storage (SDS) bandwagon. In other words, what some are calling storage hypervisors have been called storage virtualization (figure 1), virtual storage, and management tools, among other names, for several years if not decades now.

Storage Hypervisor: Many Faces

Storage virtualization, virtual storage and storage hypervisors, like server hypervisors and virtualization, can be implemented in different ways and in various locations. They provide abstraction of physical resources along with emulation as basic tenets, which can then be used for enabling aggregation (consolidation or pooling), agility and flexibility, BC and DR, replication, snapshots and other data services.

Depending on the specific vendor implementation, underlying storage might be heterogeneous (any vendor's products), including JBOD or RAID based systems, or homogeneous (a specific vendor's products). Physical storage can be internal dedicated direct attached (DAS); external dedicated or shared DAS; shared iSCSI, SAS, FC, FCoE or IBA SRP for block access; or NAS file or object access, using SSDs, HDDs or in some cases PCIe flash cards, tape, and cloud services such as Amazon S3 or Glacier, HP or Rackspace among others.

Functionality can also vary, with some solutions being very thin or light, relying on and leveraging underlying storage system functionality (such as EMC Vplex), while others are rich and deep, essentially able to replace or provide the same capabilities as a traditional storage system (e.g. Actifio, FalconStor, IBM SVC and DataCore among others).

In addition to being implemented in different locations, storage virtualization or storage hypervisors can be in-band, out-of-band, or fast-path control-path. In-band architectures are, as their name implies, solutions (software, appliance or storage system) that sit between the application servers and the underlying storage, in the data stream. Most solutions are in-band, with some being fast-path control-path. Sitting in the data path adds some overhead in exchange for extra functionality. Some vendors might claim that there is no overhead, added latency or performance impact; however, I tend to believe those who claim minimal or no perceived impact.

Out-of-band, as its name implies, relies on an additional appliance and/or drivers and agents installed on application servers. Another variation is fast-path control-path, which strikes a balance by getting involved in the data stream only when needed, then getting out of the way when not. In-band architectures are more common, as they are relatively easier to implement, whereas fast-path control-path has been slower to evolve given dependencies on switches and other technology. Fast-path control-path solutions tend to be more focused on adding value and leveraging underlying storage system capabilities, such as EMC Vplex, vs. in-band systems that can be used to replace storage systems.
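To make the in-band idea concrete, here is a minimal sketch (all class and method names invented for illustration) of an in-band virtualization layer: every I/O passes through the abstraction, which remaps the virtual volume to a backend array before forwarding the request. This is where the data-path overhead comes from.

```python
# Minimal sketch of an in-band virtualization layer: the virtualizer sits
# between the application server and the physical storage, remapping each
# request in the data path before forwarding it.

class Backend:
    """Stands in for a physical storage array."""
    def __init__(self, name):
        self.name = name

    def read(self, bvol, lba):
        # Pretend to service the I/O; return where it was directed.
        return f"{self.name}:{bvol}@{lba}"

class InBandVirtualizer:
    def __init__(self):
        self.vol_map = {}   # virtual volume -> (backend array, backend volume)

    def map_volume(self, vvol, backend, bvol):
        self.vol_map[vvol] = (backend, bvol)

    def read(self, vvol, lba):
        backend, bvol = self.vol_map[vvol]   # remap inside the data path
        return backend.read(bvol, lba)       # then forward to physical storage

v = InBandVirtualizer()
v.map_volume("vol1", Backend("arrayA"), "lun7")
print(v.read("vol1", 100))   # -> arrayA:lun7@100
```

An out-of-band or fast-path control-path design would keep the same mapping table but hand it to the host or switch, so that most I/O bypasses the virtualizer entirely.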

Keep in mind, feature, functionality, and vendor-specific capabilities and preferences aside, the fundamental abilities of virtualization are abstraction and emulation. From those two basic capabilities come the abilities to combine, to add more features and functionality, and to deploy in different places for various reasons (consolidation, masking hardware complexities, flexibility, interoperability, improved hardware use). In other words, the basic and fundamental capabilities often discussed as features or benefits of virtualization (server, storage, networking, IO and desktop) tie back to abstraction and emulation.

Both server and storage virtualization are moving into life beyond consolidation, where the focus expands from consolidation and pooling to enabling flexibility for different benefits. In the case of storage virtualization a decade ago, a primary value proposition of proponents was LUN or volume pooling, or using commodity hardware. While there have been customer success stories, many have found more value in the increased flexibility of an abstraction (e.g. virtualization or storage hypervisor) layer across homogeneous (same vendor, or make and model) or heterogeneous (different vendors, makes and models) storage systems.

This gets back to what was mentioned earlier: if your view of a storage hypervisor or storage virtualization solution is software based, then you will probably gauge success based on those solutions. On the other hand, if your view of storage virtualization and storage hypervisors is more pragmatic, including homogeneous solutions using either hardware or software, then you will see a different set of success stories.

Here is my point: there is a lot of virtual storage marketing hype around what is or is not a storage hypervisor, similar to what has occurred in the past around what is or is not storage virtualization. Consequently, depending on your preferences, sphere of influence, or who you listen to, believe, sell for or promote, what is or is not a storage hypervisor will vary, the same as what is or is not storage virtualization.

Now, if you are a vendor, VAR, pundit, surrogate or anybody else who is not feeling comfortable right about now, relax for a moment. Take a deep breath and count to some number (you decide). Then continue reading before you dispatch your truth squads to set me straight for raining on your software defined virtual storage hypervisor parade ;).

There are many forms of storage virtualization, including aggregation or pooling, emulation, and abstraction of different tiers of physical storage, providing transparency of physical resources. Storage virtualization can be found in different locations (figure 1): server-based software; the network or fabric, using appliances, routers, or blades; or software in switches or switching directors. Storage virtualization (excuse me, storage hypervisor) functionality can thus run as software on application servers (physical or virtual machines) or operating systems, in network-based appliances, switches, or routers, as well as in storage systems.

Various storage virtualization services are implemented in different locations to support various tasks. Storage virtualization functionality includes pooling or aggregation for both block and file-based storage, virtual tape libraries for coexistence and interoperability with existing IT hardware and software resources, global or virtual file systems, transparent data migration of data for technology upgrades, maintenance, and support for high availability, business continuance, and disaster recovery.

Storage virtualization functionalities include:

  • Pooling or aggregation of storage capacity
  • Transparency or abstraction of underlying technologies
  • Agility or flexibility for load balancing and storage tiering
  • Automated data movement or migration for upgrades or consolidation
  • Heterogeneous snapshots and replication on a local or wide area basis
  • Thin and dynamic provisioning across storage tiers
  • And many others
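One of the items above, thin and dynamic provisioning, can be sketched in a few lines. This is a hedged illustration (invented class names, capacity tracked as a simple block set), not any vendor's implementation: a volume advertises a large virtual size but consumes physical capacity only as blocks are first written.

```python
# Sketch of thin provisioning: the host sees the full virtual size,
# while physical capacity is allocated only on first write to a block.

class ThinVolume:
    def __init__(self, virtual_gb):
        self.virtual_gb = virtual_gb     # size advertised to the host
        self.allocated = set()           # block numbers actually backed

    def write(self, block):
        self.allocated.add(block)        # allocate physical space on first write

    def used_blocks(self):
        return len(self.allocated)       # physical consumption, not virtual size

vol = ThinVolume(virtual_gb=1000)        # looks like a 1TB volume to the host
vol.write(0)
vol.write(1)
vol.write(0)                             # rewriting a block allocates nothing new
print(vol.virtual_gb, vol.used_blocks()) # 1000 advertised, only 2 blocks backed
```

The gap between advertised and consumed capacity is exactly what makes over-subscription possible, and why monitoring actual physical usage matters when thin provisioning spans storage tiers.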

Aggregation and pooling for consolidation of LUNs, file systems, and volume pooling and associated management are intended to increase capacity use and investment protection, including supporting heterogeneous data management across different tiers, categories, and price bands of storage from various vendors. Given the focus on consolidation of storage and other IT resources along with continued technology maturity, more aggregation and pooling solutions can be expected to be deployed as storage virtualization matures.

While aggregation and pooling are growing in popularity in terms of deployment, most current storage virtualization solutions are forms of abstraction. Abstraction and technology transparency include device emulation, interoperability, coexistence, backward compatibility, transitioning to new technology with transparent data movement, and migration and supporting HA, BC, and DR. Some other types of virtualization in the form of abstraction and transparency include heterogeneous data replication or mirroring (local and remote), snapshots, backup, data archiving, security, compliance, and application awareness.

This is not to say that there are not business cases for pooling or aggregating storage; rather, there are other areas where storage virtualization techniques and solutions can be applied. This is not that different from server virtualization expanding beyond just consolidation. The next wave (we are in it now) for server, storage and other forms of virtualization is life beyond consolidation, where the focus expands to enablement and agility in addition to aggregation.

The best type of storage virtualization and the best place to have the functionality will depend on your preferences. The best solution and approach are the ones that enable flexibility, agility, and resiliency to coexist with or complement your environment and adapt to your needs. Your answer might be one that combines multiple approaches, as long as the solution works for you and not the other way around.

How about volume managers and global name spaces or file systems? Again, IMHO, yes; after all, a common form of storage virtualization is the volume manager, which abstracts physical storage from applications and file systems. This is about where some become as relaxed as a long-tailed cat in a room full of rocking chairs (Google it if you do not know) when virtualization, let alone storage hypervisors, is mentioned alongside volume managers, global name spaces or global file systems. In addition to providing abstraction of different types, categories, and vendors' storage technologies, volume managers can also be used to support aggregation, performance optimization, and most other functions commonly found in storage or virtual storage appliances or systems. For example, volume managers can aggregate multiple types of storage into a single large logical volume group that is subdivided into smaller logical volumes for file systems.

In addition to aggregating physical storage, volume managers can do RAID mirroring or striping for availability and performance. Volume managers also give a layer of abstraction to allow different types of physical storage to be added and removed for maintenance and upgrades without impacting applications or file systems. Common management functions supported by volume managers include storage allocation, provisioning, and data protection operations, such as snapshots and replication all of which vary by specific vendor implementation. File systems, including clustered and distributed systems, can be built on top of or in conjunction with volume managers to support scaling of performance, availability, and capacity.
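The volume group idea above reduces to a small capacity-accounting model. A hedged sketch (invented names, capacities in GB; real volume managers track extents, not plain integers):

```python
# Sketch of a volume manager's abstraction: physical devices pool into a
# volume group, which is carved into logical volumes that can be larger
# than any single underlying disk.

class VolumeGroup:
    def __init__(self, devices):
        # devices: {device_name: capacity_gb}; group capacity is the sum
        self.capacity = sum(devices.values())
        self.free = self.capacity
        self.lvs = {}                        # logical volume -> size_gb

    def create_lv(self, name, size_gb):
        if size_gb > self.free:
            raise ValueError("not enough free space in volume group")
        self.free -= size_gb
        self.lvs[name] = size_gb

vg = VolumeGroup({"sda": 500, "sdb": 500, "sdc": 1000})
vg.create_lv("lv_vmfs", 1200)        # a logical volume larger than any one disk
print(vg.capacity, vg.free)          # 2000 800
```

Note that lv_vmfs at 1,200GB could not exist on any single member disk; the abstraction layer is what allows it to span devices, and what lets a device be swapped out underneath without the file system above noticing.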

What about OpenStack Swift or other cloud and object storage software; can they qualify as storage hypervisors? My guess is that if you are looking at things from the perspective of server virtualization, or a VMware, Microsoft Hyper-V, KVM or Xen viewpoint, the cloud and object solutions might not qualify for storage hypervisor hype status. However, keep in mind that most of these solutions support various vendors' underlying hardware (software, servers, storage) while providing an abstraction layer, adding capabilities or complementing those enabled by the underlying technology. Thus, IMHO, given what some have stretched to call or include as storage hypervisors, virtual storage or storage virtualization, sure, the cloud and object solutions easily qualify. Examples include OpenStack Swift, Basho Riak CS, EMC ATMOS (which is now available as a VSA, as well as supporting external third-party storage), Ceph and Cleversafe among many others, based on their ability to abstract, emulate and enable added services.

How about gateways, including disk and tape libraries, that can support and virtualize different types of local and remote cloud storage services while emulating different devices? Sure. If they are providing some emulation, abstraction and added capabilities, whether running on a dedicated server (as an appliance or tin wrapped software), in a virtual machine, or as a VSA, why not? For example, the Amazon AWS storage gateway, or those from Zadara, EMC (Cloud Tiering Appliance), Gladinet, Avere, Microsoft StorSimple, Nasuni, or TwinStrata among others, not to mention VTLs and data protection appliances from Actifio, EMC, IBM, HP, FalconStor, Fujitsu and Quantum among others.

What this all means

Keep in mind that there are many different types of storage devices, systems, and solutions for addressing various needs, with varying performance, availability, capacity, energy, and economic characteristics. Likewise, there are different tools, techniques and approaches for abstracting storage resources for various purposes, from emulation to agility to consolidation to interoperability, among others.

Rest assured, there is plenty of hype around storage hypervisors, including what is or is not one, along with storage virtualization and virtual storage. However, there is also plenty of reality in and around storage hypervisors, virtual storage and storage virtualization, all of which can be used for different tasks and implemented in various ways with diverse features, functionality and architectures.

Storage virtualization (and storage hypervisor) considerations include:

  • What are the various application requirements and needs?
  • Will it be used for consolidation or facilitating IT resource management?
  • What other technologies are in place or planned?
  • What are the scaling (performance, capacity, availability) needs?
  • Will the point of vendor lock-in be shifting or costs increasing?
  • What are some alternative and applicable approaches?
  • How will a solution scale with stability (performance, availability, capacity)?
  • What management tools are provided, along with plug-ins and feeds for other tools?
  • What are the hardware and software dependencies, service and support?
  • Who is called and will answer the phone when something breaks?
  • What APIs and interfaces are supported (VAAI, VASA, VADP, ODX, CDMI)?
  • Are any drivers or other software required on servers accessing the storage?
  • What does the vendor interoperability certified tested support matrix look like?

The above is not an exhaustive list; rather, these are things to think about and consider, which will vary based on your needs, preferences and wants. Take a step back and understand what it is that you need (requirements) vs. what you want (nice-to-haves or preferences) to meet different scenarios. Some of the tools and technologies mentioned, among their peers, have the flexibility to be used in different ways; however, just because something can be stretched or configured to do a task does not mean it is the best fit for a given situation. In addition, when it comes to storage hypervisors, virtual storage and storage virtualization, look for solutions that work for you, vs. you having to work for them. This means that they should remove or mask complexity instead of adding more layers to manage and take care of.

Storage Hypervisor Wrap-up

In the end, there is virtual storage, storage virtualization, storage hypervisors, hyped storage, along with the growing trend to software define everything as part of playing buzzword bingo.

Wonder if 2013 will be the year of software defined virtual storage hypervisor with cloud (public, private and hybrid) multi-tenant capabilities. In the meantime, welcome to the wide world of storage hypervisors, virtual storage, storage virtualization and related topics, themes, technologies and services for cloud, virtual and data storage networking environments.

Ok, nuff said (for now).

A Management Pack Strategy for VMware vCenter Operations?

VirtualizationManagementIconIncluded in the management announcements made at VMworld Barcelona was the announcement of the EMC Storage Analytics Suite, which was covered in “VMware and Quest Join the Infrastructure Performance Management Party“. The product is interesting, but could this be the start of a management pack strategy for VMware vCenter Operations? Continue reading A Management Pack Strategy for VMware vCenter Operations?

VMworld 2012: Important New Features in vSphere 5.1

VMware100x30While not a major version release (we will have to wait for 6.0 next year for that), the new 5.1 version of the VMware products contains some significant new functionality, in addition to the packing of all of the components into the vCloud Suite.

New Features in vSphere 5.1

  • User Access – There is no longer a dependency on a shared root account. Local users assigned administrative privileges automatically get full shell access
  • Auditing – All host activity from both the shell and the Direct Console User Interface is now logged under the account of the logged in user
  • Monitoring – Support is added for SNMPv3. The SNMP agent has been unbundled from the VMkernel and can now be independently updated.
  • vMotion – A vMotion and a Storage vMotion can be combined into one operation. This allows a VM to be moved between two hosts or clusters that do not have any shared storage.
  • New Windows Support – Support for both the Desktop and Server Editions of Windows 8/2012
  • Hardware Accelerated 3D Graphics – Teaming up with NVIDIA, vSphere can now map a vGPU to each VM on a system. Not only does this feature accelerate 3D graphics, but it also provides a GPU for high performance computing needs
  • Improvements in Virtual Hardware Virtualization Support – This brings Intel VT/AMD RVI features further into the virtual machine, which will improve virtualization within virtualization. In addition, more low-level CPU counters are exposed, which can be further used for high performance computing and real-time style applications.
  • Agentless Antivirus and Antimalware – vShield Endpoint is now included in vSphere 5.1 and offloads antivirus and antimalware processing inside virtual machines to a secure dedicated virtual appliance delivered by VMware partners. This change lowers the cost of entry for agentless antivirus and antimalware.
  • New 64-vCPU Support – Virtual machines running on a vSphere 5.1 host can be configured with up to 64 virtual CPUs and 1TB of RAM.
  • Auto-Deploy – Auto-Deploy is extended with two new modes, “stateless caching” and “stateful installs”. In addition the number of concurrent reboots per Auto-Deploy host has been increased to 80
  • SR-IOV Support – Single Root I/O Virtualization allows certain Intel NICs to transfer data directly into the memory space of a virtual machine without any involvement from the hypervisor. See this Intel video
  • Space Reclaiming Thin Provisioned Disks – These types of disks add the ability to reclaim deleted blocks from existing thin provisioned disks while the VM is running. Reclaiming space is a two-part function: first wiping the disk, marking unused blocks as free, and then shrinking the disk. These two features have been a part of VMware Tools for a number of years but now work differently for thin provisioned disks. The underlying hardware is not initially a part of the reclamation process. Instead, the vSCSI layer within ESX reorganizes unused blocks to keep the used part of the thin provisioned disk contiguous. Once the unused parts are at the end of the thin provisioned disk, the hardware becomes involved.
  • Tunable Block Size – Normally thin provisioned disks use a fixed 4KB block size; this block size can now be tuned indirectly, as it is based on the requirements of the underlying storage array. There is no method to tune it by hand.
  • All Paths Down Improvements – Previously, in an all paths down (APD) situation, the vSphere management service would hang waiting on disk IO, which caused the vSphere host to inadvertently disconnect from vCenter and in effect become unmanageable. APD handling has been improved in three ways: transient APD events no longer cause the vSphere management service to hang waiting on disk IO; vSphere HA can move workloads to other hosts if APD detection finds a permanent device lost (PDL) situation; and PDL can now be detected for iSCSI arrays that present only one LUN.
  • Storage Hardware/Software Improvements – These improvements include the ability to boot from software FCoE, the addition of jumbo frame support for all iSCSI adapters (software or hardware), and support for 16Gb FC.
  • VAAI Improvements – VAAI has added support to allow vCloud Director fast-provisioned vApps to make use of VAAI enabled NAS array-based snapshots.
  • vSphere S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) Implementation – vSphere has implemented SMART reporting via the esxcli commands so that SSDs and other disks can report back on their status. In addition, esxcli has been upgraded to include ways to reset specific FC adapters directly as well as methods to retrieve cached event information, such as link-up and link-down events.
  • Storage IO Control Statistics and Settings Improvements – Finding the proper value for SIOC has been problematic; it is now possible to set a percentage instead of a millisecond value to determine when SIOC should fire. In addition, SIOC reports statistics immediately instead of waiting, which gives Storage DRS statistics right away and improves its decision process. Finally, the observed latency of a VM (a new metric) is available within the vSphere Client performance charts. Observed latency includes latency within the host, not just latency after storage packets leave the host.
  • Storage DRS Improvements – Storage DRS has been improved for workloads using vCloud Director. Linked clones can now be migrated between datastores when either the base disk or a shadow copy of the base disk is present. Storage DRS is also now used for initial placement of workloads when using vCloud Director.
  • Improvements in Datastore Correlation for Non-VASA Enabled Arrays – For storage devices that do not support VASA, it is difficult to correlate datastores against disk spindles on an array. Datastore correlation has been improved such that vSphere can now detect whether spindles are shared by datastores on the array, regardless of VASA support.
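The two-phase reclamation behavior of space-reclaiming thin provisioned disks can be sketched in a few lines. This is a hypothetical model, not VMware code: the `wipe` step stands in for the vSCSI-layer reorganization that compacts used blocks toward the front of the disk, and the `shrink` step stands in for the final truncation that hands the freed tail back to the underlying hardware.

```python
# Illustrative model of two-phase thin-disk space reclamation.
# Function names and the block-list representation are assumptions
# for this sketch, not a VMware API.

FREE = None  # marker for a deleted/unused block

def wipe(blocks):
    """Phase 1: mark and compact -- move used blocks to the front
    so all free blocks sit contiguously at the end of the disk."""
    used = [b for b in blocks if b is not FREE]
    return used + [FREE] * (len(blocks) - len(used))

def shrink(blocks):
    """Phase 2: truncate the trailing free blocks, returning the
    space to the underlying storage."""
    end = len(blocks)
    while end > 0 and blocks[end - 1] is FREE:
        end -= 1
    return blocks[:end]

# A thin disk with deleted (FREE) blocks scattered through it:
disk = ["a", FREE, "b", FREE, FREE, "c"]
disk = shrink(wipe(disk))
print(disk)  # → ['a', 'b', 'c']
```

Note the ordering matters: shrinking before wiping would reclaim nothing here, because no free blocks sit at the end of the disk until the compaction pass has run.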
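The percentage-based SIOC setting above can be illustrated with a small sketch. Rather than the administrator guessing a fixed millisecond threshold, the idea is to derive the congestion latency from a measured throughput curve: fire once latency reaches the point where throughput has hit a given percentage of its peak. The function name and sample data below are hypothetical; this only demonstrates the selection logic, not VMware's implementation.

```python
# Illustrative only: derive a SIOC-style congestion threshold from a
# percentage of peak throughput instead of a fixed latency value.

def congestion_latency(samples, pct=90):
    """samples: (latency_ms, throughput_iops) pairs, sorted by latency.
    Returns the lowest latency at which throughput first reaches
    pct% of the curve's peak."""
    peak = max(t for _, t in samples)
    target = peak * pct / 100.0
    for latency, throughput in samples:
        if throughput >= target:
            return latency
    return samples[-1][0]

# A made-up latency/throughput curve for one datastore:
curve = [(5, 2000), (10, 6000), (20, 9500), (30, 10000), (40, 9900)]
print(congestion_latency(curve))  # → 20
```

With the default 90%, the threshold lands at 20ms because that is where throughput first reaches 9,000 IOPS (90% of the 10,000 IOPS peak).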


By exposing virtual hardware virtualization (Intel-VT/AMD RVI) as well as more CPU counters and components, VMware has exposed more capability than ever before. Combine this with virtual graphics processing units and we now have the ability to implement virtualized high performance and real-time computing environments. Add the storage improvements, and large-scale big data applications as well as high performance computing environments can be virtualized; both require low-latency networking and storage.

Virtualization has long been the bane of high performance applications, whether 3D graphics, high performance computing, big data, or real-time workloads. vSphere 5.1 provides a possible solution to these use cases while improving integration with the VMware vCloud Suite.