VMware buying Virsto is a big move and, after considerable discussion, a logical step for VMware in many technical areas as well. We previously mentioned that Virsto would add to VMware’s existing investment in the Software Defined Data Center (SDDC), but there is more to this than just SDDC, which I believe is the end goal. Getting there absolutely requires a storage abstraction layer. So what does VMware gain with Virsto, other than SDDC?
By Greg Schulz, Server and StorageIO @storageio
Unless you are one of the few who have gone all solid-state devices (SSDs) for your virtual environment, hard disk drives (HDDs) still have a role. That role might be for primary storage of your VMs and/or their data, or as a destination target for backups, snapshots or archiving, or as a work and scratch area. Or perhaps you have some HDDs as part of a virtual storage appliance (VSA), storage virtualization, virtual storage or storage hypervisor configuration. Even if you have gone all SSD for your primary storage, you might be using disk as a target for backups, complementing or replacing tape and clouds. On the other hand, maybe you have a mix of HDDs and SSDs for production; what are you doing with your test, development or lab systems, both at work and at home?
Despite the myth of being dead or having been replaced by SSDs (granted, their role is changing), HDDs as a technology continue to evolve in many areas.
General storage characteristics include:
- Internal or external to a server or, dedicated or shared with others
- Performance in bandwidth, activity, or IOPS and response time or latency
- Availability and reliability, including data protection and redundant components
- Capacity or space for saving data on a storage medium
- Energy and economic attributes for a given configuration
- Functionality and additional capabilities beyond read/write or storing data
Capacity is increasing in terms of areal density (the amount of data stored in a given amount of space on HDD platters) as well as the number of platters stacked into a given form factor. Today there are two primary form factors for HDDs as well as SSDs (excluding PCIe cards): 3.5” and 2.5” small form factor (SFF) widths, available in various heights.
On the left is a 2.5” 1.5TB Seagate Freeplay HDD with a USB or eSATA connection that I use for removable media. On the right, a couple of 3.5” 7,200 RPM HDDs of various capacities; in the center back, an older early-generation Seagate Barracuda. In the middle, a stack of HDD, HHDD and SSD 2.5” devices, including thin 7mm and 9mm and thick 15mm heights. Note that thick and thin refer to the height of the device, as opposed to thin or thick provisioned.
In addition to form factor, capacity increases and cost reductions, other improvements include reliability in terms of mean time between failure (MTBF) and annual failure rate (AFR). There have also been some performance enhancements across the various types of HDDs, along with energy efficiency and effectiveness improvements. Functionality has also been enhanced with features such as self-encrypting disks (SEDs) or full disk encryption (FDE).
Data is accessed on the disk storage device by a physical and a logical address, sometimes known as a physical block number (PBN) and a logical block number (LBN). The file system or an application performing direct (raw) I/O keeps track of what storage is mapped to which logical blocks on what storage volumes. Within the storage controller and disk drive, a mapping table is maintained to associate logical blocks with physical block locations on the disk or other medium such as tape.
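To illustrate the idea, the logical-to-physical association can be modeled as a small lookup table. This is a minimal sketch of the concept only; real drive firmware and controllers use far more sophisticated structures, and all names here are illustrative:

```python
# Simplified sketch of a logical block number (LBN) to physical block
# number (PBN) mapping table, as maintained inside a drive or
# storage controller. Illustrative only, not real firmware behavior.

class BlockMap:
    def __init__(self):
        self.lbn_to_pbn = {}  # mapping table: logical -> physical

    def write(self, lbn, pbn):
        # Associate a logical block with a physical location on media.
        self.lbn_to_pbn[lbn] = pbn

    def lookup(self, lbn):
        # Resolve the physical location for a logical block.
        return self.lbn_to_pbn.get(lbn)

bmap = BlockMap()
bmap.write(lbn=100, pbn=2048)  # the file system sees only LBN 100
print(bmap.lookup(100))        # the drive resolves it to PBN 2048
```

The point of the indirection is that the file system never needs to know where on the platters (or flash, or tape) a block physically lives; the device can move data around behind the mapping.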
When data is written to disk, regardless of whether it is an object, file, database, or video, the lowest common denominator is a block of storage. Blocks of storage have traditionally been organized into 512 bytes, which aligned with memory page sizes. While 512-byte blocks and memory page sizes are still common, given larger-capacity disk drives as well as larger storage systems, 4KB (e.g., 8 × 512 bytes, or 4,096 bytes) block sizes, called Advanced Format (AF), are appearing. The transition to AF 4KB is occurring over time, with some HDDs and SSDs supporting it now while also emulating 512-byte sectors. As part of the migration to AF, some drives are able to do alignment work in the background, off-loading server or external software requirements. Also related to HDD sector size are optional format sizes, such as the 528-byte sectors used by some operating systems or storage systems.
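Alignment matters on 512-byte-emulating AF drives: a partition that does not start on a 4KB boundary forces the drive into read-modify-write cycles. A quick back-of-the-envelope check of whether an offset is aligned:

```python
# Check whether a partition's starting byte offset is aligned to a
# drive's 4KB (Advanced Format) physical sector size. Misaligned
# partitions on 512-byte-emulating drives incur read-modify-write
# penalties. Offsets chosen here are just common examples.

PHYSICAL_SECTOR = 4096  # bytes per AF physical sector

def is_aligned(start_offset_bytes):
    return start_offset_bytes % PHYSICAL_SECTOR == 0

print(is_aligned(63 * 512))    # legacy CHS-style offset -> False
print(is_aligned(2048 * 512))  # modern 1MB starting offset -> True
```

This is why newer operating systems default partitions to a 1MB starting offset, which is evenly divisible by 4KB.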
Larger block sizes enable more data to be managed or tracked in the same footprint by requiring fewer pointers or directory entries. For example, using a 4KB block size, operating systems or storage controllers can keep track of eight times the amount of data in the same footprint. Another benefit is that, with data access patterns changing along with larger I/O operations, a single 4KB operation is more efficient than the equivalent 8 × 512-byte operations for the same amount of data to be moved.
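The arithmetic behind that eight-fold claim is straightforward; counting the pointers needed to track a fixed capacity at each block size makes it concrete:

```python
# How many block pointers are needed to track 1TB of data at each
# block size? Fewer pointers means less metadata in the same footprint.

capacity = 1 * 1024**4  # 1 TB in bytes

for block_size in (512, 4096):
    pointers = capacity // block_size
    print(f"{block_size}-byte blocks: {pointers:,} pointers")

# 4,096 bytes = 8 x 512 bytes, so the pointer count drops by 8x.
assert capacity // 512 == 8 * (capacity // 4096)
```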
At another detailed layer, the disk drive or flash solid-state device also handles bad block vectoring or replacement transparently to the storage controller or operating system. Note that this form or level of bad block repair is independent of upper-level data protection and availability features, including RAID, backup/restore, replication, snapshots, or continuous data protection (CDP), among others.
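A minimal sketch of what that transparency means, assuming a simple pool of reserved spare sectors (real drives handle this entirely in firmware, and these numbers are made up for illustration):

```python
# Illustration of transparent bad-block vectoring: when a physical
# block goes bad, the drive remaps it to a reserved spare, invisibly
# to the operating system and storage controller. Sketch only.

class Drive:
    def __init__(self, spares):
        self.remap = {}            # bad PBN -> spare PBN
        self.spares = list(spares)  # reserved spare sectors

    def vector_bad_block(self, pbn):
        spare = self.spares.pop(0)  # consume a spare sector
        self.remap[pbn] = spare
        return spare

    def resolve(self, pbn):
        # Callers keep using the same address; the drive redirects it.
        return self.remap.get(pbn, pbn)

d = Drive(spares=[900001, 900002])
d.vector_bad_block(1234)   # sector 1234 failed and was remapped
print(d.resolve(1234))     # redirected to a spare sector
print(d.resolve(5678))     # healthy blocks resolve to themselves
```

Because the remapping happens below the logical address, RAID, snapshots and other upper-level protection schemes never see the defect at all.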
There are also features to optimize HDDs for working with RAID systems, or for doing file copies, such as for use with cloud and object storage systems. Some HDDs are optimized for the start/stop operations found in laptops, along with vibration damping, while others support continuous operation modes. Other features include energy management with spin-down to conserve power, along with intelligent power management (IPM) to vary the performance and amount of energy used.
In addition to drive capacities that range up to 4TB on larger 3.5” form factor HDDs, there are also different sizes of DRAM buffers (measured in MBytes) available on HDDs. Hybrid HDDs (HHDDs), in addition to having DRAM buffers, also have SLC or MLC NAND flash, measured in GBytes, for even larger buffers used as either read or read/write caches. For example, the HHDDs that I have in some of my laptops as well as VMware ESXi servers have 4GB of SLC on a 500GB 7,200 RPM device (Seagate Momentus XT I) or 8GB of SLC on a 750GB device (Seagate Momentus XT II), and are optimized for reads. In the case of an HHDD in my ESXi server, I used a trick I learned from Duncan Epping to make a Momentus XT appear to VMware as an SSD. Other performance optimization options include native command queuing, and target mode addressing, which in turn gets mapped into, for example, VMware device mappings (e.g. vmhba0:C0:T1:L0).
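As an aside, those VMware runtime device names encode the adapter, channel, target and LUN. A small parser (a hypothetical helper, assuming the usual vmhbaN:Cn:Tn:Ln form) makes the structure explicit:

```python
import re

# Parse a VMware runtime device name such as vmhba0:C0:T1:L0 into
# its components: host bus adapter, channel, target, and LUN.
# Illustrative helper; not part of any VMware API.

def parse_vmhba(path):
    m = re.match(r"vmhba(\d+):C(\d+):T(\d+):L(\d+)$", path)
    if not m:
        raise ValueError(f"not a vmhba runtime name: {path}")
    adapter, channel, target, lun = map(int, m.groups())
    return {"adapter": adapter, "channel": channel,
            "target": target, "lun": lun}

print(parse_vmhba("vmhba0:C0:T1:L0"))
# {'adapter': 0, 'channel': 0, 'target': 1, 'lun': 0}
```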
Other options for HDDs include rotational speed, with 5,400 (5.4K) revolutions per minute (RPM) at the low end and 15,000 (15K) RPM at the high end, with 7.2K and 10K speeds also available. Interfaces for HDDs include SAS, SATA and Fibre Channel (FC), operating at various speeds. If you look or shop around, you might find some parallel ATA or PATA devices still available, should you need them for use or nostalgia. FC HDDs operate at 4Gb, where SAS and SATA devices can operate at up to 6Gb with 3Gb and 1.5Gb backward compatibility. Note that if supported by applicable adapters, controllers and enclosures, SAS can also operate in wide modes. Check out SAS SANs for Dummies to learn more about SAS, which also supports attachment of SATA devices.
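To put those link rates in rough perspective, nominal gigabit rates can be converted to approximate usable throughput, assuming the 8b/10b encoding those interfaces use (10 encoded bits per data byte); real-world numbers will be a bit lower due to protocol overhead:

```python
# Convert nominal interface line rates (Gb/s) to approximate usable
# throughput in MB/s, assuming 8b/10b encoding (10 bits per byte).
# Before protocol/framing overhead; real throughput is somewhat lower.

rates_gbps = {"SATA/SAS 1.5Gb": 1.5,
              "SATA/SAS 3Gb": 3.0,
              "SATA/SAS 6Gb": 6.0}

for name, gbps in rates_gbps.items():
    mb_per_s = gbps * 1000 / 10  # 10 encoded bits carry 1 data byte
    print(f"{name}: ~{mb_per_s:.0f} MB/s")
```

So a 6Gb SAS or SATA link tops out around 600 MB/s of payload, which also hints at why a single HDD rarely saturates its interface while SSDs can.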
Ok, did you catch that I did not mention USB or iSCSI HDDs? Nope, that was not a typo: while you can get packaged HDDs or SSDs with USB, iSCSI, FireWire or Thunderbolt attachments, they utilize either a SAS or SATA HDD. Inside the packaging will be a bridge or gateway card or adapter that converts from, for example, SATA to USB. In addition to packaging, converters are also available as docking stations, enclosures or cables. For example, I have some Seagate GoFlex USB-to-SATA and eSATA-to-SATA cables for attaching different devices as needed to various systems.
Besides drive size (form factor) and space capacity, interface and speed, along with features, there are some other differences: enterprise class (both high performance and high capacity) versus desktop and laptop, and internal versus external use. These drives can be available via OEMs (server and storage vendors) or systems integrators with their own special firmware, or as generic devices. What this means is that not all SATA or SAS HDDs are the same, from enterprise to desktop, across both 2.5” and 3.5” form factors. Even the HDDs that you can buy, for example, from Amazon will vary based on the above and other factors.
So which HDD is best for your needs?
That will depend on what you need or want to do among other criteria that we will look at in a follow-up post.
Ok, nuff said for now.
Windows 2012 Hyper-V is the hypervisor for the cloud. VMware’s vSphere is a dead man walking?
In parts One and Two I shared a chunk of what I learned from Aidan Finn‘s enlightening and entertaining session “Windows Server 2012 Hyper-V & VSphere 5.1 – Death Match” delivered at the E2E Virtualisation Conference in Hamburg. We’ve considered pricing, scalability and performance, as well as storage, and then gone on to consider resource management, security and multi-tenancy, and what a flexible infrastructure can give.
Some have found this a useful comparison. Others have highlighted that this isn’t a feature-by-feature comparison and that if it was, the tables would be very different: they would, and they’d be longer for a start. But more importantly, would they give the high-level view that many are focused on? Is the goal a technical Top Trumps victory, or alignment to business goals? If aligned, how aligned? A friend often used to say that the difference between cabinet making, carpentry and joinery is effort and measurement: each has its place; the trick is knowing what level to apply.
In Part III, let’s question further Aidan’s premise that Hyper-V kills vSphere. Here we’ll consider High Availability and Resiliency.
Windows 2012 Hyper-V is the hypervisor for the cloud, VMware’s vSphere is a dead man walking?
In Part I I shared a chunk of what I learned from Aidan Finn‘s enlightening and entertaining session delivered at the E2E Virtualisation Conference in Hamburg, tastefully titled “Windows Server 2012 Hyper-V & VSphere 5.1 – Death Match”. In Part I we looked at pricing, scalability and performance, as well as storage, in questioning how bold this statement was.
Pure license-cost wise, it is more straightforward to run Microsoft Hyper-V than to add another licensed hypervisor: note that Hyper-V does have a free offering (although this version doesn’t cover the virtual Windows Server instance licenses). We showed that, scalability-wise, Hyper-V can better the common competition. Storage-wise Hyper-V, as should be expected from the newest offering, supports the newest technology: 4K sector sizes, and the largest virtual disk support. Still, if you needed greater than 2TB of storage, you could always join multiple 2TB instances together, or bypass limits by mapping a LUN direct to the VM.
Still, besides pricing simplicity, performance improvements, and updated storage, what has Microsoft done for the latest version of Hyper-V? In Part II, let’s question further Aidan’s premise that Hyper-V kills vSphere.
Windows 2012 Hyper-V is the hypervisor for the cloud, and VMware’s vSphere is a dead man walking. So declared Aidan Finn at a recent virtualization conference in Hamburg, during an enlightening and entertaining session which he tastefully entitled “Windows Server 2012 Hyper-V & VSphere 5.1 – Death Match”.
A bold statement? Hyper-V has often been cited as an “also-ran”: good enough for the SMB space and smaller Private Clouds, but lacking the muscle for a cloud-focused enterprise. Nice for a visit, wouldn’t want to live there.
A biased statement? Aidan Finn is a highly regarded Hyper-V Microsoft Most Valuable Professional and regularly writes on his website about changes and features of the product. In Predictably Irrational, Dan Ariely dedicates a chapter to the possibility of a fan’s judgement being clouded. And yet, the list of features now available in Windows Hyper-V is compelling. Indeed, back in March we discussed whether Microsoft would drive a wedge between VMware and EMC with Windows Server 2012 and Hyper-V.
In terms of embedded services and experience, VMware’s vSphere has a significant place in many organisations’ data centres. Licensing alone is unlikely to change hearts and minds to convert, but what about features?
Can Microsoft claim that Hyper-V is the hypervisor for the cloud? What new features are available in the 2012 release, and how does it now compare to vSphere 5.1? More importantly, will these changes drive wider adoption?
In this first installment, we take a look at pricing, scalability, and performance, as well as storage.
Moving to the cloud! Let me be a little more precise and say moving to the public cloud. This concept has really been embraced and thrives in the consumer market, but will it really take off in the corporate world, and really, should it? One of the main concepts of virtualization, in the beginning, was the ability to consolidate physical systems into a virtual environment to shrink the overall footprint, to take advantage of and use all available compute resources in a physical server, and to have centralized control of the compute, storage, and networking resources.