All posts by Greg Schulz

Greg Schulz is Founder and Sr. Analyst of independent IT advisory and consultancy firm Server and StorageIO (StorageIO). He has worked in IT at an electrical utility and at financial services and transportation firms, in roles ranging from business applications development to systems management and architecture planning. Greg has also worked for various vendors and an analyst firm before forming StorageIO. Mr. Schulz is the author of several books (Cloud and Virtual Data Storage Networking – CRC Press, The Green and Virtual Data Center – CRC Press, Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures – Elsevier), is active in social media with an engaging approach, and is a top-ranked blogger. He has a degree in computer science and a master’s degree in software engineering. Learn more at www.storageio.com

EMC ViPR V1.1 and SRM V3.0

Server and StorageIO, @storageio

The EMC Advanced Software Division (ASD) recently announced (more here and here) enhancements to its ViPR (v1.1) software-defined storage management solution, as well as a new version of its Storage Resource Management (SRM) product (v3.0).

First, keep in mind that SRM, in the traditional server and storage world, means systems or storage resource management, aka reporting and monitoring; more recently, some solutions include analytics capabilities. On the other hand, in the VMware context, SRM refers to its Site Recovery Manager, which is for managing data protection activities. Thus, VMware’s SRM falls under the category of data protection management (DPM) (more here), not to be confused with the Microsoft Data Protection Manager (DPM) data protection tool. The preceding is a good example of how different terms or acronyms, even in the same or closely adjacent spaces, can have different meanings; thus, context matters.

SRM, aka Storage Resource Management

Back to storage resource management, which will be the context of the initials “SRM” for the remainder of this post, unless otherwise indicated. Most previous generations of SRM were synonymous with reporting, in many cases with a focus on capacity, while some also looked at performance activity, among other attributes. Some SRM tools have evolved, adding storage resource analysis (SRA) in order to do more than basic static reporting.

The newest version of EMC SRM (v3.0) is more than a unified GUI or user interface (UI) front end across traditional products. With SRM v3.0, EMC has combined previous solutions, including ProSphere, Storage Configuration Advisor, and Watch4net, as part of a new, single back-end and front-end architecture.

Note that SRM v3.0 also supports VMware and ViPR integration along with providing storage resource analysis (i.e., more than just static reporting of storage capacity or performance). In addition to ViPR, SRM 3.0 also supports EMC’s Data Protection Advisor (DPA), which is based on technology acquired several years ago from WySDM. Other SRM 3.0 features include support for third-party storage platforms and systems (NetApp, Hitachi Data Systems, IBM, and HP) to visualize, analyze, and optimize by providing situational awareness.

What About ViPR V1.1?

If you are not familiar with ViPR, read this three-part series of posts that I did in May of 2013, when ViPR was initially announced (Part I, Part II, and Part III). As additional background, read more about ViPR v1.0 in EMC September 2013 General Availability Announcement for ViPR and EMC New VNX MCx, plus ViPR GA, followed by ViPR conversation and discussion via the Spiceworks forum.

Additions to ViPR with v1.1 include a free VMware OVF download for non-production use, along with two data services. The new data services enable access to underlying storage systems via Hadoop Distributed File System (HDFS) and Object Storage, including S3 REST protocols. ViPR also supports EMC Symmetrix Remote Data Facility (SRDF) for replication, as well as EMC VPLEX and EMC RecoverPoint, in addition to third-party storage (e.g., NetApp Data ONTAP).
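As a quick illustration of what the object data service enables, below is a minimal sketch of reading and writing objects against an S3-compatible endpoint from Python using boto3. The endpoint URL, port, credentials, and bucket name are placeholders for illustration only, not ViPR defaults; check the ViPR data services documentation for the actual values in your environment.

# Minimal sketch: talking to an S3-compatible object endpoint (such as the one
# exposed by the ViPR object data service) with boto3. The endpoint, keys, and
# bucket name below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://vipr-objects.example.com:9021",  # placeholder endpoint
    aws_access_key_id="OBJECT_USER_KEY",                   # placeholder credentials
    aws_secret_access_key="OBJECT_USER_SECRET",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello via S3 REST")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())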

ViPR (image courtesy of http://pulseblog.emc.com)

ViPR remains one of the most misunderstood storage technologies. This confusion is due in part to the popularity of software-defined storage as an industry buzz topic and to the term’s many different meanings. By playing into the “software-defined” marketing games, ViPR (along with other vendors’ solutions) tends to be lumped in with many other things, including storage virtualization, virtual storage, storage hypervisors, and storage appliances. These tend to be apples to oranges comparisons.

While ViPR does have capabilities similar to those of some of the storage virtualization and software-defined solutions, it also expands into other areas that, for some, might be unknowns. ViPR is more about management. Thus, for those who need or want to use the term “software-defined,” ViPR focuses on software-defined storage management. Perhaps we will start to see more vendors talking about software-defined storage management instead of sticking with the crowd talking about software-defined storage.

ViPR and Virtual Storage Confusion

There is plenty of confusion around EMC ViPR, in part because people are using terminology for it that relates to what they already know. In some cases, the confusion is genuine and similar to learning what anything new is or is not. Likewise, there is the usual chaos and confusion of marketing hype and FUD. That marketing hype and FUD often results in ViPR being placed in the software-defined storage category.

What is missing or misunderstood in the ViPR conversations is that it is lightweight (unless you turn on optional data services). This differs from traditional storage virtualization and from virtual storage hardware and software solutions, some of which are referred to as storage hypervisors or software-defined. ViPR is a fast-path control path approach, which means that for functions for which data services are not needed, it gets out of the way (i.e., out of the data path). However, some optional data services, such as object storage and HDFS access, are in the data path. Think of it like working with a good manager who gets out of the way most of the time, not adding any overhead; however, when needed, the manager can get involved to help out, adding value.
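To make the manager analogy a bit more concrete, here is a small, purely hypothetical sketch (not EMC code) of the control-path vs. data-path split: provisioning requests go through the management layer, which then steps aside so that normal reads and writes flow directly to the underlying array.

# Hypothetical sketch of a control-path/data-path split; illustrative only.
class Array:
    def __init__(self, name):
        self.name = name
        self.volumes = {}

    def create_volume(self, vol_id):
        self.volumes[vol_id] = bytearray()        # stand-in for real capacity

    def write(self, vol_id, data):                # data path: host talks to the array
        self.volumes[vol_id].extend(data)

class StorageManager:                             # control path only
    def __init__(self, arrays):
        self.arrays = arrays

    def provision(self):
        array = min(self.arrays, key=lambda a: len(a.volumes))  # trivial placement policy
        vol_id = "%s-vol%d" % (array.name, len(array.volumes))
        array.create_volume(vol_id)
        return array, vol_id                      # manager now gets out of the way

arrays = [Array("array1"), Array("array2")]
array, vol = StorageManager(arrays).provision()
array.write(vol, b"application data")             # no manager involved in this I/O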

If you are familiar with VMware’s vSphere, ESXi, and vCenter, and in particular with what they do and how they help each other, you can see that VMware has some similarities to ViPR. For example, with VMware, the hypervisor (i.e., ESXi) does much of the heavy lifting, while vCenter is there for management, leveraging the underlying hypervisor. Speaking of ViPR and vCenter, there is also a ViPR plugin for vCenter, which you can read more about here and here.

In addition, VMware can offload functions to storage systems that support VAAI, leveraging their capabilities. However, if systems that support VAAI are not present, functions can be handled via VMware. The key here is that ViPR, with its control plane, is focused on managing the underlying storage systems, leveraging their features or, when needed, adding value via data services.

Speaking of storage virtualization, virtual storage, storage hypervisors, and software-defined storage, check out these related posts:

Some Additional Thoughts, Wishlist Items, and Things to Watch For

  • Could ViPR add VAAI emulation capabilities on behalf of storage systems that do not normally provide those functions? On the other hand, would that heavy lifting add any value?
  • Will EMC allow others, including the competition, to add data services plugins to ViPR? At what cost or control would it do so, not to mention which of the various EMC partners or competitors would be likely to do so (or not)?
  • When will we see broader object access both northbound and southbound? Today, with the recently added data services, servers on the northbound side can access storage as objects. On the other hand, how about ViPR accessing other object services southbound, such as AWS S3, the AWS Glacier API (native vs. via S3 storage policies), Microsoft Azure, Rackspace, HP, and many others? In other words, today ViPR is a unidirectional gateway; could it become a cloud gateway across public, private, and hybrid environments, similar to solutions offered by others?
  • Will there be additional third-party storage system support beyond those currently supported (e.g. NetApp OnTap)? Will there be support for commodity or white box hardware? Here is a link to the current compatibility matrix document for ViPR.

In addition to reading about what it is or is not, go here and download EMC ViPR (free for non-production use) to install and see for yourself in your own lab or test environment.

I found the basic download, which is an open virtualization format (OVF), to be quick and easy to deploy into my VMware environment. From there, the next step is to configure the product, including optional data services, which you can read more about via the following links.

You can learn more about EMC ViPR in the following documentation:

Ok, ’nuff said (for now).

When and Where to Use NAND Flash SSD for Virtual Servers

By Greg Schulz, Server and StorageIO, @storageio

Keeping in mind that the best server and storage I/O is the one you do not have to do, the second best is the one with the least impact combined with the best benefit to an application. This is where SSD, including DRAM- and NAND-flash-based solutions, comes into the conversation for storage performance optimization.

The question is not if, but rather when, where, what, and how much SSD (NAND flash or DRAM) you will have in your environment, either to replace or to complement HDDs. Continue reading When and Where to Use NAND Flash SSD for Virtual Servers

Hard Disk Drives (HDD) for virtual environments (Part IV) from power to warranties

By Greg Schulz, Server and StorageIO, @storageio

Let us pick up where we left off in part III in our look beyond the covers to help answer the question of which is the best HDD to use.

Power and energy

Power consumption will vary based on the size and type of HDD, along with how it is used. For example, more energy is used during power-up than when the drive is idle (not reading or writing) yet still spinning, or when it is actively reading and writing. With intelligent power management (IPM), inactive drives can go into lower power usage modes with variable performance. IPM includes the ability to vary the amount of power used to match the level of performance needed, with different steps or levels. This differs from some first-generation MAID solutions based on desktop-class drives that were either on or off, with subsequent performance issues. While an HDD requires power to spin the platters, once those are in motion less power is required; however, energy is still needed for the read/write heads and associated electronics.

This leads to a common myth or misperception that HDDs consume a lot of energy simply because they are spinning. Energy is used to keep the platters moving, but power is also required for the electronics that manage the drive’s interface, read/write heads, and other functions. With IPM, leaving the drive spinning while reducing the rotational speed can help save power, as can disabling or putting the associated processors and control electronics into low-power mode.

As a comparison, SSDs are often touted as not drawing as much energy as an HDD, which is true. However, SSDs do in fact consume electricity and get warm, as they also have electronics and control processors similar to those in HDDs. If you do not believe this, put an SSD to work and feel it over time as it heats up. Granted, that is an apples to oranges comparison; my point is that there is more to energy savings with HDDs than simply focusing on rotational speed. Speaking of energy savings, typical enterprise-class drives now draw 4 to 8 watts, a fraction of what they drew only a few years ago, when drives were in the “teens” in terms of watts per drive. Notebook, laptop, and workstation drives can range from a single watt to a few watts. Note that these numbers may be lower than what some will cite when comparing SSDs and HDDs or trying to make a point about HDD power consumption. For performance or active drives, compare on a cost per activity per watt basis, such as cost per IOP per watt; for inactive data, cost per capacity per watt can be more relevant.
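For those who want to put that last point into practice, here is a small illustrative calculation; the prices, IOPS, and watt figures are made-up examples, not measured or quoted values.

# Illustrative comparison of drives on cost per IOP per watt (active data) and
# cost per GB per watt (inactive data). All figures are made-up examples.
drives = [
    # (name,             cost_$, capacity_GB, IOPS, watts)
    ("15K performance",    300,       600,     200,    8),
    ("7.2K capacity",      250,      4000,      80,    6),
]

for name, cost, capacity_gb, iops, watts in drives:
    print("%s: %.3f $/IOP/watt, %.4f $/GB/watt" % (
        name, cost / iops / watts, cost / capacity_gb / watts))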

Security

Given the large amount of data that can be stored on an HDD, along with compliance and other concerns, drive-level security is becoming more common. There are different types of drive-level encryption, including self-encrypting devices (SEDs), some of which support FIPS-level requirements. Depending on the implementation, drive-level encryption can off-load servers, workstations, or storage systems from performing encrypt and decrypt functions.

Space capacity

The space capacity of a drive is determined by its areal density (how many bits fit in a given amount of space) per platter, the size of the platter (3.5” platters are larger than 2.5”), and the number of platters. For example, at the same areal density, more bits and bytes fit on a 3.5” device than on a 2.5” device, and by adding more platters (along with read/write heads) the resulting taller drive has even more space capacity. Drive space capacities include 4TB and smaller for 3.5” devices and terabyte-plus sizes for various 2.5” form factors. Watch out for “packaging” games where, for example, a drive offered as 4TB is actually two separate 2TB drives in a common enclosure (no RAID or NAS or anything else).

The superparamagnetic barrier keeps being pushed out, first with perpendicular recording and now with shingled magnetic recording (SMR) and heat-assisted magnetic recording (HAMR), both in the works. The superparamagnetic barrier is the point where data bits can no longer safely (with data integrity) be stored and later read back without introducing instability. Watch for more on SMR and HAMR in a later post when we look at new and emerging trends.

Speaking of space capacity, ever wonder where those missing bits and bytes disappeared to on an HDD or SSD? First, there is how capacity is measured: decimal (base 10) vs. binary (base 2); for example, one gigabyte (GB) is one billion bytes in decimal terms vs. 1,073,741,824 bytes in binary terms. These space capacities are before RAID, hypervisor, operating system, or file system formatting overhead is added. There is also reserved space for bad block re-vectoring, which can be thought of as hot spare blocks for when the drive (HDD or SSD) detects something going bad. In addition to the bad block areas, there is also some reserved space that you will not be able to access, kept for drive management, read/write head alignment, and other things.
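Here is a quick worked example of the unit-conversion part of that “missing” capacity, before any RAID, file system, or reserved-space overhead is subtracted:

# A drive sold as 1TB (decimal) shows up as roughly 931 GiB (binary) before
# any formatting or reserved-space overhead is taken out.
decimal_bytes = 1 * 10**12                 # 1TB = 1,000,000,000,000 bytes
binary_gib = decimal_bytes / 2**30         # 1 GiB = 1,073,741,824 bytes
print("1TB decimal = %s bytes = %.1f GiB" % (format(decimal_bytes, ","), binary_gib))
# -> 1TB decimal = 1,000,000,000,000 bytes = 931.3 GiB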

Speaking of large-capacity drives, as mentioned earlier, rebuild operations with RAID configurations can take longer given more data to move. The good news is that some RAID systems or solutions can rebuild a 1TB or 2TB drive as fast as or faster than a 9GB drive from a decade ago. The catch is that there are more drives and they are getting larger, with 3TB and 4TB shipping and larger ones in the works. Things you can do to minimize the impact of long rebuild times include selecting the right type of drive, one with better endurance, reliability, and availability. This means that selecting a lower-priced drive up front that is not as reliable could cost you down the road. Configuration, including RAID level, number of parity drives, and software, adapter, controller, or storage system features that can accelerate rebuilds, can also make a difference.
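As a back-of-envelope way to see why larger drives stretch rebuild windows, consider the simple estimate below; the sustained rebuild rate is an assumed figure, and real rates depend on RAID level, controller, rebuild priority, and competing application I/O.

# Rough rebuild-time estimate: capacity divided by an assumed sustained rate.
def rebuild_hours(capacity_tb, rebuild_mb_per_sec):
    capacity_mb = capacity_tb * 1000000
    return capacity_mb / rebuild_mb_per_sec / 3600.0

for size_tb in (1, 2, 4):
    print("%dTB at 100 MB/s: ~%.1f hours" % (size_tb, rebuild_hours(size_tb, 100)))
# A 4TB drive at a sustained 100 MB/s is roughly 11 hours of rebuild exposure.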

Another impact of large-capacity drives, or large numbers of HDDs in general, is how to securely erase them when decommissioning. That is assuming you are securely erasing them or taking other disposition safeguards vs. throwing them in the garbage or giving them away. Self-encrypting devices (SEDs), normally associated with security, can be part of a solution for some environments. Since SEDs can effectively erase the data stored on them by removing the enablement key, secure erase in some environments can take minutes or less instead of hours or days.

Warranties

There are various warranties on HDDs, including those from the manufacturer, which may be the same as what an OEM or system integrator passes on to its customers. Some HDDs have a manufacturer’s limited warranty of five years, while others have shorter terms. Thus, while a manufacturer may offer a five-year warranty, it can be up to the OEM or integrator to pass that warranty on, or in turn provide a shorter duration with different terms or pricing. Something to think about in terms of HDD warranties is that replacing a drive can mean sending your old device back in exchange for a new one. If you have sensitive or secure data on those devices, how will they be disposed of? Options include not leveraging return-to-vendor or manufacturer warranties and opting for self-disposition, or using self-encrypting devices (SEDs).

This wraps up this post. Coming up next, in part V, we will look at what to use when and where, along with other options and some trends.

Ok, nuff said (for now).

Cheers gs

Hard Disk Drives (HDD) for virtual environments (Part III) from form factor to power

By Greg Schulz, Server and StorageIO, @storageio

In part II of this series we covered some of the differences between various Hard Disk Drives (HDDs), including looking beyond the covers at availability, cache, and cost. Let us pick up where we left off in our look beyond the covers to help answer the question of which is the best HDD to use.

Form factor (physical attributes)

Physical dimensions include 2.5” small form factor (SFF) and 3.5” large form factor (LFF) HDDs. 2.5” HDDs are available in 7mm, 9mm, and taller 15mm height form factors. Note that taller drives tend to have more platters for capacity. In the following image, note that the bottom HDD is taller than the others.

Hard Disk Drive Sizes
Top thin 7mm, middle 9mm, and bottom 15mm (thick)

The tall or “thick” drive above (not to be confused with thick or thin provisioning) is an SFF 5.4K RPM 1.5TB drive that I use as an on-site backup or data protection target and buffer. The speed is good enough for what I use it for, and it provides good capacity per cost in a given footprint.

Also, note that there is a low-profile 7mm device (e.g., the middle one) that can, for example, fit into my Lenovo X1 laptop as a backup replacement for the Samsung SSD that normally resides there. Also shown on the top is a standard 9mm height 7.2K Momentus XT HHDD with 4GB of SLC NAND flash and 500GB of regular storage capacity.

Functionality

Functionality includes rebuild assist, secure erase, self-encrypting devices (SEDs) with or without FIPS, RAID assist, and support for large file copies (e.g., for cloud, object storage, and dispersal or erasure code protection). Other features include intelligent power management (beyond first-generation MAID), native command queuing (NCQ), and Advanced Format (AF) 4KByte blocks with 512-byte emulation. Features also include those for high-density deployments such as virtualization and cloud, including vibration management, in addition to SMART (Self-Monitoring, Analysis, and Reporting Technology) reporting and analysis.

Drives can also, depending on vendor, make, and model, support various block or sector sizes, including standard 512 as well as 520, 524, and 528 bytes for different operating systems, hypervisors, or controllers. Another feature mentioned above is the amount of volatile (DRAM) or persistent (NAND flash) cache for read and read-ahead. Some drives are optimized for standalone or JBOD (Just a Bunch of Disks) use and others for use with RAID controllers. By the way, put several SSDs into an enclosure without a controller and you have Just a Bunch Of SSDs, or JBOS. What this means is that some drives are optimized to work with RAID arrays and how they chunk or shard data, while others are for non-RAID use.

Speaking of RAID and HDDs, have you thought about your configuration settings, particularly if you are working with big data or big bandwidth and large files or objects? If not, you should, including paying attention to the stripe, chunk, or shard size that determines how much data gets written to each device. With larger I/O sizes, revisit the default settings to determine if you need to make some adjustments; the sketch below shows the arithmetic. Just as some drives are optimized for working with RAID controllers or software, some drives are being optimized for cloud and object storage along with big data applications. The difference is that these drives are optimized for moving larger chunks or amounts of data, usually associated with distributed data dispersal, erasure coding, and enhanced RAID solutions. An example of a cloud storage optimized HDD is the Seagate Constellation CS (Cloud Storage).
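Here is that stripe-size arithmetic as a minimal sketch; the chunk size and drive counts are examples to adjust for your own controller or software defaults.

# How chunk (shard) size and the number of data drives determine a full-stripe write.
def full_stripe_kb(chunk_kb, total_drives, parity_drives):
    data_drives = total_drives - parity_drives
    return chunk_kb * data_drives

# Example: an 8-drive RAID 6 set (2 parity) with 256KB chunks
print(full_stripe_kb(chunk_kb=256, total_drives=8, parity_drives=2), "KB per full stripe")
# -> 1536 KB; large sequential writes aligned to this size avoid read-modify-write cycles.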

Moving on, some drives are designed to be spinning or in constant use, while others are designed for starting and stopping, such as in a notebook or desktop. Other features appearing in HDDs support high-density deployments, along with hot and humid environments, for cloud, managed service provider, or big data needs. The various features and functionality can be enabled through the firmware for a particular device, along with hard features built into the device.

Interface type and speed

The industry trend is moving toward 6Gb SAS for HDDs, similar to that for SSDs. However, there is also plenty of 6Gb SATA activity, along with continued 4Gb Fibre Channel (4GFC) that will eventually transition to SAS. There are also prior-generation 3Gb SAS and 3Gb SATA, and you might even have some older 1.5Gb SAS or SATA devices around, maybe even some Parallel ATA (PATA) or Ultra320 (Parallel SCSI). Note that SATA devices can plug into and work with SAS adapters and controllers, but not the other way around.

Note that if you see or hear about a storage system or controller with back-end 8Gb Fibre Channel, chances are the HDDs auto-negotiate down to 4GFC. In addition to the current 6Gb speed of SAS, improvements are in the works for 12Gb and beyond, along with many different topology or configuration options. If you are interested in learning more about SAS, check out SAS SANs for Dummies, sponsored by LSI, which I helped write.

Notice that I did not mention iSCSI, USB, Thunderbolt, or other interfaces and protocols? Some integrators and vendors offer drives with those among other interfaces; they are usually SAS or SATA drives with a bridge, router, or converter interface attached to them externally or as part of their packaging (see the following image).

Performance of the device

A common high-level gauge of drive performance is platter rotational speed. However, there are other metrics, including seek time, transfer rate, and latency. These in turn vary based on peak vs. sustained rates, read or write, random or sequential, and large or small IOPS or transfer requests. There are many different numbers floating around as to how many IOPS an HDD can do based on its rotational speed, among other factors. The challenge with these numbers, or with using them, is putting them into context: what size was the I/O, was it a read or write, large or small, random or sequential, relative to your needs? Another challenge is how those IOPS are measured, for example, whether they were measured below a file system to negate buffering, or via a file system.
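One commonly used rule of thumb, shown below as a small sketch, is that a single random I/O costs roughly an average seek plus half a platter rotation; the seek times used here are typical published figures rather than measurements, and real results vary with I/O size, queue depth, caching, and read/write mix.

# Rough theoretical small-random-IOPS estimate for a single HDD.
def estimated_iops(rpm, avg_seek_ms):
    half_rotation_ms = (60000.0 / rpm) / 2    # average rotational latency
    return 1000.0 / (avg_seek_ms + half_rotation_ms)

for rpm, seek_ms in ((7200, 8.5), (10000, 4.5), (15000, 3.5)):
    print("%d RPM: ~%.0f IOPS" % (rpm, estimated_iops(rpm, seek_ms)))
# Roughly 80, 130, and 180 IOPS respectively for small random requests.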

Rotational speeds include 5,400 (5.4K) revolutions per minute (RPM), 7.2K, 10K, and 15K RPM. Note that while rotational speed is a general indicator of relative speed, some of the newer 10K SFF (e.g., 2.5”) HDDs provide the same or better performance than earlier-generation 3.5” 15K devices. This is accomplished with a combination of the smaller form factor (spiral transfer rate) and improvements in read/write electronics and firmware. The benefit is that in the same or smaller footprint, more devices, performance, and capacity can be packaged, with the devices individually using less power. Granted, if you pack more devices into a given footprint, the aggregate power might increase; however, so too does the potential performance, availability, capacity, and economics, depending on implementation. You can see the differences in performance using various HDDs, including an HHDD, in this post here that looked at Windows impact for VDI planning.

This wraps up this post, up next part IV, we continue our look beyond the covers to determine the differences and what HDD is best for your virtual or other data storage needs.

Ok, nuff said (for now).

Cheers gs

 

Hard Disk Drives (HDD) for virtual environments (Part II) how drives differ

By Greg Schulz, Server and StorageIO, @storageio

In part I of this series we looked at basic Hard Disk Drive (HDD) characteristics and wrapped up with the question of what is the best type of HDD to use?

I often get asked why there need to be different types or tiers of data storage devices, including HDDs and solid-state devices (SSDs), along with different interfaces: why not just one or a few? Continue reading Hard Disk Drives (HDD) for virtual environments (Part II) how drives differ

Hard Disk Drives (HDD) for virtual environments (Part I)

By Greg Schulz, Server and StorageIO, @storageio

Unless you are one of the few who have gone all solid-state devices (SSDs) for your virtual environment, hard disk drives (HDDs) still have a role. That role might be for primary storage of your VMs and/or their data, as a destination target for backups, snapshots, or archiving, or as a work and scratch area. Or perhaps you have some HDDs as part of a virtual storage appliance (VSA), storage virtualization, virtual storage, or storage hypervisor configuration. Even if you have gone all SSD for your primary storage, you might be using disk as a target for backups, complementing or replacing tape and clouds. On the other hand, maybe you have a mix of HDD and SSD for production; what are you doing with your test, development, or lab systems, both at work and at home?

Despite the myth of being dead or having been replaced by SSDs (granted, their role is changing), HDDs as a technology continue to evolve in many areas.

General storage characteristics include:

  • Internal or external to a server or, dedicated or shared with others
  • Performance in bandwidth, activity, or IOPS and response time or latency
  • Availability and reliability, including data protection and redundant components
  • Capacity or space for saving data on a storage medium
  • Energy and economic attributes for a given configuration
  • Functionality and additional capabilities beyond read/write or storing data

Capacity is increasing in terms of areal density (the amount of data stored in a given amount of space on HDD platters) as well as the number of platters stacked into a given form factor. Today there are two primary form factors for HDDs as well as SSDs (excluding PCIe cards): 3.5” and 2.5” small form factor (SFF) widths, available in various heights.

Mix of Hard Disk Drives
Mix of HDD sizes, types and form factors

On the left is a 2.5” 1.5TB Seagate Freeplay HDD with a USB or eSATA connection that I use for removable media. On the right are a couple of 3.5” 7200 RPM HDDs of various capacities, and in the center back, an older early-generation Seagate Barracuda. In the middle is a stack of HDD, HHDD, and SSD 2.5” devices, including thin 7mm, 9mm, and thick 15mm heights. Note that thick and thin refer to the height of the device, as opposed to thin or thick provisioning.

Hard Disk Drive Sizes
Top thin 7mm, middle 9mm, and bottom 15mm (thick)

In addition to form factor, capacity increases and cost reductions, other improvements include reliability in terms of mean time between failure (MTBF) and annual failure rate (AFR). There have also been some performance enhancements across the various types of HDDs, along with energy efficiency and effectiveness improvements. Functionality has also been enhanced with features such as self-encrypting disks (SEDs) or full disk encryption (FDE).

Data is accessed on the disk storage device by a physical and a logical address, sometimes known as a physical block number (PBN) and a logical block number (LBN). The file system or an application performing direct (raw) I/O keeps track of what storage is mapped to which logical blocks on what storage volumes. Within the storage controller and disk drive, a mapping table is maintained to associate logical blocks with physical block locations on the disk or other medium such as tape.
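As a purely illustrative sketch (not actual drive firmware), the mapping idea can be pictured as a small table from logical block numbers to physical block numbers, which is also what lets a drive remap a failing block without the upper layers noticing:

# Simplified logical-to-physical block mapping; illustrative only.
lbn_to_pbn = {0: 1000, 1: 1001, 2: 1002}      # logical block -> physical block

def remap_bad_block(lbn, spare_pbn):
    # Point the LBN at a spare physical block; hosts keep using the same LBN.
    lbn_to_pbn[lbn] = spare_pbn

remap_bad_block(1, spare_pbn=9001)             # media under block 1 went bad
print(lbn_to_pbn)                              # {0: 1000, 1: 9001, 2: 1002}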

Hard disk drive storage organization

When data is written to disk, regardless of whether it is an object, file, Web, database, or video data, the lowest common denominator is a block of storage. Blocks of storage have traditionally been organized as 512 bytes, which aligned with memory page sizes. While 512-byte blocks and memory page sizes are still common, given larger-capacity disk drives as well as larger storage systems, 4KB (i.e., 8 × 512 bytes, or 4,096 bytes) block sizes, called Advanced Format (AF), are appearing. The transition to Advanced Format (AF) 4KB is occurring over time, with some HDDs and SSDs supporting it now while also emulating 512-byte sectors. As part of the migration to AF, some drives have the ability to do alignment work in the background, off-loading server or external software requirements. Also related to HDD sector size are optional format sizes, such as the 528-byte sectors used by some operating systems or storage systems.

Larger block sizes enable more data to be managed or tracked in the same footprint by requiring fewer pointers or directory entries. For example, using a 4KB block size, eight times the amount of data can be tracked by operating systems or storage controllers in the same footprint. Another benefit is that, with data access patterns changing along with larger I/O operations, a single 4KB operation is more efficient than the equivalent 8 × 512-byte operations for the same amount of data moved.
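A quick worked example of that bookkeeping benefit, using a 4TB (decimal) drive:

# Tracking a 4TB drive takes one-eighth as many block entries at 4KB as at 512 bytes.
capacity_bytes = 4 * 10**12
blocks_512 = capacity_bytes // 512             # 7,812,500,000 entries
blocks_4k = capacity_bytes // 4096             #   976,562,500 entries
print("512-byte blocks:", blocks_512)
print("4KB blocks:     ", blocks_4k)
print("Reduction:       %dx fewer entries" % (blocks_512 // blocks_4k))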

At another detailed layer, the disk drive or flash solid-state device also handles bad block vectoring or replacement transparently to the storage controller or operating system. Note that this form or level of bad block repair is independent of upper-level data protection and availability features, including RAID, backup/restore, replication, snapshots, or continuous data protection (CDP), among others.

There are also features to optimize HDDs for working with RAID systems, or for doing large file copies such as for use with cloud and object storage systems. Some HDDs are optimized for the start/stop operations found in laptops, along with vibration damping, while others support continuous operation modes. Other features include energy management with spin-down to conserve power, along with intelligent power management (IPM) to vary the performance and amount of energy used.

In addition to drive capacities that range up to 4TB on larger 3.5” form factor HDDs, there are also different sizes of DRAM buffers (measured in MBytes) available on HDDs. Hybrid HDDs (HHDDs), in addition to having DRAM buffers, also have SLC or MLC NAND flash measured in GBytes for even larger buffers, used for either read or read/write caching. For example, the HHDDs that I have in some of my laptops as well as VMware ESXi servers have 4GB of SLC for a 500GB 7,200 RPM device (Seagate Momentus XT I) or 8GB of SLC for a 750GB device (Seagate Momentus XT II), and are optimized for reads. In the case of an HHDD in my ESXi server, I used this trick I learned from Duncan Epping to make a Momentus XT appear to VMware as an SSD. Other performance optimization options include native command queuing and target mode addressing, which in turn gets mapped into, for example, VMware device mappings (e.g., vmhba0:C0:T1:L0).

Stack of Hard Disk Drives
A stack of 2.5” HDDs, HHDDs and SSDs.

Other options for HDDs include speed, with 5,400 (5.4K) revolutions per minute (RPM) at the low end and 15,000 (15K) RPM at the high end, with 7.2K and 10K speeds also available. Interfaces for HDDs include SAS, SATA, and Fibre Channel (FC) operating at various speeds. If you look or shop around, you might find some Parallel ATA (PATA) devices still available should you need them for use or nostalgia. FC HDDs operate at 4Gb, while SAS and SATA devices can operate at up to 6Gb with 3Gb and 1.5Gb backwards compatibility. Note that if supported with applicable adapters, controllers, and enclosures, SAS can also operate in wide modes. Check out SAS SANs for Dummies to learn more about SAS, which also supports attachment of SATA devices.

Ok, did you catch that I did not mention USB or iSCSI HDDs? Nope, that was not a typo: while you can get packaged HDDs or SSDs with USB, iSCSI, FireWire, or Thunderbolt attachments, they utilize either a SAS or SATA HDD inside. Inside the packaging is a bridge or gateway card or adapter that converts from, for example, SATA to USB. In addition to packaging, converters are also available as docking stations, enclosures, or cables. For example, I have some Seagate GoFlex USB-to-SATA and eSATA-to-SATA cables for attaching different devices as needed to various systems.

Hard Disk Drive Cables
Top eSATA to SATA and bottom USB to SATA cable

Besides drive size (form factor) and space capacity, interface and speed, along with features, there are some other differences, such as enterprise-class (both high performance and high capacity) versus desktop and laptop drives, and internal versus external use. These drives can be available via OEMs (server and storage vendors) or systems integrators with their own special firmware, or as generic devices. What this means is that not all SATA or SAS HDDs are the same, from enterprise to desktop, across both 2.5” and 3.5” form factors. Even the HDDs that you can buy, for example, from Amazon will vary based on the above and other factors.

So which HDD is best for your needs?

That will depend on what you need or want to do among other criteria that we will look at in a follow-up post.

Ok, nuff said for now.
