Hotlink and its Cross-Platform Cloud Management technology have been in the news recently with the announcement of the latest release, and of a free version, of its flagship product, Hotlink SuperVISOR for VMware vCenter. This technology extends VMware vCenter management capabilities to Microsoft Hyper-V, Citrix XenServer, and Red Hat Enterprise Linux (KVM). Bernd Harzog did a great post covering this latest release, so there is no need to repeat it here, but I would like to share my thoughts on how this type of technology has the potential to fundamentally change the direction of virtualization and/or cloud computing.
One of the secret ingredients in Hotlink’s technology is the Transformation Engine, which decouples VMware vCenter from the vSphere hypervisor so that multiple different hypervisors can be controlled via VMware vCenter Server. The Transformation Engine is what I would call the integration engine, in that it performs the translation between technologies. I wonder whether the ability to decouple vCenter and manage all the different hypervisors was the project plan all along, or whether it was an added bonus discovered during development of the Transformation Engine.
Additional feature or added bonus, call it what you want, but I think this is going to open some doors in cross-platform features. Hotlink is just the first of what may be many different cross-platform strategies. Just as Hotlink has made VMware vCenter Server the centralized management point, I think other companies will present similar technology; having Microsoft’s System Center as the management point would be just one example.
Now here is where it can get really good. Once the cross-platform management concept takes off and we truly have a choice of which technology to use as the central management point, we could get to a point where certain features could be cherry-picked and used across all systems. What I mean is that hopefully there will be the ability to take advantage of specific features that are available only on a specific platform. One example with the Hotlink technology is taking advantage of VMware’s Distributed Resource Scheduler (DRS) and expanding its use to a Microsoft Hyper-V cluster. An additional bonus feature would be the ability to take advantage of VMware’s vCenter Operations.
Would it be too much to think that in the future we might be able to pick and choose features from the different platforms and apply them to the infrastructure as a whole? Why not? The integration engine is the key to keeping everything talking to everything else.
By design or by accident, cross-platform cloud management has opened a door to a possibility that I don’t think VMware, Microsoft, Citrix, or any other hypervisor vendor would have thought might happen. Will this “feature” continue to grow and expand, or will its functionality be diminished or removed? Time will tell, and we will just have to see for ourselves.
AppDynamics has just raised $50M and New Relic has just raised $80M, both in preparation for going public. The legacy APM vendors are about to have a really serious problem. These funding rounds prove that some of the smartest investors in the world now believe that virtualization, cloud computing, new languages, and dynamic runtime environments combine to create both a brand-new set of requirements for a relevant management stack and the opportunity for a brand-new set of vendors to be both the platforms for and the foundations of that new management stack.
Most of the private clouds that have been implemented to date have focused on transient workloads like development, QA, load testing, pilot, pre-production, and training. Private clouds are great for these use cases because they automate the process of creating and tearing down these transient environments as needed. However, production business-critical applications are not transient, and most do not need to scale up and down with demand. This creates an entirely different set of requirements for putting these business-critical applications in private clouds.
VMware has published a fascinating white paper that goes through the resource requirements, performance requirements, suggested configurations, and performance comparisons between physical and virtual environments, as well as whether there are licensing issues, for various business critical applications.
The VMware Virtualizing Business Critical Apps White Paper
VMware has also published a list of worldwide partners who have specific domain expertise in virtualizing business critical applications. That matrix is below.
More information on virtualizing business critical applications can be found at Blogs.VMware/Apps/
Having successfully virtualized all of the low-hanging fruit in its customer base, VMware has now put a serious focus upon virtualizing business critical and performance critical applications. That focus includes not just highlighting progress to date, but also working with partners who have specific domain expertise in specific applications.
Browsium have released Catalyst, a browser management utility designed to make deploying multiple browsers in the enterprise a manageable reality.
The browser is a gateway to the Internet, to applications, to data, to the corporate intranet. Outside of the office, it’s not uncommon to switch between browser versions between devices, or even to have different browsers on the same device. My Google Apps world is ably accessed from a Chrome experience synchronised between devices, but I have Internet Explorer on hand, and Firefox still gets a run out, albeit increasingly less so.
Indeed, for many corporations such care-free browser relationships are equally common. This might be because different browser versions are required to maintain access to legacy applications, because users are given more choice, or as an effort to reduce the impact of a browser security issue. Alternatively, because control of different browser environments has been complex in the past, it may be deemed less cumbersome and risky to mandate a single browser environment.
With the release of Catalyst, can care-free relationships be afforded a level of sensible protection? Can restrictive single-browser choices be relaxed and made more business-user friendly? Browsium intend Catalyst to reduce helpdesk calls and improve IT security by allowing more granular control of all browsers in the enterprise. How does it do that?
Unless you are one of the few who have gone all solid-state devices (SSDs) for your virtual environment, hard disk drives (HDDs) still have a role. That role might be primary storage of your VMs and/or their data, a destination target for backups, snapshots, or archiving, or a work and scratch area. Or perhaps you have some HDDs as part of a virtual storage appliance (VSA), storage virtualization, virtual storage, or storage hypervisor configuration. Even if you have gone all SSD for your primary storage, you might be using disk as a target for backups, complementing or replacing tape and clouds. On the other hand, if you have a mix of HDDs and SSDs for production, what are you doing with your test, development, or lab systems, both at work and at home?
Despite the myth of being dead or having been replaced by SSDs (granted their role is changing), HDD as a technology continues to evolve in many areas.
General storage characteristics include:
Internal or external to a server or, dedicated or shared with others
Performance in bandwidth, activity, or IOPS and response time or latency
Availability and reliability, including data protection and redundant components
Capacity or space for saving data on a storage medium
Energy and economic attributes for a given configuration
Functionality and additional capabilities beyond read/write or storing data
Capacity is increasing in terms of areal density (the amount of data stored in a given amount of space on HDD platters) as well as the number of platters stacked into a given form factor. Today there are two primary form factors for HDDs as well as SSDs (excluding PCIe cards): 3.5” and 2.5” small form factor (SFF) widths, available in various heights.
On the left is a 2.5” 1.5TB Seagate Freeplay HDD with a USB or eSATA connection that I use for removable media. On the right are a couple of 3.5” 7,200 RPM HDDs of various capacities; in the center back is an older early-generation Seagate Barracuda. In the middle is a stack of HDD, HHDD, and SSD 2.5” devices, including thin 7mm and 9mm and thick 15mm heights. Note that thick and thin refer to the height of the device, as opposed to thin or thick provisioned.
In addition to form factor, capacity increases and cost reductions, other improvements include reliability in terms of mean time between failure (MTBF) and annual failure rate (AFR). There have also been some performance enhancements across the various types of HDDs, along with energy efficiency and effectiveness improvements. Functionality has also been enhanced with features such as self-encrypting disks (SEDs) or full disk encryption (FDE).
Data is accessed on the disk storage device by a physical and a logical address, sometimes known as a physical block number (PBN) and a logical block number (LBN). The file system or an application performing direct (raw) I/O keeps track of what storage is mapped to which logical blocks on what storage volumes. Within the storage controller and disk drive, a mapping table is maintained to associate logical blocks with physical block locations on the disk or other medium such as tape.
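The logical-to-physical mapping described above can be sketched as a simple lookup table. This is an illustrative Python model of the idea, not how any particular drive firmware or controller actually implements it (real drives allocate physical blocks based on geometry, not a simple counter):

```python
# Hypothetical sketch: a mapping table associating logical block
# numbers (LBNs) with physical block numbers (PBNs), as a drive or
# controller conceptually maintains one.

class BlockMap:
    def __init__(self):
        self._map = {}       # logical block number -> physical block number
        self._next_pbn = 0   # next free physical block (naive allocator)

    def write(self, lbn):
        """Map a logical block to a physical location on first write."""
        if lbn not in self._map:
            self._map[lbn] = self._next_pbn
            self._next_pbn += 1
        return self._map[lbn]

    def read(self, lbn):
        """Return the physical block backing a logical block, if mapped."""
        return self._map.get(lbn)

bm = BlockMap()
bm.write(100)        # file system writes logical block 100
bm.write(7)          # then logical block 7
print(bm.read(100))  # -> 0 (first physical block allocated)
print(bm.read(7))    # -> 1
```

The indirection is the point: the file system only ever sees stable logical block numbers, so the device is free to place, or later relocate, the physical blocks underneath.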
When data is written to disk, regardless of whether it is an object, file, Web page, database, or video, the lowest common denominator is a block of storage. Blocks of storage have traditionally been organized into 512 bytes, which aligned with memory page sizes. While 512-byte blocks and memory page sizes are still common, given larger-capacity disk drives as well as larger storage systems, 4KB (e.g., 8 × 512 bytes, or 4,096 bytes) block sizes, called Advanced Format (AF), are appearing. The transition to 4KB AF is occurring over time, with some HDDs and SSDs supporting it now while also emulating 512-byte sectors. As part of the migration to AF, some drives have the ability to do alignment work in the background, off-loading server or external software requirements. Also related to HDD block size are optional format sizes, such as the 528-byte sectors used by some operating systems or storage systems.
Larger block sizes enable more data to be tracked in the same footprint by requiring fewer pointers or directory entries. For example, using a 4KB block size, operating systems or storage controllers can keep track of eight times the amount of data in the same footprint. Another benefit is that, with data access patterns changing along with larger I/O operations, a single 4KB operation is more efficient than the equivalent 8 × 512-byte operations for the same amount of data.
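The bookkeeping saving is simple arithmetic. This quick sketch (illustrative capacity only, decimal terabytes assumed) counts the block entries needed to track 1TB of data at each sector size:

```python
# How many block entries are needed to track 1 TB at 512 B vs 4 KB sectors?
capacity = 1_000_000_000_000   # 1 TB (decimal), illustrative figure
legacy = capacity // 512       # entries at 512-byte sectors
advanced = capacity // 4096    # entries at 4 KB Advanced Format sectors

print(legacy)                  # 1953125000 entries
print(advanced)                # 244140625 entries
print(legacy // advanced)      # 8x fewer entries with 4 KB blocks
```

Nearly two billion entries shrink to under a quarter billion, which is exactly the eightfold reduction described above.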
At another detailed layer, the disk drive or flash solid-state device also handles bad block vectoring or replacement transparently to the storage controller or operating system. Note that this form or level of bad block repair is independent of upper-level data protection and availability features, including RAID, backup/restore, replication, snapshots, or continuous data protection (CDP), among others.
There are also features to optimize HDDs for working with RAID systems, or for doing file copies such as with cloud and object storage systems. Some HDDs are optimized for the start/stop operations found in laptops, along with vibration damping, while others support continuous operation modes. Other features include energy management with spin-down to conserve power, along with intelligent power management (IPM) to vary performance and the amount of energy used.
In addition to drive capacities that range up to 4TB on larger 3.5” form factor HDDs, there are also different sizes of DRAM buffers (measured in MBytes) available on HDDs. Hybrid HDDs (HHDDs), in addition to having DRAM buffers, also have SLC or MLC NAND flash measured in GBytes for even larger buffers, used as either read or read/write caches. For example, the HHDDs that I have in some of my laptops as well as VMware ESXi servers have 4GB of SLC on a 500GB 7,200 RPM device (Seagate Momentus XT I) or 8GB of SLC on a 750GB device (Seagate Momentus XT II), and are optimized for reads. In the case of an HHDD in my ESXi server, I used a trick I learned from Duncan Epping to make a Momentus XT appear to VMware as an SSD. Other performance optimization options include native command queuing and target mode addressing, which in turn gets mapped into, for example, VMware device mappings (e.g., vmhba0:C0:T1:L0).
Other options for HDDs include spindle speed, with 5,400 (5.4K) revolutions per minute (RPM) at the low end and 15,000 (15K) RPM at the high end, and 7.2K and 10K speeds also available. Interfaces for HDDs include SAS, SATA, and Fibre Channel (FC), operating at various speeds. If you look or shop around, you might find some parallel ATA (PATA) devices still available should you need them for use or nostalgia. FC HDDs operate at 4Gb, while SAS and SATA devices can operate at up to 6Gb with 3Gb and 1.5Gb backwards compatibility. Note that, if supported with applicable adapters, controllers, and enclosures, SAS can also operate in wide modes. Check out SAS SANs for Dummies to learn more about SAS, which also supports attachment of SATA devices.
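Spindle speed translates directly into average rotational latency: on average the head waits half a revolution for the right sector to come around. This short sketch computes that figure for the common RPM classes mentioned above:

```python
# Average rotational latency = time for half a platter revolution.
def avg_rotational_latency_ms(rpm):
    seconds_per_rev = 60.0 / rpm          # one full revolution
    return (seconds_per_rev / 2.0) * 1000.0  # half a rev, in milliseconds

for rpm in (5_400, 7_200, 10_000, 15_000):
    print(f"{rpm:>6} RPM: {avg_rotational_latency_ms(rpm):.2f} ms")
# 5400 RPM = 5.56 ms, 7200 = 4.17 ms, 10K = 3.00 ms, 15K = 2.00 ms
```

Those few milliseconds per random I/O, multiplied across thousands of IOPS, are a big part of why faster spindles (and SSDs) matter for random workloads while mattering far less for sequential ones.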
OK, did you catch that I did not mention USB or iSCSI HDDs? Nope, that was not a typo: while you can get packaged HDDs or SSDs with USB, iSCSI, FireWire, or Thunderbolt attachments, they utilize either a SAS or SATA HDD inside. Inside the packaging is a bridge or gateway card or adapter that converts from, for example, SATA to USB. In addition to packaging, converters are also available as docking stations, enclosures, or cables. For example, I have some Seagate GoFlex USB-to-SATA and eSATA-to-SATA cables for attaching different devices as needed to various systems.
Besides drive size (form factor) and space capacity, interface and speed, and features, there are some other differences: enterprise class (both high-performance and high-capacity) versus desktop and laptop class, and internal versus external use. These drives can be available via OEMs (server and storage vendors) or systems integrators with their own special firmware, or as generic devices. What this means is that not all SATA or SAS HDDs are the same, from enterprise to desktop, across both 2.5” and 3.5” form factors. Even the HDDs that you can buy, for example, from Amazon will vary based on the above and other factors.
So which HDD is best for your needs?
That will depend on what you need or want to do among other criteria that we will look at in a follow-up post.