Browsium has released Catalyst, a browser management utility designed to make deploying multiple browsers in the enterprise a manageable reality.
The browser is a gateway to the Internet, to applications, to data, and to the corporate intranet. Outside of the office, it's not uncommon to switch between browser versions across devices, or even to have different browsers on the same device. My Google Apps world is ably accessed from a Chrome experience synchronised between devices, but I keep Internet Explorer on hand, and Firefox still gets a run out, albeit increasingly less so.
Indeed, for many corporations such care-free browser relationships are equally common. This might be because different browser versions are required to maintain access to legacy applications, to give users more choice, or to reduce the impact of a browser security issue. Alternatively, because controlling different browser environments has been complex in the past, it may be deemed less cumbersome and risky to mandate a single browser environment.
With the release of Catalyst, can care-free relationships be afforded a level of sensible protection? Can restrictive single-browser policies be relaxed and made more business-user friendly? Browsium intends Catalyst to reduce helpdesk calls and improve IT security by allowing more granular control of all browsers in the enterprise. How does it do that?
Unless you are one of the few who have gone all solid-state devices (SSDs) for your virtual environment, hard disk drives (HDDs) still have a role. That role might be primary storage for your VMs and/or their data, or a destination target for backups, snapshots, or archiving, or a work and scratch area. Or perhaps you have some HDDs as part of a virtual storage appliance (VSA), storage virtualization, virtual storage, or storage hypervisor configuration. Even if you have gone all SSD for your primary storage, you might be using disk as a target for backups, complementing or replacing tape and clouds. On the other hand, if you have a mix of HDDs and SSDs for production, what are you doing with your test, development, or lab systems, both at work and at home?
Despite the myth that HDDs are dead or have been replaced by SSDs (granted, their role is changing), the HDD as a technology continues to evolve in many areas.
General storage characteristics include:
Internal or external to a server; dedicated or shared with others
Performance in bandwidth, activity, or IOPS and response time or latency
Availability and reliability, including data protection and redundant components
Capacity or space for saving data on a storage medium
Energy and economic attributes for a given configuration
Functionality and additional capabilities beyond read/write or storing data
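To give a feel for how the performance characteristics above interact on a spinning disk, a spindle's peak random IOPS can be roughly estimated from its rotational speed and average seek time. The sketch below uses illustrative (assumed) seek times, not any vendor's specifications:

```python
def estimated_iops(rpm, avg_seek_ms):
    """Rough upper bound on random IOPS for a single HDD spindle.

    Average rotational latency is half a revolution:
        60,000 ms per minute / rpm / 2
    Service time per random I/O = average seek + rotational latency.
    """
    rotational_latency_ms = 60_000 / rpm / 2
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

# Illustrative seek times (assumptions for the sketch, not specs):
for rpm, seek_ms in [(5_400, 9.0), (7_200, 8.5), (10_000, 4.5), (15_000, 3.5)]:
    print(f"{rpm:>6} RPM: ~{estimated_iops(rpm, seek_ms):.0f} IOPS")
```

This is why a 15K RPM drive delivers only a small multiple of a 7.2K drive's random IOPS, and why response time, not just bandwidth, matters when comparing devices.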
Capacity is increasing in terms of areal density (the amount of data stored in a given amount of space on HDD platters) as well as the number of platters stacked into a given form factor. Today there are two primary form factors for HDDs as well as SSDs (excluding PCIe cards): 3.5” and 2.5” small form factor (SFF) widths, available in various heights.
On the left is a 2.5” 1.5TB Seagate Freeplay HDD with a USB or eSATA connection that I use for removable media. On the right are a couple of 3.5” 7,200 RPM HDDs of various capacities; in the center back, an older early-generation Seagate Barracuda. In the middle is a stack of HDD, HHDD, and SSD 2.5” devices, including thin 7mm and 9mm and thick 15mm heights. Note that thick and thin refer to the height of the device, as opposed to thin or thick provisioning.
In addition to form factor, capacity increases and cost reductions, other improvements include reliability in terms of mean time between failure (MTBF) and annual failure rate (AFR). There have also been some performance enhancements across the various types of HDDs, along with energy efficiency and effectiveness improvements. Functionality has also been enhanced with features such as self-encrypting disks (SEDs) or full disk encryption (FDE).
Data is accessed on the disk storage device by a physical and a logical address, sometimes known as a physical block number (PBN) and a logical block number (LBN). The file system or an application performing direct (raw) I/O keeps track of what storage is mapped to which logical blocks on what storage volumes. Within the storage controller and disk drive, a mapping table is maintained to associate logical blocks with physical block locations on the disk or other medium such as tape.
When data is written to disk, regardless of whether it is an object, file, database, or video, the lowest common denominator is a block of storage. Blocks of storage have traditionally been organized into 512 bytes, which aligned with memory page sizes. While 512-byte blocks and memory page sizes are still common, given larger-capacity disk drives as well as larger storage systems, 4KB (e.g., 8 × 512 bytes, or 4,096 bytes) block sizes, called Advanced Format (AF), are appearing. The transition to AF 4KB is occurring over time, with some HDDs and SSDs supporting it now while emulating 512-byte sectors. As part of the migration to AF, some drives have the ability to do alignment work in the background, off-loading server or external software requirements. Also related to HDD block size are optional format sizes, such as the 528-byte sectors used by some operating systems or storage systems.
Larger block sizes enable more data to be managed, or kept track of, in the same footprint by requiring fewer pointers or directory entries. For example, using a 4KB block size, operating systems or storage controllers can keep track of eight times the amount of data in the same footprint. Another benefit is that, with data access patterns changing along with larger I/O operations, a single 4KB operation is more efficient than the equivalent 8 × 512-byte operations for the same amount of data moved.
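The alignment concern behind AF can be shown with simple arithmetic: an I/O or partition whose starting offset is a multiple of 4,096 bytes maps cleanly onto AF physical sectors, while a misaligned one straddles sector boundaries and forces read-modify-write cycles on a 512-byte-emulating drive. A minimal sketch (generic arithmetic, not the output of any particular tool):

```python
PHYSICAL_SECTOR = 4096   # Advanced Format (AF) physical sector size
LOGICAL_SECTOR = 512     # emulated 512-byte logical sector size

def is_aligned(start_lba_512):
    """True if a 512-byte logical block address (LBA) falls on a
    4KB physical sector boundary."""
    return (start_lba_512 * LOGICAL_SECTOR) % PHYSICAL_SECTOR == 0

print(is_aligned(63))    # legacy DOS partition start at LBA 63 → False
print(is_aligned(2048))  # common 1MiB-aligned start at LBA 2048 → True
```

This is why older partitioning tools that started partitions at LBA 63 caused performance problems on AF drives, while modern 1MiB alignment works for both 512-byte and 4KB sector sizes.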
At another detailed layer, the disk drive or flash solid-state device also handles bad block vectoring or replacement transparently to the storage controller or operating system. Note that this form or level of bad block repair is independent of upper-level data protection and availability features, including RAID, backup/restore, replication, snapshots, or continuous data protection (CDP), among others.
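Bad block vectoring can be pictured as a small remap table the drive consults on every access: the host keeps addressing the same logical block while the device silently redirects the access to a reserved spare. The following is a conceptual model only, an assumption-laden sketch rather than how any particular firmware is implemented:

```python
class BadBlockRemapper:
    """Conceptual model of drive-level bad block vectoring.

    The host's view of the device never changes; accesses to a
    block marked bad are transparently redirected to a spare.
    """
    def __init__(self, spare_blocks):
        self.spares = list(spare_blocks)  # reserved physical blocks
        self.remap = {}                   # bad block -> spare block

    def mark_bad(self, physical_block):
        """Retire a failing block by assigning it a spare."""
        if physical_block not in self.remap:
            self.remap[physical_block] = self.spares.pop(0)

    def resolve(self, physical_block):
        """Return the physical block actually used for this access."""
        return self.remap.get(physical_block, physical_block)

drive = BadBlockRemapper(spare_blocks=[10_000, 10_001])
drive.mark_bad(1_234)
print(drive.resolve(1_234))  # redirected to spare 10000
print(drive.resolve(5_678))  # healthy block, unchanged: 5678
```

Because this happens below the storage controller and operating system, it neither replaces nor interferes with RAID, snapshots, CDP, and the other upper-level protection features mentioned above.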
There are also features to optimize HDDs for working with RAID systems, or for doing file copies, such as for use with cloud and object storage systems. Some HDDs are optimized for the start/stop operations found in laptops, along with vibration damping, while others support continuous operation modes. Other features include energy management with spin-down to conserve power, along with intelligent power management (IPM) to vary the performance and amount of energy used.
In addition to drive capacities that range up to 4TB on larger 3.5” form factor HDDs, there are also different sizes of DRAM buffers (measured in MBytes) available on HDDs. Hybrid HDDs (HHDDs), in addition to having DRAM buffers, also have SLC or MLC NAND flash, measured in GBytes, for even larger buffers, as either read or read/write caches. For example, the HHDDs that I have in some of my laptops as well as VMware ESXi servers have 4GB of SLC on a 500GB 7,200 RPM device (Seagate Momentus XT I) or 8GB of SLC on a 750GB device (Seagate Momentus XT II), and are optimized for reads. In the case of an HHDD in my ESXi server, I used a trick I learned from Duncan Epping to make a Momentus XT appear to VMware as an SSD. Other performance optimization options include native command queuing and target mode addressing, which in turn gets mapped into, for example, VMware device mappings (e.g., vmhba0:C0:T1:L0).
Other options for HDDs include speed, with 5,400 (5.4K) revolutions per minute (RPM) at the low end and 15,000 (15K) RPM at the high end, with 7.2K and 10K speeds also available. Interfaces for HDDs include SAS, SATA, and Fibre Channel (FC), operating at various speeds. If you look or shop around, you might find some parallel ATA (PATA) devices still available, should you need them for use or nostalgia. FC HDDs operate at 4Gb, while SAS and SATA devices can operate at up to 6Gb with 3Gb and 1.5Gb backwards compatibility. Note that, if supported by applicable adapters, controllers, and enclosures, SAS can also operate in wide modes. Check out SAS SANs for Dummies to learn more about SAS, which also supports attachment of SATA devices.
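Those interface speeds are line rates, not usable data rates. SATA and SAS at these generations use 8b/10b encoding, so every 10 bits on the wire carry 8 bits of data; a quick calculation shows the approximate usable bandwidth per link:

```python
def usable_mb_per_s(line_rate_gbps):
    """Approximate usable bandwidth of an 8b/10b-encoded link.

    8b/10b encoding: 10 wire bits per 8 data bits, so multiply
    the line rate by 0.8, then divide by 8 bits per byte.
    (Ignores protocol overhead above the encoding layer.)
    """
    return line_rate_gbps * 1000 * 0.8 / 8

for rate in (1.5, 3.0, 6.0):
    print(f"{rate} Gb/s SATA/SAS ≈ {usable_mb_per_s(rate):.0f} MB/s")
# 1.5 Gb/s → 150 MB/s, 3.0 Gb/s → 300 MB/s, 6.0 Gb/s → 600 MB/s
```

This is why 6Gb SATA and SAS links are commonly quoted as roughly 600 MB/s of usable bandwidth, and it also explains the familiar 150/300 MB/s figures for the earlier generations.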
OK, did you catch that I did not mention USB or iSCSI HDDs? Nope, that was not a typo: while you can get packaged HDDs or SSDs with USB, iSCSI, FireWire, or Thunderbolt attachments, they utilize either a SAS or SATA HDD. Inside the packaging is a bridge or gateway card or adapter that converts, for example, from SATA to USB. In addition to packaging, converters are also available as docking stations, enclosures, or cables. For example, I have some Seagate GoFlex USB-to-SATA and eSATA-to-SATA cables for attaching different devices as needed to various systems.
Besides drive size (form factor), space capacity, interface and speed, and features, there are other differences: enterprise-class drives (both high performance and high capacity) versus desktop and laptop drives, for internal and external use. These drives can be available via OEMs (server and storage vendors) or systems integrators with their own special firmware, or as generic devices. What this means is that not all SATA or SAS HDDs are the same, from enterprise to desktop, across both 2.5” and 3.5” form factors. Even the HDDs that you can buy, for example, from Amazon will vary based on the above and other factors.
So which HDD is best for your needs?
That will depend on what you need or want to do among other criteria that we will look at in a follow-up post.
VMware has announced financial results for the fourth quarter of 2012 and for the entire year of 2012. Fourth quarter revenue came in at $1.29B growing 22% over the fourth quarter of 2011. Full year revenue came in at $4.61B also growing 22% over 2011. The full results are detailed in the table below:
We at The Virtualization Practice, LLC have migrated our business critical applications to the cloud. How simple was that task? It was not as easy as we have heard from others, and not as difficult as some have had, but it was not as simple as "move my VM and run." Why is this? What are the methods available to move to the cloud? How do they stack up against what actually happens? Theory is all well and good, and I have read plenty of those architectures, but when the shoe leather hits the cloud, where are we? Here is a short history, a comparison of methods, and some conclusions that can be drawn from our migration to the cloud. Continue reading Migrating Business Critical Applications to the Cloud→
A major aspect of virtualizing any business critical application is data protection, which encompasses not only backup but also disaster recovery and business continuity. It is imperative that our data be protected. While this is true of all workloads, it becomes a bigger concern when virtualizing business critical applications. Not only do we need backups, but we also need to protect the business, which is where business continuity comes into play. Continue reading Virtualizing Business Critical Applications: Data Protection→
As I shoveled even more snow, I started to think about automation, as in how I could get something to shovel the snow for me, which led to thinking about automation within the cloud. I see lots of discussion about automation in the cloud. Many of my friends and colleagues are developing code using Puppet, Chef, vCenter Orchestrator, etc. This development is about producing the software defined datacenter (SDDC). However, I see very little in the way of security automation associated with SDDC. Continue reading Security Automation = Good Security Practice→