The march of Amazon Web Services toward public cloud domination has been relentless, from its inception in 2006 with a single service (S3 storage) to the behemoth it has become today. The minnow has become the biggest fish in the pond. But is it unstoppable? Has it won the public cloud wars?
SDDC & Hybrid Cloud
Cloud computing has evolved beyond a sole focus on how to construct, secure, manage, monitor, and utilize IaaS, PaaS, and SaaS clouds. As it matures, it is moving from pure resource management to combined data and resource management.
SDDC is the next evolution in on-site data center technology. It has taken the knowledge gained from the server virtualization revolution and blended it with software-defined storage and networking to create a data center defined and managed by software running on invisible hardware.
Hybrid Cloud covers the technologies, as well as the operational processes, both technical and business, for deploying, consuming, and utilizing this paradigm.
Major areas of focus include barriers to adoption; progress on the part of vendors in removing those barriers; where the lines of responsibility are drawn between the cloud vendor and the customer for IaaS, PaaS, SaaS, and hybrid clouds; and management tools that are essential to deploying and managing the cloud, ensuring its security and the performance of applications.
A while ago, I was listening to the GreyBeards on Storage podcast. One idea suggested by a guest is that application storage is going to end up in one of two forms. There will only be flash and cloud: specifically, flash in a mobile device and object storage in the cloud. I don’t buy that as the end state, but I do see flash and cloud as the two growth areas of storage. I am seeing flash as a performance tier close to consumption, and a separate persistent capacity tier. I am also hearing a lot more about the persistent tier being farther away, across town or across the country. At recent Tech Field Day events, I saw a couple of companies that are putting flash storage close to users and persistent storage farther away. ClearSky provides a managed primary storage service, and Avere wants to alleviate your NAS performance problems.
Although it is becoming less interesting over time, the hypervisor is still the cornerstone of the modern data center. As we enter the age of the hybrid cloud, that data center is stretching into the cloud. With the rise of containers, we are seeing clouds move to bare metal once more. While this works for new applications, it does not necessarily work for existing ones. Through 2017, the hypervisor will still be important to the data center and to many clouds. After 2017, we will see; it depends on the impact of many new technologies. Here is our 2016–2017 cost comparison spreadsheet.
In the beginning, there was Hewlett and there was Packard, and they formed a company called Hewlett-Packard (HP), and the rest, as they say, is history. Yes, Hewlett-Packard picked up some companies along the way: Compaq (which had itself absorbed DEC), Autonomy, and EDS, to name a few. HP had its fingers in many pies, acquiring numerous technology companies while attempting to become a one-stop provider of everything: storage (3PAR, LeftHand Networks), compute (Compaq, Neoware), networking (Metrix Network Systems, Colubris Networks, 3Com, Aruba). It also acquired various software companies (Persist Technologies, Novadigm, RLX Technologies, Opsware, Autonomy) and professional service providers (Atos Origin Middle East Group, CGNZ, ManageOne, EDS), among others. Things seemed to be going the right way for it, but then along came a slightly disruptive technology called virtualization. HP, the hardware company, weathered it. Now we are in the cloud era, and everybody is doing everything it can to become software defined: virtual SANs, virtual networks, and so on down the list.
Someone suggested “the next-generation data center” to me as a topic for an upcoming panel discussion. Here are my thoughts on the subject.
This is the first of many comparisons and commentaries on data protection within the hybrid cloud. We are looking at the mechanisms used to achieve data protection. Mechanisms may sound boring, yet from an architectural and data management view, they become increasingly important. The mechanisms available can impact the costs of your data protection. One example: it is often thought that data protection is instantaneous. It is not. It has a window of execution measured in hours, not microseconds. If you need microsecond-level data protection, you may need other tools to fill that need.
The first things to decide are how quickly you need to recover your application (recovery time objective, or RTO) and how much data loss you can stomach during recovery (recovery point objective, or RPO). RPO determines how often data protection must run, while RTO governs how soon recovery will be completed once started. This pair of critical factors will control which mechanisms are important within your organization. Beyond those two, other equally important factors influence the types of recovery mechanisms in use.
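The relationship between these objectives and a protection schedule can be sketched in a few lines. The function name and structure below are illustrative, not from any particular product: the key point is that worst-case data loss equals the interval between protection runs, which is what must be compared against the RPO.

```python
from datetime import timedelta

def meets_objectives(backup_interval: timedelta,
                     measured_recovery: timedelta,
                     rpo: timedelta,
                     rto: timedelta) -> dict:
    """Check a protection schedule against RPO/RTO targets.

    Worst-case data loss equals the backup interval: a failure just
    before the next run loses everything since the previous one.
    """
    return {
        "worst_case_data_loss": backup_interval,
        "rpo_met": backup_interval <= rpo,
        "rto_met": measured_recovery <= rto,
    }

# Hypothetical example: nightly backups against a 4-hour RPO.
# The 24-hour worst-case loss exceeds the RPO, so the schedule fails
# even though a 2-hour recovery comfortably meets the 6-hour RTO.
result = meets_objectives(
    backup_interval=timedelta(hours=24),
    measured_recovery=timedelta(hours=2),
    rpo=timedelta(hours=4),
    rto=timedelta(hours=6),
)
```

A schedule that fails this check pushes you toward more frequent snapshots or continuous replication, which is exactly why the available mechanisms drive cost.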