In February, several of the top virtualization vendors released their fourth-quarter results and guidance. These releases give us a good picture of how the companies are performing. VMware, Citrix, and Red Hat are the companies for which I have data to share. The source of the revenue reports is Cleveland Research Company, and the sources of the individual company reports are CRC and FactSet Estimates.
SDDC & Hybrid Cloud
Cloud computing has evolved beyond focusing only on how to construct, secure, manage, monitor, and utilize IaaS, PaaS, and SaaS clouds. As the paradigm matures, it is moving from a pure resource management paradigm to a combined data and resource management paradigm.
SDDC is the next evolution in on-site data center technology. It has taken the knowledge gained from the server virtualization revolution and blended it with software-defined storage and networking to create a data center defined and managed by software running on invisible hardware.
Hybrid Cloud covers the technologies and operational processes, both technical and business, for deploying, consuming, and utilizing this paradigm.
Major areas of focus include barriers to adoption; progress on the part of vendors in removing those barriers; where the lines of responsibility are drawn between the cloud vendor and the customer for IaaS, PaaS, SaaS, and hybrid clouds; and management tools that are essential to deploying and managing the cloud, ensuring its security and the performance of applications.
As I’ve thought about how to implement high-performance, very-large-scale networks within a secure hybrid cloud, I have come to the conclusion that the cloud works best with disaggregated network functions. This is the goal of network function virtualization, or NFV, but the real problem is knowing which functions to virtualize and how to do so at scale. Very large scale. We need to consider the multiple paths our data will take and the rates at which data can pass through the various virtual components that make up the hybrid cloud. When we think hybrid cloud, we need to think scale out, not up. Scaling up can cost lots of money, while scaling out may save dollars. This means we need to rethink networking and security as well as protection. With containers on my mind, we have a path for our journey.
In our data protection research, we have discovered that there are quite a number of companies that say they do Disaster Recovery as a Service (DRaaS). Just what is DRaaS? What are the basic requirements? Is using a public cloud better than using hosted DRaaS? Are there any risks? Is DRaaS just a dump-and-go? Is DRaaS just another managed services play? There are many questions—now, let us look at some answers.
There has been plenty of discussion about what the Internet of Things (IoT) means for IT and for storage vendors. The usual answer is that IoT will soon be the largest consumer of storage. Our basic expectation is that IoT will feed a lot of object storage in the cloud or in central corporate data centers. Personally, I am deeply suspicious when I’m told that there is only one way to solve an IT problem. We are starting to see that IoT data is often processed near the IoT device, and only a subset of data is transferred to the central object store. I think that this will drive a lot of compute and storage to the edge of the network, close to the IoT devices. I think we will see a whole new category of products that will be deployed close to IoT sensors.
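The edge-processing pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's product: the sensor samples, the anomaly threshold, and the `process_at_edge` function are all invented for the example. The idea is simply that raw readings stay at the edge, while only a compact summary plus any anomalies cross the WAN to the central object store.

```python
# Hypothetical sketch of edge-side IoT preprocessing: aggregate raw
# sensor readings locally and forward only a summary and anomalies.
from statistics import mean

ANOMALY_THRESHOLD = 90.0  # assumed limit for this illustration


def process_at_edge(readings):
    """Return the small subset of data worth sending upstream."""
    summary = {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
    }
    anomalies = [r for r in readings if r > ANOMALY_THRESHOLD]
    return {"summary": summary, "anomalies": anomalies}


# Simulated minute of temperature samples from one sensor
samples = [71.2, 70.8, 71.5, 95.3, 71.0, 70.9]
payload = process_at_edge(samples)
# Only `payload` crosses the network; the raw samples stay at the edge.
```

Even this toy version shows the economics: six raw readings shrink to one summary record and one anomaly, and that ratio only improves as sample rates climb.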
Everywhere we look, cloud is the big buzzword. Digital transformation insists that we embrace cloud computing as the next evolution of our enterprise. Microsoft, Amazon, Citrix, and VMware, among many others, are all focused on cloud and mobility as the next logical step forward. Upstart companies like Netflix are the trailblazers, embracing the cloud completely and putting less agile competitors like Blockbuster to the sword.
We are being fed a diet of marketing that tells us we should ignore the cloud at our peril. In response, what are the pertinent questions we should be asking to ensure that we make the right decisions for the future shape of our enterprise IT?
Nothing quite changes like IT. We have gone from incredibly manual, thought-intensive human processes to handling petabytes of data to make a single decision. In essence, our requirements have changed to meet our real-world needs, whether the change was to improve performance, capacity, or something else. Requirements rule the world of IT. Recently, we have seen a further shift in requirements. TVP Strategy is currently looking at one small subset of IT: data protection. Our approach has been to produce a coverage graph, which gives us a clear visual of how vendors’ products match up. But that is not all. We recently did some analysis comparing products over time as our coverage graph requirements have evolved. The results of these comparisons over time are very interesting.
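The comparison-over-time idea can be illustrated with a toy scoring function. Everything here is invented for the example: the requirement names, the product's feature set, and the `coverage` function are assumptions, not the actual coverage graph methodology. The point it demonstrates is that a product can add features yet hold the same score when the requirements bar rises alongside it.

```python
# Hypothetical illustration: score a product's feature set against a
# requirements list, then re-score as the requirements evolve.
# All requirement and feature names are invented for this sketch.

def coverage(features, requirements):
    """Fraction of requirements a product satisfies."""
    met = sum(1 for r in requirements if r in features)
    return met / len(requirements)


# An early requirements set, and a later one that has grown
reqs_2015 = {"backup", "replication", "restore-testing"}
reqs_2017 = reqs_2015 | {"draas", "cloud-target", "analytics"}

# A fictional product's feature set
product = {"backup", "replication", "draas", "cloud-target"}

then = coverage(product, reqs_2015)  # 2/3
now = coverage(product, reqs_2017)   # 4/6 -- more features, same score
```

The fictional product doubled its matched features between the two snapshots, yet its coverage fraction is unchanged because the requirements grew too, which is exactly the kind of result that makes longitudinal comparison interesting.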