We are in the midst of an analytics boom. Everywhere I look, analytics are presented as the answer to everything from sweaty pores to security; they may even improve hair growth. Hype aside, analytics truly are invading everything we do. There are three types of analytics, and overreliance on any one type leaves businesses vulnerable to false positives. IT lives in the land of false positives: we disable and ignore seemingly false alerts, but are they really false? How can we gain more from our analytics?
This year, fake news became big news. But fake news isn't really new: it has been going on for years. We see fake news all over the IT industry, where partial truths and irrelevant benchmarks are used to sell products. The presence of fakes means we need to assess our IT news carefully. The reality is that no source of news can be trusted absolutely; you must evaluate the truth and the usefulness of each piece of IT news.
In February, several of the top virtualization vendors released their fourth-quarter results and guidance. These releases should give us an understanding of how the companies are performing. VMware, Citrix, and Red Hat are the companies for which I have data to share. The source of the revenue reports is Cleveland Research Company (CRC), and the sources of the individual company reports are CRC and FactSet Estimates.
In the last three Virtualization and Cloud Security Podcasts, Mike Foley, Sr. Technical Marketing Architect for vSphere Security, mentioned security disaster recovery plans. There is a growing need for such plans. The 174th podcast covered this need, as well as the why and the how of putting such plans together. Unlike traditional disaster recovery, security disaster recovery is about recovering from a disastrous security event. How would your organization respond to such an event? Is it just incident response? It is more than that. As you listen to the podcast, consider these thoughts.
One of the big issues for virtual desktop infrastructure (VDI) has always been controlling the cost per user without compromising the user experience. It has also been common for the largest VDI vendors to each have their own hypervisor. One of the significant elements of controlling cost is automating the creation of users' desktop VMs, so a VDI product's ability to control a hypervisor is central to controlling its operational cost. Conversely, providing an API for VDI products is a useful capability for any hypervisor. It turns out that Nutanix is the only major hypervisor vendor without its own VDI product.
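To make the automation point concrete, here is a minimal sketch of what driving a hypervisor from a VDI broker might look like: cloning each user's desktop VM from a golden-image template through an API. The endpoint path, payload fields, and function names here are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: a VDI broker asking a hypervisor's REST API to
# clone a desktop VM from a golden-image template for each new user.
# The endpoint and payload fields are assumptions, not a real API.

def clone_request(template: str, user: str) -> dict:
    """Build the (hypothetical) API request for one user's desktop VM."""
    return {
        "method": "POST",
        "path": "/api/v1/vms/clone",      # assumed endpoint
        "body": {
            "source_template": template,   # golden image to clone from
            "vm_name": f"desktop-{user}",  # per-user VM name
            "linked_clone": True,          # share the base disk to cut storage cost
        },
    }

def provision_pool(template: str, users: list[str]) -> list[dict]:
    """One clone request per user: the automation that keeps per-user cost down."""
    return [clone_request(template, u) for u in users]

if __name__ == "__main__":
    for req in provision_pool("win10-golden", ["alice", "bob"]):
        print(req["body"]["vm_name"])
```

The point of the sketch is the pattern, not the payload: whichever hypervisor sits underneath, the VDI product only controls cost if it can drive this kind of cloning programmatically.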
As I’ve thought about how to implement high-performance, very large-scale networks within a secure hybrid cloud, I have come to the conclusion that the cloud works best with disaggregated network functions. This is the goal of network function virtualization, or NFV, but the real problem is knowing which functions to virtualize and how to do so at scale. Very large scale. We need to consider the multiple paths our data will take and the rates at which data can pass through the various virtual components that make up the hybrid cloud. When we think hybrid cloud, we need to think scale out, not scale up: scaling up can cost a lot of money, while scaling out may save it. This means we need to rethink networking and security as well as data protection. With containers on my mind, we have a path for our journey.
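As a small illustration of the multipath idea, here is a sketch of equal-cost multipath (ECMP)-style path selection: hashing a flow's 5-tuple so that every packet of a flow takes the same path while different flows spread across all available paths. The addresses, ports, and path count are illustrative assumptions; real switches use their own hash functions.

```python
import hashlib

def pick_path(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              proto: str, num_paths: int) -> int:
    """ECMP-style selection: hash the flow 5-tuple to one of num_paths links.

    Packets of the same flow always land on the same path (avoiding
    reordering), while different flows spread across all paths.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_paths

if __name__ == "__main__":
    # Same flow -> same path, every time.
    p1 = pick_path("10.0.0.1", "10.0.1.5", 49152, 443, "tcp", 4)
    p2 = pick_path("10.0.0.1", "10.0.1.5", 49152, 443, "tcp", 4)
    print(p1 == p2, 0 <= p1 < 4)
```

This is also why scale-out suits the hybrid cloud: adding capacity just means raising `num_paths` (adding another parallel link or virtual function instance), rather than buying a bigger box.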