Last week was Oracle OpenWorld. It was held in San Francisco at the Moscone Center, which surprised me. I had thought it was closed for refurbishment, as this was the reason VMware had given for holding its annual US shindig in Las Vegas this year.
It seems like Oracle must always have a public enemy number one. Those of you with long enough teeth will remember spats it has had over the years with Microsoft and, more recently, Google and HPE. Well, it seems that Oracle has a new public enemy in its laser sights, and that is Amazon Web Services (AWS). The OpenWorld keynotes proclaimed that Oracle is now a real cloud player and the fastest growing cloud company out there. However, according to The Register, even the usually docile and compliant conference attendees were quite vociferous in denying this.
AWS has introduced a new way to consume its Amazon WorkSpaces cloud desktop service: desktops by the hour. The new service is designed to appeal to businesses with employees needing only occasional computer access and should allow many customers to reduce their costs, although buyers will need to pay close attention to how the service is used. Misconfigure it or underestimate the hours you will use, and you could end up paying more than you do today.
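The hourly-versus-monthly trade-off above comes down to a simple break-even calculation. The sketch below shows the shape of that arithmetic; the rates are hypothetical placeholders, not AWS's published WorkSpaces prices, so plug in the actual figures for your bundle and region before drawing conclusions:

```python
# Illustrative break-even arithmetic for hourly vs. flat-rate cloud desktops.
# All dollar figures below are assumed for illustration, not real AWS prices.
MONTHLY_FLAT = 35.00   # USD/month for an always-on desktop (assumed)
HOURLY_BASE = 7.25     # USD/month base fee under hourly billing (assumed)
HOURLY_RATE = 0.22     # USD per hour of actual use (assumed)

def hourly_bill(hours_used: float) -> float:
    """Monthly cost under the pay-by-the-hour model."""
    return HOURLY_BASE + HOURLY_RATE * hours_used

def break_even_hours() -> float:
    """Hours of use per month above which the flat rate becomes cheaper."""
    return (MONTHLY_FLAT - HOURLY_BASE) / HOURLY_RATE

print(f"Break-even at about {break_even_hours():.0f} hours/month")
print(f"An occasional user at 40 h/month pays ${hourly_bill(40):.2f} "
      f"vs ${MONTHLY_FLAT:.2f} flat")
```

With these assumed rates, an occasional user comes out well ahead on hourly billing, but anyone who leaves desktops running past the break-even point pays more than before, which is exactly the misconfiguration risk the paragraph warns about.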
In a previous article, I suggested that splitting DaaS into separate parts for the broker and the desktops would address some of the challenges of DaaS. Today, I’d like to take a closer look at how this might work.
How much is your data worth? How much does it cost to store your data? I doubt that you have numbers for either of these things. But maybe it is time to start thinking carefully about both of these numbers. If the cost of storing your data exceeds its value, then you probably shouldn’t be storing the data. The trigger for this thought is the oncoming Internet of Things (IoT) data tsunami. A few guesstimates I’ve seen suggest that we will see around 50 zettabytes (ZB) of data generated in the next five years. That is 50 billion terabytes. One place to store 50 ZB of data is on AWS’s cheapest storage, Glacier. At the cheapest published price for Glacier, your monthly bill would run to hundreds of billions of dollars. I wonder whether knowing the temperature inside my refrigerator every minute of the day is worth that much money.
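The back-of-the-envelope sum behind that bill is worth writing out, since zettabyte arithmetic is easy to get wrong by a factor of a thousand. The per-gigabyte rate below is an assumption standing in for Glacier's cheapest published price at the time of writing:

```python
# Sanity-check the storage cost claim above.
# Assumption (not from the article): Glacier at $0.004 per GB-month.
ZB = 10**21   # bytes in a zettabyte (decimal units, as storage vendors count)
TB = 10**12   # bytes in a terabyte
GB = 10**9    # bytes in a gigabyte

data_bytes = 50 * ZB
price_per_gb_month = 0.004  # USD per GB-month, assumed Glacier rate

terabytes = data_bytes / TB                       # how many TB is 50 ZB?
monthly_bill = (data_bytes / GB) * price_per_gb_month

print(f"50 ZB = {terabytes:.0e} TB")              # 5e+10, i.e. 50 billion TB
print(f"Monthly bill: ${monthly_bill:,.0f}")      # around $200 billion
```

Even at a fraction of a cent per gigabyte, fifty zettabytes lands in the hundreds of billions of dollars every month, which is the point: at IoT scale, "just keep everything" stops being a viable default.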
There seems to be a trend of providers abandoning the commodity public cloud market. We saw HP exit its Helion Public Cloud, and more recently, Verizon shut down one of its Infrastructure as a Service (IaaS) products. At the same time, we see Amazon and Microsoft heavily committed to public cloud and making a lot of money. I think there is a fundamental difference between what the successful cloud providers and the commodity VM providers offer. The big difference is that the successful cloud providers sell mostly non-commodity services: services that are not available elsewhere. The value proposition for AWS and Azure is not really in running your VMs; it is in offering services that your applications, or users, can consume. These cloud services are consumed by application developers: information systems people rather than information technology people. The providers lock in customers by delivering unique and valuable services, with a low cost of entry to entice customers and a high cost of exit to retain them.
A number of companies are racing to own the enterprise landscape for infrastructure automation and development pipelines (aka continuous integration and continuous deployment). What is unfolding here looks very similar to what we have witnessed in the cloud market.