As we hear more about VMware, AWS, and IBM, a new story emerges. VMware is talking about cross-cloud management, cloud-native applications, the journey from the data center to the cloud, and how to transform into a hybrid cloud. There is an interesting thread running through all of this. We have read about the winners and losers in VMware’s new approach, but what most are missing is that there are no real losers, only winners; it depends on your mindset. The Achilles’ heel of IT is not hybrid cloud, but scale: how do we scale up our applications fast enough to handle the new IT? The approach VMware is taking is a major pivot for it. Let us look at some fundamentals.
Articles Tagged with AWS
Last week was Oracle OpenWorld. It was held in San Francisco at the Moscone Center, which surprised me. I had thought it was closed for refurbishment, as this was the reason VMware had given for holding its annual US shindig in Las Vegas this year.
It seems like Oracle must always have a public enemy number one. Those of you with long enough teeth will remember spats it has had over the years with Microsoft and, more recently, Google and HPE. Well, it seems that Oracle has a new public enemy in its laser sights, and that is Amazon Web Services (AWS). The OpenWorld keynotes proclaimed that Oracle is now a real cloud player and the fastest growing cloud company out there. However, according to The Register, even the usually docile and compliant conference attendees were quite vociferous in denying this.
AWS has introduced a new way to consume its Amazon WorkSpaces cloud desktop service: desktops by the hour. The new service is designed to appeal to businesses with employees who need only occasional computer access, and it should allow many customers to reduce their costs, although buyers will need to pay close attention to how the service is used. Misconfigure it, or underestimate the hours that will be used, and you could see your bill go up rather than down.
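Whether hourly desktops actually save money comes down to simple break-even arithmetic. Here is a minimal sketch in Python; all three prices are illustrative assumptions, not AWS list prices, so substitute the published rates for your bundle before drawing conclusions:

```python
# Break-even point between flat-monthly and hourly desktop billing.
# All three prices below are illustrative assumptions, not AWS list prices.
MONTHLY_FLAT = 25.00   # assumed flat monthly rate per desktop
HOURLY_BASE = 7.25     # assumed monthly base fee on the hourly plan
HOURLY_RATE = 0.22     # assumed charge per hour of actual use

def hourly_plan_cost(hours_used: float) -> float:
    """Monthly cost of the hourly plan for a given number of hours."""
    return HOURLY_BASE + HOURLY_RATE * hours_used

def break_even_hours() -> float:
    """Hours per month above which the flat plan becomes cheaper."""
    return (MONTHLY_FLAT - HOURLY_BASE) / HOURLY_RATE

if __name__ == "__main__":
    for hours in (20, 80, 160):
        print(f"{hours:3d} h/month: hourly plan ${hourly_plan_cost(hours):6.2f}, "
              f"flat plan ${MONTHLY_FLAT:6.2f}")
    print(f"Break-even at about {break_even_hours():.0f} hours/month")
```

With these assumed prices, an occasional user at 20 hours a month pays well under half the flat rate, while a full-time user at 160 hours pays more than the flat rate: exactly the trade-off buyers need to watch.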
How much is your data worth? How much does it cost to store? I doubt that you have numbers for either of these things, but maybe it is time to start thinking carefully about both, because if the cost of storing your data exceeds its value, then you probably shouldn’t be storing it. The trigger for this thought is the oncoming Internet of Things (IoT) data tsunami. A few guesstimates I’ve seen suggest that around 50 zettabytes (ZB) of data will be generated in the next five years. That is 50 billion terabytes. One place to store 50 ZB of data is on AWS’s cheapest storage, Glacier. Even at the cheapest published price for Glacier, your monthly bill would run to hundreds of billions of dollars. I wonder whether knowing the temperature inside my refrigerator every minute of the day is worth that much money.
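The arithmetic behind that bill is easy to sketch. A minimal Python check, assuming a cold-storage price of $0.004 per GB-month (an assumed figure in the ballpark of Glacier's published pricing; plug in the current rate for a real estimate):

```python
# Back-of-the-envelope monthly cost of storing 50 ZB in cold storage.
PRICE_PER_GB_MONTH = 0.004   # assumed $/GB-month, roughly Glacier-class pricing
GB_PER_ZB = 10**12           # 1 ZB = 10^21 bytes = 10^12 GB (decimal units)

data_zb = 50
data_gb = data_zb * GB_PER_ZB
monthly_bill = data_gb * PRICE_PER_GB_MONTH

print(f"Storing {data_zb} ZB costs about ${monthly_bill:,.0f} per month")
```

At this assumed rate, the bill comes to roughly $200 billion a month, which makes the value-versus-cost question anything but academic.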
There seems to be a trend of providers abandoning the commodity public cloud market. We saw HP exit its Helion Public Cloud, and more recently, Verizon shut down one of its Infrastructure as a Service (IaaS) products. At the same time, Amazon and Microsoft remain heavily committed to public cloud and are making a lot of money. I think there is a fundamental difference between what the successful cloud providers and the commodity VM providers offer: the successful providers sell mostly non-commodity services, services that are not available elsewhere. The value proposition for AWS and Azure is not really in running your VMs; it is in offering services that your applications, or users, can consume. These cloud services are consumed by application developers: information systems people rather than information technology people. They lock in customers by delivering unique and valuable services, with a low cost of entry to entice customers and a high cost of exit to retain them.