VMware’s VMworld conference season is now over. Its Barcelona shindig has just finished, and everybody has flown home, is flying home, or is winding down on the beaches of the Catalonian coast pending the upcoming OpenStack summit. I did not attend the Las Vegas event; however, from what I have gathered from speaking to folks who attended and from reading about it, it was not well received. Complaints included a lack of new releases and what at first glance appeared to be muddled messaging and poor keynotes. Fast-forward to VMworld Barcelona, however, and the contrast could not have been more night and day.
Historically, VMware’s European conference has been lackluster ever since it was moved from its original late-February slot to its current autumn resting place, October. The larger US conference had a larger audience, lasted longer, had all the important new releases, and got first shout at the keynotes. Not this time. VMworld Barcelona was extended by an extra day, and more importantly, it got all the major announcements: vSphere 6.5, VSAN 6.5, vRealize Automation and Operations, a new version of Log Insight, and the biggie, VMware Cloud on AWS. Further, rather than sitting in the hang space mouthing along with a rerun of Pat’s US keynote, Europe got brand-new keynotes.
As we hear more about VMware, AWS, and IBM, a new story emerges. VMware is talking about cross-cloud management, about cloud-native applications, about how you get from the data center to the cloud, and about how to transform into a hybrid cloud. There is an interesting thread running through all of this. We have read about the winners and losers in VMware’s new approach, but what most commentators miss is that there are no real losers, only winners; it depends on your mindset. The Achilles’ heel of IT is not hybrid cloud, but scale: how do we scale up our applications fast enough to handle the new IT? The approach VMware is taking is a major pivot for it. Let us look at some fundamentals.
Last week was Oracle OpenWorld. It was held in San Francisco at the Moscone Center, which surprised me. I had thought it was closed for refurbishment, as this was the reason VMware had given for holding its annual US shindig in Las Vegas this year.
It seems that Oracle must always have a public enemy number one. Those of you with long enough teeth will remember spats it has had over the years with Microsoft and, more recently, Google and HPE. Well, it seems that Oracle has a new public enemy in its laser sights, and that is Amazon Web Services (AWS). The OpenWorld keynotes proclaimed that Oracle is now a real cloud player and the fastest-growing cloud company out there. However, according to The Register, even the usually docile and compliant conference attendees were quite vociferous in denying this.
AWS has introduced a new way to consume its Amazon WorkSpaces cloud desktop service: desktops by the hour. The new service is designed to appeal to businesses with employees needing only occasional computer access and should allow many customers to reduce their costs, although buyers will need to pay close attention to how the service is used. Misconfigure it or underestimate the hours to be used, and you could see an increase in your existing bill.
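To see why underestimating usage can backfire, a break-even calculation helps. The prices below are hypothetical placeholders, not AWS’s actual rates (check the current Amazon WorkSpaces price list for your region and bundle); the shape of the trade-off is what matters:

```python
# Break-even sketch for hourly vs. flat monthly desktop pricing.
# All dollar figures are assumed for illustration, not real AWS rates.
MONTHLY_FLAT = 25.00   # USD/month, always-on monthly bundle (assumed)
HOURLY_BASE = 7.25     # USD/month fixed infrastructure fee (assumed)
HOURLY_RATE = 0.22     # USD per hour the desktop is actually used (assumed)

def hourly_plan_cost(hours_used: float) -> float:
    """Monthly cost of the pay-by-the-hour plan."""
    return HOURLY_BASE + HOURLY_RATE * hours_used

# Hours of use per month at which both plans cost the same;
# above this, the hourly plan is the more expensive choice.
break_even = (MONTHLY_FLAT - HOURLY_BASE) / HOURLY_RATE
print(f"Break-even at about {break_even:.0f} hours/month")
```

With these assumed numbers, a user who logs in for more than roughly 80 hours a month would cost more on the hourly plan than on the flat bundle, which is exactly the miscalculation the paragraph above warns about.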
In a previous article, I suggested that splitting DaaS into separate parts for the broker and the desktops would address some of the challenges of DaaS. Today, I’d like to take a closer look at how this might work.
How much is your data worth? How much does it cost to store? I doubt you have numbers for either, but maybe it is time to start thinking carefully about both. If the cost of storing your data exceeds its value, then you probably shouldn’t be storing it. The trigger for this thought is the oncoming Internet of Things (IoT) data tsunami. A few guesstimates I’ve seen suggest that around 50 zettabytes (ZB) of data will be generated in the next five years. That is 50 billion terabytes. One place to store 50 ZB of data is on AWS’s cheapest storage, Glacier. Even at the cheapest published price for Glacier, your monthly bill would run to hundreds of billions of dollars. I wonder whether knowing the temperature inside my refrigerator every minute of the day is worth that much money.
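The back-of-the-envelope arithmetic is easy to check. The per-GB-month rate below is an assumption in the ballpark of Glacier’s published price around this time; substitute the current price for your region:

```python
# Rough monthly cost of parking 50 ZB in an archive tier like Glacier.
ZB = 10**21   # bytes in a zettabyte (decimal units, as storage is priced)
GB = 10**9    # bytes in a gigabyte

PRICE_PER_GB_MONTH = 0.007  # USD per GB per month -- assumed rate

data_gb = 50 * ZB / GB                      # 5e13 GB
monthly_cost = data_gb * PRICE_PER_GB_MONTH # ~3.5e11 USD
print(f"${monthly_cost:,.0f} per month")    # hundreds of billions of dollars
```

Even if the assumed rate is off by a factor of two in either direction, the answer stays in the hundreds of billions of dollars per month, which is the point: at IoT scale, storage cost has to be weighed against data value.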