Traditional IaaS cloud—whether AWS’s EC2, Azure’s offering, or even a private IaaS cloud running vCloud Director, vRA, or OpenStack, to name a few—is in trouble. Now, that sounds like quite a contentious statement to make, but I feel the writing is on the wall. “What?” you may ask. “How can you say that? There are many companies that have not even started their cloud journey, and surely IaaS is the first baby step in their travails.” Well, the answer to this is “yes and no.”
Early movers headed out on their journey unprepared, bright-eyed and bushy-tailed, walking into their cloud migrations thinking only of up-front cost savings and believing the patter of the snake-oil salesmen. What is worrying is that, according to an IDG and Datalink survey in 2016, up to 40% of those early adopters have had buyer’s remorse and returned to their cozy data centers or colo sites. Why? Traditional IaaS is expensive. Moving to an infrastructure-only cloud is very expensive, and companies are used to being always on. They are comfortable with instant access to their data at any time, from effectively anywhere. You really cannot move to a subscription-based cost model on that basis.
Work needs to be done to understand your environment and what access is really required. Is it truly 24/7? Or is it really eight hours out of twenty-four for 90% of your users, with only 10% needing access around the clock? What about data backup? Technologies like Microsoft’s Data Protection Manager (DPM) could be used to greater effect in a public cloud, but in the early days, most migrations were lift and shift. This made sense for data center migrations and colo moves, but not for a cloud migration.
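The 90/10 split above can be turned into a quick back-of-the-envelope calculation. The percentages are the ones from the text; everything else (the function name, the eight-hour working day) is illustrative:

```python
# Back-of-the-envelope sketch: average daily compute hours needed when
# only 10% of users truly require 24/7 access. Figures are illustrative.

FULL_DAY = 24  # hours

def weighted_hours(share_business_hours: float, business_hours: int = 8) -> float:
    """Average daily compute hours required, given the share of users who
    only work business hours; the remainder are assumed to need 24/7."""
    always_on = 1.0 - share_business_hours
    return share_business_hours * business_hours + always_on * FULL_DAY

hours = weighted_hours(0.9)        # 90% of users need 8 of 24 hours
saving = 1 - hours / FULL_DAY      # fraction of compute hours avoided
print(f"{hours:.1f} hours/day needed -> {saving:.0%} potential saving")
# -> 9.6 hours/day needed -> 60% potential saving
```

In other words, a workload that genuinely follows this access pattern needs well under half the compute hours of an always-on deployment, which is exactly the kind of question a lift-and-shift migration never asks.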
The very same cost model that makes cloud attractive to customers is the Achilles’ heel of IaaS clouds. Many a company’s CFO has had a minor heart attack at quarter end when investigating cloud usage charges. In fact, it is estimated that approximately $6.4B is wasted by consumers annually on AWS alone. No wonder Jeff Bezos is the world’s richest man by a country mile, and no wonder a move to PaaS and SaaS in the form of cloud-native applications is not only more beneficial to customers but also the obvious final end state for applications.
Contrary to popular belief, the rise of cloud-native applications is not going to become the death knell of on-site data centers or private cloud instances. In fact, cloud-native applications, driven in part by containerization and serverless function processing, will allow a separation of data and processing that has never really existed until recently.
Data is a company’s crown jewels, its lifeblood. In some ways, it is more important today than cash flow. Without data to drive decision-making and to illuminate the needs and desires of customers so the sales process can move, cash flow will stall. With today’s need for data to be highly available, placing it in a location where cost is based on utilization could be misguided. Perhaps a better method is to keep your data local in your bastioned on-site data center and have your data in motion cached to the cloud for manipulation. Consider this the opposite of the traditional storage-tiering approach used in data-protection products like Rubrik and Veeam Cloud Connect, where cold data is migrated out to low-cost S3 or Azure Cool Blob storage. Here, data in motion would be stored on high-speed SSD or NVMe storage near the application, and warm data would be stored back at the local site.
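For contrast, the traditional tiering model mentioned above is usually expressed as an object-storage lifecycle rule. A minimal sketch, assuming an S3 bucket with a `backups/` prefix; the prefix and the 30- and 90-day thresholds are illustrative, not recommendations:

```python
# Sketch of the traditional cloud-tiering approach the text contrasts
# against: an S3 lifecycle rule that demotes cold data to cheaper storage
# classes over time. Prefix and day thresholds are illustrative assumptions.

cold_tier_rule = {
    "ID": "demote-cold-backups",
    "Status": "Enabled",
    "Filter": {"Prefix": "backups/"},
    "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},  # warm after 30 days
        {"Days": 90, "StorageClass": "GLACIER"},      # cold after 90 days
    ],
}

# With boto3, this rule would be applied to a bucket via
# s3.put_bucket_lifecycle_configuration(
#     Bucket="<bucket>", LifecycleConfiguration={"Rules": [cold_tier_rule]})
print(cold_tier_rule["ID"], "->", cold_tier_rule["Transitions"][-1]["StorageClass"])
```

The local-first model described in the text simply inverts this flow: hot data stays near the application, and only the working set is ever pushed up.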
What exactly would this give you, other than a reduction in data-storage costs? True, storage is cheap in the cloud, but egress, or movement between tiers, is not. Moving data from warm to hot incurs a charge from the cloud providers, and the performance desired for most analytical programs requires fast access times, meaning in-memory or SSD/NVMe tiers. By keeping your data on-site and moving only the data you need, there is only a single charge for movement. The vast majority of public cloud providers do not charge for ingress, only for egress and movement between tiers. By keeping your data local, you pay the egress charge only once.
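The pay-egress-once argument can be sketched with hypothetical numbers; the $0.09/GB rate and the dataset sizes below are assumptions for illustration, not published pricing:

```python
# Illustrative egress-cost comparison. All prices and sizes are hypothetical.

EGRESS_PER_GB = 0.09  # assumed $/GB for data leaving the cloud

def egress_cost(gb_moved: float) -> float:
    """Cost of moving a given volume of data out of the cloud, once."""
    return gb_moved * EGRESS_PER_GB

full_dataset_gb = 10_000  # entire warm dataset, if it lived in the cloud
working_set_gb = 500      # only the data in motion actually needed

# Local-first: data stays on-site, so only the working set ever egresses.
local_first = egress_cost(working_set_gb)
# Cloud-first: pulling the whole dataset back on-site egresses all of it.
cloud_first = egress_cost(full_dataset_gb)

print(f"local-first ${local_first:,.2f} vs cloud-first ${cloud_first:,.2f}")
# -> local-first $45.00 vs cloud-first $900.00
```

The exact rates vary by provider and region, but the shape of the result is the point: the smaller the slice of data that crosses the billing boundary, the smaller the bill.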
What is the point of this? IaaS is expensive if done as a traditional lift and shift. Hopefully, we have moved on from this in consulting land. Consultants are relearning old skills, right-sizing compute resources for the task at hand. They are also asking questions about business practices: When are core compute hours? What services actually need to be active 24/7? What can be easily reengineered? Can that SQL database be migrated to an Azure SQL Database (PaaS) instance, or perhaps to AWS RDS? What about directory services? Why not utilize Azure AD coupled with a local Azure AD Connect instance, rather than physically extending your domain into your Azure stack?
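The "core compute hours" question lends itself to automation. A minimal sketch of the scheduling decision, where the 08:00–18:00 window and the idea of an always-on flag are assumptions for illustration:

```python
# Sketch of a core-hours scheduler predicate: decide whether a workload
# that does not need 24/7 availability should be powered on right now.
# The 08:00-18:00 window and the always_on flag are illustrative choices.
from datetime import time

CORE_START, CORE_END = time(8, 0), time(18, 0)

def should_run(now: time, always_on: bool = False) -> bool:
    """True if the workload should be powered on at this time of day."""
    if always_on:  # e.g., the minority of services that truly need 24/7
        return True
    return CORE_START <= now < CORE_END

print(should_run(time(9, 30)), should_run(time(23, 0)))
# -> True False
```

A periodic job could evaluate this predicate per instance and call the provider's stop/start APIs (for example, EC2's `stop_instances`/`start_instances`) accordingly, so that only the genuinely 24/7 services accrue around-the-clock charges.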
There are many cost-saving benefits to a little old-school planning, but I still think that traditional IaaS is a dead man walking.