Like Cloud and Virtualization, Serverless Computing Is Still Someone Else’s Computer

Today, serverless is all the rage. In the beginning, we had the server. Then along came virtualization, and things were good. We saved money. We could purchase less tin but run more servers. We could easily see the benefits of moving in that direction: lower power requirements and less hardware to cool in our computer rooms. This was an easy sell for engineers and salespeople alike. Techies loved the elegance, and the business types loved the financial savings. The messaging was easily understandable.

The benefits were easy to quantify. In the early 2000s, when virtualization hit the market, an average box for running a web server cost over $5,000. Worse, a web server barely scratched the surface of the resources of the box it was running on. Running four or five servers (remember, we are talking 2003 here: single-core, dual-CPU machines) on the same host meant verifiable savings: five times $5,000 against $10,000 plus an ESX license showed a direct bottom-line benefit. We could do more with less, a quart in a pint pot, and things only got better as consolidation ratios increased with the release of more powerful CPUs with higher core counts, faster networking, and denser memory modules.
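The consolidation arithmetic can be sketched in a few lines. Note that the ESX license figure below is purely illustrative, not a historical price; only the $5,000 per server and $10,000 host figures come from the text above.

```python
# Back-of-the-envelope server-consolidation savings, circa 2003.
# Assumption: the ESX license cost ($3,750) is a hypothetical placeholder.

servers = 5
cost_per_server = 5_000                      # typical web-server box
physical_total = servers * cost_per_server   # five separate boxes: $25,000

host_cost = 10_000                           # one virtualization host
esx_license = 3_750                          # illustrative license cost
virtualized_total = host_cost + esx_license  # $13,750

savings = physical_total - virtualized_total
print(f"Savings: ${savings:,}")              # prints "Savings: $11,250"
```

The exact license price varied, but with any plausible figure the virtualized option wins well before you count power and cooling.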

Then around 2008 came the next revolution, that of cloud computing. It was virtualization on steroids: automation; multi-tenant environments; public, private, and hybrid cloud arguments; pay-as-you-go charging; and charge-back. Things started to get complicated. Cloud was and still is a much harder sell. Many people were confused; in fact, many still are. The "aaSes" (anything as a service) exploded: IaaS, PaaS, SaaS, DRaaS, DaaS, and finally, XaaS. This led to further muddying of waters that were still swirling from the virtualization revolution. We started to move workloads out of our owned data centers and into data centers we didn't own. It wasn't the case that we had never done this before. We simply no longer called it co-location but rather cloud computing. Remember ASPs (Application Service Providers), anybody? There is nothing new under the sun. Cloud was a marketing construct, not a technical advance. Cloud is really just a virtualization use case, more a Virtualization 2.0.

Next, we had another revolution: that of containers, which would deliver to us the nirvana of platforms and cloud-native applications. That said, I have not seen much evidence of this outside the ivory tower that is Silicon Valley. Companies like Docker and CoreOS (with its rkt technology) reinvented the wheel and relaunched Solaris Zones on Linux. In lay terms, this is a sort of application virtualization for Linux. It makes sense on a technical level, and the business gets and understands it. Now we had virtual machines for legacy applications and containers for new cloud-native applications. Then Docker and VMware went and muddied the water further with LinuxKit on Docker and VMware Integrated Containers on vSphere. It is nothing short of confusing for techies, never mind the business. As each layer of encapsulation is added, the value seems more distant and the need more nebulous. Perversely, the valuation of the companies providing these services seems to rise exponentially. Again, the vast majority of this is marketing led.

Today, there is a new term in town: serverless computing, or to give it its correct name, Functions as a Service (FaaS). This is a new cloud execution model in which the level of encapsulation has moved so far up the stack that the cloud provider manages the starting and stopping of a function's container on a Platform as a Service (PaaS) as necessary to service requests. Those requests are billed by an abstract measure of the resources (CPU and memory) required to satisfy them, rather than the more usual per-virtual-machine, per-hour, or storage-consumed model. With serverless, you effectively write your code directly to an API, and the cloud provider runs the function. No server, operating system, or container is needed, hence the term "serverless." Now, although this is understandable from a technical perspective, the term is causing chaos in the business environment.
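To make the "write your code directly to an API" point concrete, here is a minimal sketch of a function in the AWS Lambda style, where the provider invokes a handler per request and everything below it (server, OS, container) is the provider's problem. The event payload shape here is a simplified assumption for illustration.

```python
# Minimal Functions-as-a-Service handler sketch, AWS Lambda style.
# The cloud provider calls handler(event, context) for each request;
# you never provision or manage the machine that runs it.

def handler(event, context):
    """Return a greeting built from the request payload in 'event'."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local smoke test; in production the provider supplies event and context.
print(handler({"name": "serverless"}, None))
# prints {'statusCode': 200, 'body': 'Hello, serverless!'}
```

You deploy only the function; scaling it from zero to thousands of concurrent invocations, and billing you per invocation, is the provider's job.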

[CommitStrip.com comic: "Serverless: it's still somebody's computer"]

When are marketing departments going to learn that less is more? The term "serverless" is both confusing and duplicitous. The environment is far from serverless; in fact, it is extremely complex from the perspective of the cloud provider. It is just another type of cloud computing, a further elevation of encapsulation away from the underlying hardware, encapsulating the operating system and the application, too. It is as nebulous a concept as possible. Services like Lambda from AWS, OpenWhisk from IBM, and Azure Functions from Microsoft are fully featured environments that expose their APIs for their customers to develop against, using languages such as Python, Java, and JavaScript running on Node.js.

This is too much. Things are moving too fast now. Technologies are not even getting the chance to be installed before they are termed "legacy" by company marketing teams touting the next big thing. The fact is that in the real world, people are still doing Virtualization 1.0, perhaps moving to cloud: going public with AWS or Azure, cloudifying their on-site virtualization clusters, or deploying a hybrid cloud model. I will leave you with one final thought: serverless is still running on somebody's computer.
