
Why the Serverless Revolution Has Stalled: Promises, Challenges, and Reality

Despite early hype that serverless computing would usher in a new era of scalable, cost-effective applications, the revolution appears to have stalled. This article examines its historical roots, its unfulfilled promises, and four key obstacles: limited language support, vendor lock-in, performance issues, and the inability to run entire applications.

Cloud Native Technology Community

The Server Is Dead, Long Live the Server!

This is the rallying cry of the Serverless revolution. A quick scan of recent industry news suggests that the traditional server model is obsolete and that we will soon be running entirely on Serverless architectures.

However, anyone working in the field knows that the reality is far from this hype. As noted in articles about the current state of Serverless computing, many of the touted advantages have not materialized, and recent research indicates that the revolution may have already stalled.

While some promises of the Serverless model have been realized, they fall short of expectations.

This article explores why, despite Serverless’s usefulness in specific, well‑defined scenarios, its lack of agility and flexibility remains a barrier to broader adoption.

Promises of Serverless Computing

Before discussing the problems, let's review what Serverless is supposed to deliver. In simple terms, Serverless computing is an architecture in which applications, or parts of them, run on demand in an execution environment that is typically hosted remotely, though it can also be hosted on-premises.

In recent years, building resilient Serverless systems has become a focus for system administrators and SaaS companies because the architecture allegedly offers several key advantages over traditional server‑client models:

Developers can write generic code and upload it to a Serverless framework without maintaining an operating system or building OS‑specific applications.

Resources are billed in fine-grained increments (per second, or even finer), so customers pay only for actual execution time, unlike traditional cloud VMs that charge for idle capacity.

Scalability is a major draw; resources can be dynamically allocated to handle sudden spikes in demand.

In short, Serverless should provide a flexible, cheap, and scalable solution—yet the idea has not been adopted as quickly as expected.
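The "write generic code and upload it" promise can be illustrated with a minimal FaaS-style function. The sketch below follows the AWS Lambda Python handler convention (`handler(event, context)`); the event shape and response format are illustrative assumptions, not a specific platform contract.

```python
import json

def handler(event, context):
    """A minimal FaaS-style entry point: stateless, input in, output out.

    The developer writes only this logic; the platform, not the developer,
    provisions the operating system, runtime, and scaling.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the function holds no server state, the platform can spin up as many copies as incoming traffic requires and bill only for the milliseconds each invocation runs.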

Is This a New Idea?

The concept dates back to 2006, when Zimki PaaS explored a similar pay-per-use model, followed by Google App Engine in 2008. In many ways, Serverless predates what is now called "cloud-native" technology.

It is important to note that Serverless is not synonymous with Functions‑as‑a‑Service (FaaS); FaaS represents the compute‑centric part of Serverless architectures and cannot represent the entire system.

The hype is driven by the rapid growth of internet penetration, which increases demand for compute resources, especially in regions lacking traditional infrastructure. Serverless platforms fill that gap.

Problems with Serverless

While Serverless offers value in certain contexts, the claim that it will quickly replace traditional architectures is unrealistic. Four major issues hinder its wider adoption:

Limited Programming‑Language Support

Most Serverless platforms support only a handful of languages, restricting flexibility. Although major providers offer wrappers or custom runtimes for other languages, the resulting performance overhead often cancels out the cost savings that would justify running those less common languages.

Vendor Lock‑In

There is little operational similarity between platforms, and no standardization for how functions are written, deployed, and managed. Migrating functions is relatively easy, but moving the surrounding services—storage, identity, queues—is costly and time‑consuming, contradicting the promised portability.
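One common mitigation for lock-in, sketched below under simplified assumptions, is to keep business logic provider-neutral and wrap it in thin per-provider adapters, so only the adapters need rewriting during a migration. The two event shapes shown are hypothetical, not real provider contracts, and this addresses only the function code, not the surrounding storage, identity, and queue services.

```python
def process_order(order_id: str, amount: float) -> dict:
    """Provider-neutral business logic: no cloud SDK imports here."""
    return {"order_id": order_id, "charged": round(amount, 2)}

# Thin adapter for a hypothetical Provider A, whose events nest data
# under a "detail" key.
def provider_a_handler(event, context):
    detail = event["detail"]
    return process_order(detail["id"], detail["amount"])

# Thin adapter for a hypothetical Provider B, which passes fields
# at the top level of the request.
def provider_b_handler(request):
    return process_order(request["order_id"], request["total"])
```

The adapters stay small precisely because the logic they wrap never touches a provider-specific API; the expensive part of a real migration remains the managed services the functions depend on.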

Performance

Performance metrics are often opaque. Empirical evidence shows cold‑start latency and hidden storage tiers can degrade performance, and providers rarely disclose these details. Optimizing functions for a specific platform can undermine the “agile” claim, while keeping functions warm incurs additional cost.
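Cold-start cost comes largely from initialization work done before the first invocation; a common mitigation is caching expensive setup in module scope so warm invocations in the same container reuse it. The sketch below simulates the pattern, with a `time.sleep` standing in for any heavy initialization such as loading a model or opening connections.

```python
import time

_RESOURCES = None  # survives across warm invocations in the same container

def _expensive_init():
    """Stand-in for heavy startup work (loading models, opening connections)."""
    time.sleep(0.05)  # simulate slow initialization
    return {"ready": True}

def handler(event, context):
    global _RESOURCES
    start = time.perf_counter()
    cold = _RESOURCES is None
    if cold:
        _RESOURCES = _expensive_init()  # paid only on a cold start
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {"cold_start": cold, "elapsed_ms": elapsed_ms}
```

The first call pays the full initialization cost; subsequent warm calls return in a fraction of the time. Keeping containers warm to avoid the first-call penalty is exactly the extra cost the article describes.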

Inability to Run Entire Applications

Serverless is generally unsuitable for running whole monolithic applications cost‑effectively. It works best for new projects where the architecture can be designed around functions, rather than retrofitting existing systems.

Consequently, many organizations use Serverless as a supplement to internal servers for compute‑intensive tasks, rather than as a full replacement.

Will the Revolution Last?

The author does not oppose Serverless solutions themselves but urges developers—especially newcomers—to recognize that Serverless is not a direct server replacement. Instead, they should consult resources on building Serverless applications and choose the deployment model that best fits their needs.

About the Author

Bernard Brode is a product researcher at Microscopic Machines, fascinated by the intersection of AI, cybersecurity, and nanotechnology.

References

Original article: https://www.infoq.com/articles/serverless-stalled/

Bernard Brode profile: https://www.infoq.com/profile/Bernard-Brode/

Serverless Computing State: https://www.infoq.com/presentations/state-serverless-computing/

Serverless Revolution article: https://datafloq.com/read/7-reasons-serverless-computing-revolution-cloud/2871

Recent research: https://www.infoq.com/news/2020/06/oreilly-cloud-report/

Serverless Revolution promises: https://radon-h2020.eu/2019/10/11/serverless-revolution-radon/

Resilient Serverless systems: https://www.infoq.com/presentations/serverless-resiliency-infrastructure-as-code/

Zimki PaaS slides: https://www.slideshare.net/swardley/zimki-2006

Observations on Serverless: https://www.appdynamics.com/blog/product/the-serverless-revolution-is-here/

Internet penetration statistics: https://www.internetadvisor.com/key-internet-statistics

Community standards for containers: https://www.itprotoday.com/containers/open-container-initiative-sets-container-distribution-standard

Performance evidence: https://www.infoq.com/articles/serverless-performance-cost/

Microservices to Serverless: https://www.infoq.com/presentations/microservices-to-serverless/

Building Serverless applications: https://www.infoq.com/articles/understanding-serverless-servicefull-applications/

Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
