The truth about Serverless

Gareth Thomas
Jul 6, 2020

If you are in technology or utilise technology for your business, you can’t help but be barraged by claims about the benefits and pitfalls of serverless. Over the last twenty years, we’ve seen (almost on an annual basis) some new technology touted as the next “big thing”. Techies generally love the latest and shiniest new thing, from client-server to cloud and now hybrid cloud, from SaaS to PaaS, and from SQL to NoSQL and back again.

This article will look at whether the claims being made about serverless technology really stack up, and how a business might justify its use.

What is Serverless?

First a primer. Serverless is, in fact, a misnomer; running applications “serverless” still means they run on physical servers somewhere in a data centre. Until the mid-noughties, if you wanted to launch a web application, you would use a physical server located in your office or data centre and install your application directly on that server. The server you owned might instead be installed in someone else’s data centre (an arrangement called co-location), or it might be a server you rented from a hosting company, or even part of a server you shared with others. The point is, your application executed on what is now called “bare metal”.

Then, in 2006, Amazon set up a subsidiary called Amazon Web Services (AWS) and launched Elastic Compute Cloud (EC2). This utilised a technology called a hypervisor to provide a software layer (termed an abstraction layer) on top of the physical hardware. Now, when you “rented” a server, you got a virtual server configured in software to the specification you required, with resources that might span multiple physical servers. The Cloud was born.

The beauty of this model was a massively reduced cost (in both time and money) to get an idea up and running. There was no physical hardware to buy and set up; you configured only what you needed, with no long-term contracts. The result was an explosion in tech startups.

The next big change to come was containerisation. Popularised initially by Docker, this was an attempt to tame the growing complexity of deploying and configuring servers, where each application brought its own set of requirements that had to be installed and maintained before it could run. It also meant that a single virtual server could hold tens or even hundreds of mini-servers running as containers, talking to each other if needed. The rise of containers coincided with the concept of microservices, which accelerated their popularity.

But there was still the issue of configuring and deploying the servers and containers your application would run on. And for engineers, a growing pain point was the “stack” of software required to run a typical application: modern web apps are highly complex, many-layered beasts, and the stack has become extensive. This complexity also led to the creation of DevOps as a discipline, offloading the work to other specialists on the engineering team.

Serverless to the rescue!

The idea was to boil an application down to one of its simplest components: a function. Most applications consist of thousands of these, each taking values in and returning results. At a simple level, think of a calculator that takes in three values, two numbers and an operator, and returns an amount. Serverless allows us to build an application by simply defining the functions that encapsulate the required business logic. There’s more to it than that, but the biggest takeaway at this point should be that you pay for precisely what you need: a small piece of code, and only while it runs. There is no server configuration, no containers to define, no stacks to install and configure.
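
To make that concrete, here is a minimal sketch of the calculator example written as an AWS Lambda handler in Python. The function name and the event shape are assumptions made for illustration, not anything prescribed by a particular platform.

# calculator.py: a minimal sketch of the calculator as a Lambda function.
# The event shape ({"a": 2, "b": 3, "op": "+"}) is an assumption for this example.

import operator

OPS = {
    "+": operator.add,
    "-": operator.sub,
    "*": operator.mul,
    "/": operator.truediv,
}

def handler(event, context):
    op = OPS.get(event.get("op"))
    if op is None:
        return {"error": "unsupported operator: %s" % event.get("op")}
    return {"result": op(event["a"], event["b"])}

That single function, plus whatever the platform needs to invoke it, is the whole deployable unit; there is no web server or container image to maintain alongside it.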

This is a big deal — a really big deal. Containerisation moved things forward a step but serverless is a paradigm shift, rather like the Cloud was back in 2006, and that’s no exaggeration.

What are some of the perceived pros?

  • Faster build to release

Engineers only need to focus on the application design and build. There is no need to worry about server provisioning and setup or backend configuration. This also makes it possible to quickly update, patch, fix or add new features. It is not necessary to make changes to the whole application; instead, engineers can update the application one function at a time.

Reality: Yes, but a qualified yes. Certainly, removing a lot of deployment complexities and dependencies enables development to move faster. But it requires engineers to now act as architects, and not all of them are (or want to be). The skill set has fundamentally changed, and practices like CI/CD (continuous integration and continuous deployment) have become essential to reap the benefits.

  • Reduced application costs

You are only charged for what you use. Code only runs when functions are needed by the application and it automatically scales as needed. Provisioning is dynamic, precise, and in real time. AWS, for instance, bills in 100-millisecond increments.

Reality: Yes, but again a qualified yes. I’ve seen cost reductions of 15–20%, and there are cases of reductions as high as 90%. It is absolutely the case that you are only charged for what you use, but depending on your business, how your application needs to work, and the usage placed upon it, you might need to make design decisions that significantly increase those costs. For instance, AWS Lambda can have a certain amount of latency, that is, be slow to respond. This can be mitigated by keeping things “warm”, or ready to execute more quickly, but there is a cost to that.
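
As an illustration of the “keep warm” idea, here is a minimal sketch of a Python handler that short-circuits on a scheduled warm-up ping; the assumption is that a scheduled rule (for example a CloudWatch/EventBridge schedule) invokes the function every few minutes so the platform keeps an instance resident. The payload shape and field name are assumptions for this example, not a fixed API.

# handler.py: sketch of a "keep warm" short-circuit (illustrative only).
# Assumes a scheduled rule invokes the function every few minutes with a
# payload like {"warmup": true}; that payload shape is an assumption here.

def handler(event, context):
    if event.get("warmup"):
        # Warm-up ping from the scheduled rule: return immediately, paying
        # for only a few milliseconds, while the runtime stays resident.
        return {"warmed": True}
    # Normal request handling.
    return {"result": do_real_work(event)}

def do_real_work(event):
    # Placeholder for the real business logic.
    return event.get("value", 0) * 2

Each of those scheduled pings is itself a billed invocation, which is the cost trade-off referred to above.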

  • Increased application flexibility and scalability

Applications scale automatically with increasing users or usage. If functions need to run across multiple instances, the vendor infrastructure will manage them as required. As a result, an application will be able to process an unusually high number of requests just as well as a single request from a single user. A traditional application with a fixed amount of server resources can be overwhelmed by a sudden increase in usage.

Reality: Yes, in general, this is true. Using a platform as a service (PaaS) solution such as App Engine on Google Cloud or Elastic Beanstalk on AWS can handle sudden demand increases extremely well.

  • Reduced cost of management

Serverless architectures allow you to spend less money and time managing servers because most of the work is done for you by your cloud computing provider.

Reality: Yes, but there are caveats. First, the burden of systems architecture has now shifted to the application engineers, not all of whom are suited to the task (as mentioned above). Second, with cloud services constantly evolving and improving, changes that completely break an application are possible and could take a lot of time and money to resolve.

What are some of the perceived cons?

  • Vendor lock-in

By building your application so tightly into a big cloud service provider’s platform, you are clenched tightly to their bosom, with no easy opportunity to move, and are exposed to unannounced price hikes and the like.

Reality: No. I could write a long answer to this one, and my first riposte would be “opportunity cost”, but this article nails it so well that I’ll simply point you to it: https://lumigo.io/blog/you-are-wrong-about-serverless-vendor-lock-in/

  • Systems architecture skills needed by application engineers

While serverless does boil an application down at its basic level to a function, there are still other supporting components and services that need a deep level of understanding and configuration.

Reality: Yes. Architecture must first be considered and planned; but not only that, launching an application now requires more than just knowing how to code a backend application and put it on a server.

  • Cost of experienced engineers

Finding engineers with experience in these new technologies is time consuming and expensive, especially as the demand for their skills increases. The existing engineering team will require upskilling and that will also take time and cost money.

Reality: Yes. But the result should be a reduced need to manage infrastructure, meaning fewer DevOps engineers (pricey in today’s market!). It might even be possible to eliminate DevOps as a separate role and distribute that workload across the existing engineers. This is likely only to affect much larger teams, but it is something to bear in mind if you are growing fast.

  • Application latency

Latency is how long it takes for an application to respond to a request. Generally, serverless functions are extremely fast, with average response times (depending on the language used) of less than 20ms. However, an issue can occur with what is known as a “cold start”. Because of how serverless works under the hood, stripping applications back to their smallest components, a function that has not been called recently has to be loaded and initialised before it can run, so the first invocation can be much slower than subsequent ones. Depending on your application, this could have significant ramifications.

Reality: Yes. However, AWS has made, and continues to make, significant improvements in this area, as they understand it is a weakness. There are also ways to reduce the impact in your architecture, though it should always be considered in any design phase. Some good tips can be found in this great blog post: https://www.serverless.com/blog/when-why-not-use-serverless
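
One widely used mitigation, independent of the tips in that post, is to do expensive initialisation once at module load rather than on every invocation, so that warm invocations reuse it. A minimal sketch, assuming a DynamoDB table accessed via boto3 (the table name is a placeholder):

# Sketch of a common cold-start mitigation: initialise expensive resources
# once at module load so warm invocations reuse them. boto3 is assumed to be
# available, as it normally is in the Lambda Python runtime.

import json
import boto3

# Created once per container, during the cold start, then reused while the
# container stays warm.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # the table name is a placeholder

def handler(event, context):
    # Per-invocation work only: no client construction, no config parsing.
    item = table.get_item(Key={"id": event["id"]}).get("Item")
    return {"statusCode": 200, "body": json.dumps(item, default=str)}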

  • Not suitable for long-running tasks

A function cannot run for longer than a few minutes and is therefore not suitable for many applications.

Reality: No. On AWS, functions can run for up to fifteen minutes, and on Azure, an hour with a premium plan. Only GCP imposes tighter limits. I would argue, though, that if this is a limitation for your application, then serverless is not for you. There are other options between serverless and running your application on a virtual machine that can resolve this issue. One option is to design your application so that longer-running tasks are handled separately, as sketched below.
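
As a sketch of handling longer-running tasks separately, the request-facing function below simply hands the heavy work to a queue and returns immediately; a separate worker (another function within the time limit, a container task, or a Step Functions workflow) then processes it at its own pace. The environment variable name and payload shape are assumptions for this example.

# enqueue.py: sketch of handing long-running work off to a queue.
# Assumes the queue URL arrives via an environment variable named QUEUE_URL;
# that name, and the job payload shape, are assumptions for this example.

import json
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]

def handler(event, context):
    # Accept the request, queue the heavy work, and respond immediately.
    job = {"job_id": context.aws_request_id, "payload": event}
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(job))
    return {"statusCode": 202, "body": json.dumps({"job_id": job["job_id"]})}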

My business uses a mainframe/servers/regular cloud/PaaS etc. Should I change?

Before you decide to throw away an application you already have and replace it with something new, serverless or not, this famous article from Joel Spolsky is always worth a read: https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/

Granted, that article is twenty years old and a vast amount has changed in terms of how software is built and tested, but many of its central points remain. Wholesale replacement should always be very, very carefully considered. There is a reason why many banking systems still run COBOL applications written in the ’70s (https://thenewstack.io/cobol-everywhere-will-maintain/).

If you are starting from scratch, you should definitely evaluate a serverless approach. If you have an existing application, however, going serverless is going to be more complicated. The most important first consideration is how much test automation you have in the form of unit and integration testing. If the amount of test coverage you have is high, then you can have much more confidence in replacing it, perhaps in a piecemeal fashion. That way, the whole thing won’t break if there is a significant problem.

For much older applications, or those with no test coverage, the decision becomes even more complex. One option might be to build a proof of concept and use it to handle a small subset — if that can be identified and isolated — of customers or data to test it on.

If you are on a mainframe, that decision is going to be the hardest of all. The enormous raw processing power and bandwidth of mainframes are going to be hard to replicate cost-effectively by going serverless. However, there are examples of companies moving to the Cloud using containers and non-serverless stacks, which allow them to keep languages unsupported by serverless and therefore avoid the risk of rewriting critical business logic: https://aws.amazon.com/blogs/apn/migrating-a-mainframe-to-aws-in-5-steps/

If you determine that serverless might well be the right approach, the next thing to do is perform a cost estimate. There are a lot of calculators out there for the various providers, so I won’t provide a link here (just make sure you pick an up-to-date one). However, to give you an idea of how this might look for AWS, this is a useful article: https://www.simform.com/aws-lambda-pricing/. One point to note: in the example they give, API requests are the biggest cost, not function execution time. There are also studies, such as this one, showing that cost-effectiveness compared to virtual servers declines significantly at much higher request levels: https://www.bbva.com/en/economics-of-serverless/. The point to bear in mind, though, is don’t just look at pure hosting costs; they are only one measure of the overall TCO.
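
To make the idea of a cost estimate concrete, here is a rough back-of-the-envelope sketch in Python. The rates are assumptions that only approximate AWS’s published Lambda and API Gateway prices at the time of writing (a per-request fee plus a GB-second compute fee, billed in 100ms increments), so treat the output as illustrative and use the provider’s own calculator for any real decision.

# lambda_cost_sketch.py: illustrative only. All rates below are assumptions
# that roughly match published AWS prices when this article was written;
# always check the provider's current pricing pages.

LAMBDA_GB_SECOND = 0.0000166667      # assumed compute rate, USD per GB-second
LAMBDA_PER_MILLION_REQ = 0.20        # assumed Lambda request rate, USD
API_GW_PER_MILLION_REQ = 3.50        # assumed REST API Gateway rate, USD

def monthly_estimate(requests, avg_ms, memory_mb, increment_ms=100):
    # Round each invocation up to the billing increment (100 ms at the time).
    billed_ms = -(-avg_ms // increment_ms) * increment_ms
    gb_seconds = requests * (billed_ms / 1000) * (memory_mb / 1024)
    compute = gb_seconds * LAMBDA_GB_SECOND
    lambda_req = requests / 1_000_000 * LAMBDA_PER_MILLION_REQ
    api_req = requests / 1_000_000 * API_GW_PER_MILLION_REQ
    return compute, lambda_req, api_req

# Example: 10 million requests a month, 120 ms average, 512 MB of memory.
compute, lambda_req, api_req = monthly_estimate(10_000_000, 120, 512)
print(f"compute ${compute:.2f}, Lambda requests ${lambda_req:.2f}, "
      f"API Gateway ${api_req:.2f}")

Even with these assumed rates, the API request charge dominates the function execution cost, which mirrors the point made in the article linked above.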

To conclude, wherever you are in your application journey and whatever you decide to do, one thing is clear: it’s easier, quicker, and cheaper than ever to develop and test an idea using serverless technologies, and sometimes that’s all that’s needed to validate a direction of travel.
