The main advantage of cloud serverless is the ability to only pay for what you are using — you turn it on and you’re only paying when you or your customers are using it.
As we know, there is still an actual server there somewhere; it’s just that you’re not managing it and you’re not responsible for the runtime. All the development team has to do is write the code for a compute function, and the platform will run it for you.
The traditional server model requires you to purchase server capacity to cope with your peak demand. Even if you only hit that peak demand once a year, you need to be able to cope with it when it hits, so you have to purchase hardware for that capacity year-round.
But this doesn’t mean that serverless is the best or cheapest solution in every situation. If you use it in the wrong instance it can actually be more expensive and result in a slower user experience.
So what is the serverless sweet spot? And what are the latest architecture trends around serverless? I spoke to one of our principal consultants, Matt Fellows, to find out.
Penny: To understand the serverless sweet spot, maybe we should start with what the sweet spot is not. Do you have any real life not-the-sweet-spot examples?
Matt: There was a project I was at where one of the execs made the call that a certain initiative had to be serverless (i.e. Lambda). It was an existing service that was to be migrated from an on-premise physical server.
You could see the problems that were going to happen in advance. Over one million customers used their mobile app on any given day, and this particular API was called in every session — so it was a really high-frequency call. The traffic was very predictable, it was part of a synchronous call chain, and it connected to a database. Those characteristics — predictable traffic, high-frequency and time-sensitive, connectivity to a database and synchronous operation — meant that it was not a suitable workload for Lambda.
Firstly, with Lambda you pay per execution. Per millisecond of execution. Which on a small scale is very affordable, but once you start hitting certain workloads it can become expensive.
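To make that concrete, here’s a back-of-envelope sketch. The rates and figures below are illustrative assumptions, not current AWS prices — the shape is the point: pay-per-millisecond billing grows linearly with traffic, while a fixed server’s cost stays flat.

```python
# Rough sketch of Lambda-style per-execution billing.
# Rates are illustrative ASSUMPTIONS, not actual AWS pricing.
PRICE_PER_GB_SECOND = 0.0000166667  # assumed compute rate
PRICE_PER_REQUEST = 0.0000002       # assumed per-invocation charge

def monthly_lambda_cost(invocations_per_day, avg_duration_ms, memory_gb):
    """Estimate one month of spend for a single pay-per-execution function."""
    monthly_invocations = invocations_per_day * 30
    compute_gb_seconds = monthly_invocations * (avg_duration_ms / 1000) * memory_gb
    return (compute_gb_seconds * PRICE_PER_GB_SECOND
            + monthly_invocations * PRICE_PER_REQUEST)

# A low-traffic function: effectively pennies per month.
small = monthly_lambda_cost(1_000, 120, 0.5)

# An API called in every session of a million-plus-user app:
# the same formula, but the cost now scales with every request.
large = monthly_lambda_cost(1_500_000, 120, 0.5)
```

The comparison a team would actually make is `large` against the flat monthly cost of provisioned capacity — at high, predictable volume the fixed server can win.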
The next reason is the Lambda runtime’s scaling properties: as the function scales out or is woken up, it needs to go out and establish a new connection to the database, and only once the connection is established can the function do its thing. This is an expensive thing to be doing at scale. It’s worth noting that this is less of an issue when talking to cloud-native databases, like Amazon DynamoDB.
A traditional server approach would create that connection and leave it open for re-use, becoming more efficient with pooled connections over time — something that can’t be done in a single-threaded function.
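To illustrate the difference (the class and function names here are invented, and the “database” is faked to simulate handshake cost): a long-running server pays the connection cost once at startup, while every newly scaled-out Lambda execution environment pays it again before serving its first request.

```python
import time

class FakeDatabase:
    """Stand-in for a real database driver (hypothetical, for illustration)."""
    CONNECT_COST_S = 0.01  # pretend each new connection takes 10 ms

    def connect(self):
        time.sleep(self.CONNECT_COST_S)  # simulate the TCP/auth handshake
        return self

    def query(self, sql):
        return f"rows for {sql!r}"

# Server-style: the connection is created once at process start and
# reused for every request, so the handshake cost is paid only once.
class Server:
    def __init__(self, db):
        self.conn = db.connect()

    def handle(self, sql):
        return self.conn.query(sql)

# Lambda-style cold start: each newly created execution environment
# pays the handshake cost again before it can serve its first request.
def cold_start_handler(db, sql):
    conn = db.connect()
    return conn.query(sql)
```

At one request this difference is invisible; at thousands of concurrent scale-out events it shows up directly as latency and load on the database.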
So that means it’s a bit slower. In fact, if the function is within a VPC, other networking resources need to be created at scale time, which can take up to a few seconds. So you’ve got scalability, but at the cost of latency. And because the Lambda runtime sits on the provider’s side of the shared responsibility model, it’s harder to optimise these aspects of the solution.
Four seconds is a long time to wait for today’s mobile customer. So in this situation serverless was very slow and, as it turned out, very expensive.
Penny: What could they have done instead?
Matt: What I advise clients is to look at the new things they are doing and assess whether each could be a good candidate for going serverless — but to build it with the new way of thinking. Don’t necessarily try to take existing apps and just drop them into a serverless world, because it won’t always work. When we first started moving applications to the cloud, we learned the problems with the “lift-and-shift” approach: we didn’t gain the benefits of scalability, resilience, performance and so on. Only when we started to build “cloud native” applications did we truly get the benefits. It’s the same for serverless — there’s a paradigm shift that you need to grasp at an architecture level.
Penny: What challenges does serverless present?
Matt: With serverless you have more things. We’re seeing microservices being split up into many functions, with a REST or other interface at the top, orchestrating a bunch of asynchronous functions. So each thing that you are deploying is much simpler but the complexity of the overall system is increased. So now it becomes about observing that system and seeing how it behaves. Tracing stuff through the system at that scale is quite interesting.
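One common way to make that tracing tractable (the field and function names here are assumptions, and managed tracing services such as AWS X-Ray automate much of this) is to attach a correlation id at the edge and propagate it through every event, so one request’s path can be reassembled from the logs of many small functions:

```python
import uuid

def api_entry(payload):
    """Edge function: attach a correlation id so that every downstream
    asynchronous function can be tied back to this one request."""
    return dict(payload, correlation_id=str(uuid.uuid4()))

def downstream(event, log):
    """Any later function logs against the same id; searching the logs
    for that id reconstructs the request's path through the system."""
    log.append((event["correlation_id"], "processed"))
    return event

log = []
event = api_entry({"user": "u-1"})
downstream(event, log)
```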
Also, you can’t run serverless things locally. So if you want to get environment parity in your local development environment, the only way to properly test that is to run it on the cloud provider’s platform, such as AWS in the case of Lambda. You can’t replicate that environment any other way, and so vendor lock-in is a potential challenge.
This has introduced new challenges in how we test, both at the function level and the system level. I discussed a number of approaches to grappling with these challenges in the talk I gave at AWS (Testing large-scale, serverless and asynchronous systems), and have been working with the community on improving tools like Pact to support functions.
Penny: What about flow-on effects or benefits?
Matt: From a product point of view, developers previously spent a lot of their time on work that happened behind the scenes.
We used to have to write code, manage the servers and upgrade the servers; now we just write code and make a change. More of this undifferentiated heavy lifting is being removed, reducing time-to-market.
Penny: So what are the architecture trends surrounding all of this?
Matt: In the old world we had the classic pace-layered architecture: a website talking to some APIs, talking to some old-school middleware, talking down to mainframes — and all requests had to traverse all the layers.
All the useful data was locked away somewhere and there was a team who managed that. With these newer, asynchronous and decoupled architectures we’re seeing that change.
We still have our front end that makes an incoming API call, however data and updates flow in through multiple paths and are shared back out to different parts of the system. Our application gets its own view of what it needs, as do others; everyone gets their own version of the data. They can combine it with other data sources to make faster or more intelligent queries, do machine learning on it — they can manipulate it in a way that benefits them whilst not mastering it. This means teams can move faster with their own use cases, without being bottlenecked through traditional application development cycles.
In new organisations, data is being democratised.
Things are also changing around the automation of IT operations. In the old days there were all these cron jobs and scripts running on the servers, but what we’re moving to now are intelligent, OODA-inspired functions that raise traditionally second-class citizens, like scripts and jobs, to first-class functions in the tech stack. The benefit of this is that we can now apply the usual rigorous engineering practices, like TDD (test-driven development) and continuous integration pipelines, to something that was previously more difficult to do.
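As a sketch of what that looks like in practice (the cleanup rule and names are invented for illustration): once a cron script becomes an ordinary function, its logic is a pure function you can drive from a test before wiring it to a scheduled trigger.

```python
from datetime import datetime, timedelta

def select_sessions_to_expire(sessions, now, max_age_hours=24):
    """Pure function replacing an old cleanup cron script: given all
    sessions, return the ids of those that should be expired."""
    cutoff = now - timedelta(hours=max_age_hours)
    return [s["id"] for s in sessions if s["last_seen"] < cutoff]

# Because the logic is a plain function, it can sit behind a scheduled
# trigger in production but be exercised directly in a TDD cycle:
def test_expires_only_stale_sessions():
    now = datetime(2019, 1, 2, 12, 0)
    sessions = [
        {"id": "a", "last_seen": now - timedelta(hours=30)},  # stale
        {"id": "b", "last_seen": now - timedelta(hours=1)},   # fresh
    ]
    assert select_sessions_to_expire(sessions, now) == ["a"]

test_expires_only_stale_sessions()
```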
There’s also change in how microservices are designed and implemented. There is still an API at the front (e.g. REST), but under the hood there are many deployed functions doing the grunt work that was previously performed by one microservice, and this orchestration now happens offline and asynchronously.
For example, in traditional approaches to creating an order, an API call would happen synchronously to process it, whilst the customer is waiting in line. Now all of that happens offline with event-driven architectures, and we update the order model as events are fed back in from the broader system.
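A minimal sketch of that pattern (the event names and fields are assumptions): the synchronous API call only records the order and returns immediately, and downstream events fold back into our view of the order as they arrive.

```python
def create_order(orders, order_id, items):
    """Synchronous API call: just record the order and return at once.
    Payment, inventory and shipping all happen later, via events."""
    orders[order_id] = {"items": items, "status": "PENDING"}
    return order_id

def apply_event(orders, event):
    """Event handler: fold events from the broader system back into
    our own view of the order (event names are illustrative)."""
    order = orders[event["order_id"]]
    if event["type"] == "PaymentAccepted":
        order["status"] = "PAID"
    elif event["type"] == "Shipped":
        order["status"] = "SHIPPED"
    return order

orders = {}
create_order(orders, "o-1", ["widget"])
apply_event(orders, {"order_id": "o-1", "type": "PaymentAccepted"})
```

The customer sees an instant response; the order’s status converges as the rest of the system catches up.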
This is essentially the serverless sweet spot: event-driven and loosely-coupled architectures that scale with demand.
Matt Fellows is a principal consultant at DiUS and an AWS Developer Warrior. He can often be found speaking on subjects like this at Developer Warrior Meetups or tech conferences. He also speaks regularly on Pact and will be running workshops and training on Pact in San Francisco and Las Vegas later this year. Get in touch if you would like to have him speak at your event.