There are a lot of opinions on the subject of serverless, whether it has a place in 2023, and whether traditional architectures are better, but I believe there are still good use cases for serverless, as there are for any tool.
Note: A lot of what I will write about is in the context of AWS, as this is the system I am most familiar with, although, to a great degree, the same concepts can be applied to any other cloud provider.
After Amazon said it was moving away from serverless for its Prime Video service, many people concluded that serverless is somehow bad and wouldn't do a good job. The fact that Amazon decided to change the architecture of that specific service doesn't mean in the slightest that serverless wouldn't do a good job in another case, at another company, at another scale, and so on.
For a startup, a prototype, a proof of concept, or a system that can benefit from distribution, the flexibility that a serverless, distributed architecture provides is, in my opinion, really valuable. You are not forced to think about scaling and managing servers, and can focus on the software solution you are building. When demand increases, the cloud platform can usually scale your entire serverless infrastructure automatically to meet it. In addition, you are not forced to pay for capacity you do not need or are not currently using: for example, a standby VM that you keep paying for even though your traffic is not frequent enough to justify it.
Serverless is not a one-size-fits-all solution; it's really not that great in many cases, such as (but not limited to):
- API development
- Long-running jobs: batch jobs, heavy processing, etc.
- Tightly coupled systems
Cold start
There are several reasons why you generally wouldn't want to go with serverless in these cases. Let's start with the problem of the cold start. If you want to build an API, you will need, for example, an API gateway and Lambda functions (in the case of AWS) to take care of the logic. However, when a function has not been invoked for some time, it goes to "sleep", awaiting new requests. When a request finally arrives, it takes some time for the cloud provider to prepare the environment and dependencies, which is definitely going to cause slow API responses in some cases. There are ways to mitigate this, but they almost always involve some cost. In most cases, traditional monoliths or microservices will be the way to go for APIs.
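To give one concrete example of such a mitigation on AWS: provisioned concurrency keeps a number of execution environments warm, at an hourly cost, which is exactly the trade-off mentioned above. A minimal CDK sketch (the stack and function names are hypothetical):

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class ApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // A hypothetical API handler function.
    const handler = new lambda.Function(this, 'ApiHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
    });

    // Keep 5 execution environments warm to avoid cold starts.
    // Note: provisioned concurrency is billed hourly, even when idle.
    new lambda.Alias(this, 'LiveAlias', {
      aliasName: 'live',
      version: handler.currentVersion,
      provisionedConcurrentExecutions: 5,
    });
  }
}
```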
Invocation duration costs
Serverless functions are typically billed not only per invocation but also for the duration of each invocation, which can increase the cost significantly if requests take a long time to process. Additionally, it's quite possible for performance to suffer in these cases.
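To make the duration component concrete, here is a back-of-the-envelope calculation. The prices are illustrative figures roughly matching AWS Lambda's us-east-1 x86 pricing at the time of writing; they vary by region and change over time, so check the current pricing page:

```typescript
// Rough Lambda cost model: per-request charge + GB-seconds of compute.
// Illustrative prices only; verify against current AWS pricing.
const PRICE_PER_MILLION_REQUESTS = 0.2; // USD
const PRICE_PER_GB_SECOND = 0.0000166667; // USD

function monthlyLambdaCost(
  requestsPerMonth: number,
  avgDurationSeconds: number,
  memoryMb: number,
): number {
  const requestCost =
    (requestsPerMonth / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
  const gbSeconds =
    requestsPerMonth * avgDurationSeconds * (memoryMb / 1024);
  return requestCost + gbSeconds * PRICE_PER_GB_SECOND;
}

// 1M requests/month at 1024 MB: a 200 ms handler vs. a 5 s batch-style job.
console.log(monthlyLambdaCost(1_000_000, 0.2, 1024).toFixed(2)); // ~3.53
console.log(monthlyLambdaCost(1_000_000, 5, 1024).toFixed(2));   // ~83.53
```

The same traffic costs roughly 25 times more when each request takes 5 seconds instead of 200 milliseconds, which is why long-running work is usually a poor fit.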
Cloud provider lock-in
Let's also not forget that choosing a serverless architecture ties you to a specific cloud provider, which means you are less flexible in the future if you need to change the architecture or the provider. Not to mention that if the provider decides to change the prices or conditions of the services you use, this will directly impact your business.
Good use cases
However, if these concerns don't mean much to you, and you can benefit from a distributed or event-driven structure, want to save on capacity costs when starting your business or project, or can make use of asynchronous processing in your architecture (even if only in a part of your system), I believe serverless can be of use to you.
Integrations
One of the best selling points of serverless is how easily the individual services can be integrated with each other. For example, uploading a file to S3 can trigger an automatic process in a serverless function that performs some operations. In the same way, you can work with NoSQL databases (DynamoDB in particular), different streams of data, scheduled events to run jobs, asynchronous processing services such as SNS (Simple Notification Service) and SQS (Simple Queue Service), and many, many others.
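As a small sketch of that S3-to-Lambda integration, this is roughly what the receiving function looks like (the processing logic is hypothetical):

```typescript
import type { S3Event } from 'aws-lambda';

// Invoked automatically whenever a new object lands in the bucket.
export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    // Object keys arrive URL-encoded in S3 event notifications.
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    console.log(
      `New upload: s3://${bucket}/${key}, ${record.s3.object.size} bytes`,
    );
    // ...run your processing here (resize an image, parse a CSV, etc.)
  }
};
```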
CDK and Infrastructure as code
All of this infrastructure, which at first can be really intimidating, is much easier to deal with using a tool like the AWS Cloud Development Kit (CDK) or its derivatives like SST. With them, you can use code to define and configure all of the infrastructure you need, alongside the logic for the Lambda functions themselves.
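For example, wiring up the S3-triggered function from the previous section takes only a few lines of CDK; a minimal sketch, with hypothetical construct names:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as s3n from 'aws-cdk-lib/aws-s3-notifications';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class UploadsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const bucket = new s3.Bucket(this, 'UploadsBucket');

    const processor = new lambda.Function(this, 'UploadProcessor', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
    });

    // Invoke the function on every new object; CDK wires the
    // invoke permissions between the services for you.
    bucket.addEventNotification(
      s3.EventType.OBJECT_CREATED,
      new s3n.LambdaDestination(processor),
    );
  }
}
```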
Serverless Workflows
Although I wanted to make this my main point, I decided to only scratch the surface of what I intend to use automated serverless workflows for and go over some of the most suitable use cases.
With serverless workflows, or AWS Step Functions in particular, you can automate different flows in your business: the configuration you set up for the flow runs the entire logic.
With Step Functions, you can:
- Set up a sequence of Lambda functions that run in order, one after the other, to perform some sort of processing on your behalf
- Add SNS events and run the flow only when an event is received
- Add branching logic based on parameters: for example, take a parameter from the request and route the request to one Lambda when it is above a threshold, and to another Lambda when it is not (see the sketch after this list)
- Introduce a manual step into the workflow by requiring human confirmation of an event before the state machine continues
- Run parallel branches that perform different operations on the payload at the same time
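Here is what the threshold-based branching from the list could look like when defined with the CDK; a minimal sketch with hypothetical function names and a hypothetical `$.score` input field:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';
import { Construct } from 'constructs';

export class WorkflowStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Helper for the two hypothetical branch handlers.
    const makeFn = (name: string) =>
      new lambda.Function(this, name, {
        runtime: lambda.Runtime.NODEJS_18_X,
        handler: 'index.handler',
        code: lambda.Code.fromAsset(`lambda/${name}`),
      });

    // Route based on a numeric parameter in the execution input.
    const definition = new sfn.Choice(this, 'CheckThreshold')
      .when(
        sfn.Condition.numberGreaterThan('$.score', 50),
        new tasks.LambdaInvoke(this, 'HandleAbove', {
          lambdaFunction: makeFn('AboveFn'),
        }),
      )
      .otherwise(
        new tasks.LambdaInvoke(this, 'HandleBelow', {
          lambdaFunction: makeFn('BelowFn'),
        }),
      );

    new sfn.StateMachine(this, 'ThresholdWorkflow', { definition });
  }
}
```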
Step Functions have many more capabilities than what I just listed. Traditionally, you can build your own workflow inside your software by implementing the state machine design pattern, achieving a good part of what this service provides. However, with the number of inter-service integrations provided, the flexible scalability, and the relative ease of use, there are definitely reasons to pick serverless workflows for your project.
I am just starting a new open-source project where I can showcase some of this functionality, and within a few weeks I will be able to share some code. I will create an automated pipeline for candidate profile prequalification (it's just a test project). When a new candidate profile is entered into the system, I want it, based on a set of parameters, to either discard applications that are not a good match or return a candidate score.
For example:
- Branching logic: if the candidate doesn't speak English, the profile can be discarded
- If the candidate lacks a predefined amount of experience, we can require manual confirmation to continue the process
- When the candidate is moved to the interview process, automatically send an email to them and to the interviewer
- Score the candidate based on checks we can implement with parallel processing
And many others.
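Since the project isn't written yet, the shape below is only a hypothetical, simplified sketch of what that state machine could look like in CDK: the English check becomes a Choice state, the manual confirmation a task that pauses on a task token, and the scoring a Parallel state (all function names and input fields are made up):

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';
import { Construct } from 'constructs';

export class PrequalificationStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const fn = (name: string) =>
      new lambda.Function(this, name, {
        runtime: lambda.Runtime.NODEJS_18_X,
        handler: 'index.handler',
        code: lambda.Code.fromAsset(`lambda/${name}`),
      });

    // Score the candidate with independent checks running in parallel.
    const scoring = new sfn.Parallel(this, 'ScoreCandidate')
      .branch(new tasks.LambdaInvoke(this, 'CheckExperience', {
        lambdaFunction: fn('ExperienceFn'),
      }))
      .branch(new tasks.LambdaInvoke(this, 'CheckSkills', {
        lambdaFunction: fn('SkillsFn'),
      }));

    // Pause until a human approves: the reviewer-notification Lambda
    // receives a task token, and the workflow resumes only when
    // SendTaskSuccess is called with that token.
    const manualReview = new tasks.LambdaInvoke(this, 'ManualReview', {
      lambdaFunction: fn('NotifyReviewerFn'),
      integrationPattern: sfn.IntegrationPattern.WAIT_FOR_TASK_TOKEN,
      payload: sfn.TaskInput.fromObject({
        token: sfn.JsonPath.taskToken,
        candidate: sfn.JsonPath.entirePayload,
      }),
    });

    // Discard non-English-speaking candidates; review and score the rest.
    const definition = new sfn.Choice(this, 'SpeaksEnglish?')
      .when(
        sfn.Condition.booleanEquals('$.speaksEnglish', false),
        new sfn.Fail(this, 'Discard', { error: 'NotAMatch' }),
      )
      .otherwise(manualReview.next(scoring));

    new sfn.StateMachine(this, 'CandidatePrequalification', { definition });
  }
}
```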
This is going to be my use case for this technology. There are many other areas where serverless workflow management can really excel, such as ML, data processing, and various processing pipelines, but I will not get into them, as they are a separate matter.