Over the last few years, Serverless as an architectural pattern has made some noise. So much so that at one point I decided to go down the rabbit hole and give it a good look. Nearly four years later, I have gotten to the point where I cannot build applications any other way; the advantages of a serverless application far outweigh the cons. During that time I have also spent a lot of time interacting with the Serverless community, trying to help others discover this, frankly, revolutionary way to build software. So much so that Serverless, Inc., maintainers of the most popular serverless application development framework, the Serverless Framework, asked me to join the team and do everything I had been doing part-time as a full-time job.
Now here I am, writing this blog post I should have written years ago, hoping to introduce other developers to the sheer level of productivity and performance building Serverless applications gives you. So instead of spending the first half of my post talking theory and history like so many others, let’s get straight into actually building a simple “Getting Started” application that anyone reading this can follow along with. Why? Well, conceptually Serverless seems very abstract. It’s only when you actually build something for the first time that you realize the true power of building applications this way.
First, let's get through the most annoying part. We will be building this solution on AWS, so if you don’t have an AWS account then now is the time to sign up for one. But don’t worry. What we will be building today should cost you the princely sum of $0, as AWS provides generous free tiers for the services we will be using, and we will come nowhere near those limits.
To sign up with AWS, go to https://aws.amazon.com/ and click the big orange “Create an AWS account” button. Then just follow the instructions all the way to getting the account activated.
Awesome. That really was the most annoying part. Now onto the fun stuff. Let’s get ourselves set up with the Serverless Framework. To install it, just run:
npm install -g serverless
Now we need to set up our first service, and to do this we will use the brand spanking new onboarding experience. On the CLI, just enter serverless and hit return. Then answer the questions like this:
- No project detected. Do you want to create a new one? (Y/n): Y
- We pick Node.js from the list.
- What do you want to call this project?: I am going to name mine serverless-quick-start
- You can monitor, troubleshoot, and test your new service with a free Serverless account.: Free monitoring and testing … yes please.
- Would you like to enable this? (Y/n): Hit Y
- Do you want to register? (Y/n): If for some reason you have already signed up for a Serverless Framework account, select n; otherwise choose Y. Then just provide some credentials for your new account.
Once you have run through the onboarding wizard to set up your new service and Serverless Framework account, enter serverless dashboard into the CLI and you should see something like this in your browser:
Serverless applications are usually made up of multiple serverless services, like the one we bootstrapped with the serverless command above, each performing some specific task. Think microservices, but with less of the infrastructure headache … actually … none of the infrastructure headache.
Let's click on profiles in the top left. You should have only one profile listed: default. Click it and you should see:
In order for us to create our Serverless application, we need some way for the code and configuration on our local machine to get to our AWS account. If you expand the how to add a role link, you should see a link for the Create a role wizard. Clicking that will open a new tab in your browser to your AWS account. At this point you just need to click Next through the wizard until you see a notification similar to this:
Click on that blue role name, and on the next page you will see a line item labeled Role ARN. Copy the entire string, which looks something like arn:aws:iam::1234567890:role/serverless-enterprise_serverless-quick-start. Then go back to the console page in the browser we were on before and paste your ARN into the textbox:
Click save and exit. Now, one last step to connect what we are going to build to this new account we just created. Go back to the folder in your terminal where we bootstrapped our service and open the serverless.yml file in your favorite text editor.
This file is where we keep all the configuration the Serverless Framework needs to know what to create on our AWS account. It is also where we tell it which organization and application to connect to on our Serverless Framework Enterprise account. To do this, add the following (substituting your own details, obviously) to the top of the file:
app: myapp
org: garethmccumskey
So what did we just do with the console? In order to connect to our AWS account from our local machine, we need credentials with the right permissions to create things like Lambda functions and HTTP endpoints. Now that the service we are building is connected via the app and org settings to our Serverless Framework Enterprise account, when we issue a deploy command a temporary set of credentials is created and passed back to our local machine, and those credentials are then used by the Serverless Framework to deploy to our AWS account.
But first we need to create something to actually deploy. With the serverless.yml file open, let's make some more edits. Find the service property and change it to some unique name for your new service; I am going to use serverless-quick-start. Scrolling further down, you can see the provider is set up to be AWS (yes, the Serverless Framework can help you build serverless applications on other providers like Azure, but we aren’t going to look at that this time), and that we are going to use the Node 10 runtime for our code.
Scrolling past all the commented configuration, you should find a portion that looks like this:
functions:
hello:
handler: handler.hello
In order to build serverless applications, we use a FaaS (Functions as a Service) offering from AWS called Lambda. Lambda allows us to upload a single piece of code that gets triggered by an event we set up. Instead of me trying to explain all of this, let's build it and you can see what I mean first hand.
In our little demo, we are going to create an HTTP endpoint that returns “Hello World!”. Yup, I am entirely unoriginal, and we are doing a Hello World example. To that end, edit the configuration we saw before so that it looks like this (watch the indentation, YAML gets a little angry if you don’t indent correctly):
functions:
hello:
handler: handler.hello
events:
- http:
method: get
path: hello
Now, let's open the file handler.js and edit the content to look like this:
'use strict';

// handler.js: the Lambda function itself. API Gateway hands us an event
// object and expects an HTTP-style response object back.
module.exports.hello = async event => {
  return {
    statusCode: 200,
    body: 'Hello World!'
  };
};
And with that, drop back to the terminal and enter serverless deploy.
NOTE: Since you may have registered a new account when you initially did serverless login, you may need to do serverless login again to authenticate correctly if you get any error messages.
The deploy command should result in a bunch of stuff in your terminal like this:
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless Enterprise: Safeguards Processing...
Serverless Enterprise: Safeguards Results:
Summary --------------------------------------------------
passed - no-secret-env-vars
passed - allowed-regions
warned - require-cfn-role
passed - framework-version
passed - allowed-stages
passed - no-wild-iam-role-statements
warned - allowed-runtimes
Details --------------------------------------------------
1) Warned - no cfnRole set
details: https://git.io/fhpFZ
Require the cfnRole option, which specifies a particular role for CloudFormation to assume while deploying.
2) Warned - Runtime of function hello not in list of permitted runtimes: ["nodejs8.10","nodejs6.10","python3.7","python3.6","ruby2.5","java-1.8.0-openjdk","go1.x","dotnetcore2.1","dotnetcore2.0"]
details: https://git.io/fjfkx
Limit the runtimes that can be used.
Serverless Enterprise: Safeguards Summary: 5 passed, 2 warnings, 0 errors
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
.....
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service serverless-quick-start.zip file to S3 (66.46 KB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
................................................
Serverless: Stack update finished...
Service Information
service: serverless-quick-start
stage: dev
region: us-east-1
stack: serverless-quick-start-dev
resources: 16
api keys:
None
endpoints:
GET - https://abcdefg.execute-api.us-east-1.amazonaws.com/dev/hello
functions:
hello: serverless-quick-start-dev-hello
layers:
None
Serverless Enterprise: Publishing service to the Enterprise Dashboard...
Serverless Enterprise: Successfully published your service to the Enterprise Dashboard: https://dashboard.serverless.com/tenants/garethmccumskey/applications/myapp/services/serverless-quick-start/stage/dev/region/us-east-1
Near the end of all that, under a section labelled endpoints, a URL is provided (for example: https://abcdefg.execute-api.us-east-1.amazonaws.com/dev/hello). Go ahead and open that in your browser:
Soooo … what just happened here? We created a Lambda function that receives a GET request over HTTP at an endpoint. The only code we wrote to do this was a few lines long, but we got a lot more back … Let's look at this in a little more detail to make it apparent how cool this really is.
The endpoint we now have only accepts GET requests. We could make it a POST endpoint and allow the function to accept data in the request body. But that’s not all:
- The endpoint uses AWS’s API Gateway service, which can handle up to 10,000 requests per second by default, and this limit can be increased via a support request to AWS.
- When this endpoint receives a request, it creates a request object that is sent to the small piece of code we wrote running on AWS Lambda.
- AWS Lambda by default can run 1,000 copies of that code simultaneously, and that concurrency can also be increased with a request to AWS.
- We are not paying for any of the code we store in AWS, nor for the endpoints. The free tier on API Gateway allows for one million API calls per month before any billing happens.
- We are also not paying for any execution time of our function’s code. The AWS Lambda free tier allows for 1 million requests per month and 400,000 GB-seconds of compute, billed in 100 ms increments. At 512 MB of memory, that works out to 400,000 ÷ 0.5 = 800,000 seconds of execution before we get billed; if we tweaked the memorySize in our serverless.yml down to the 128 MB minimum, we could even get 3.2 million seconds of free execution time.
- Because of the way AWS designed API Gateway and AWS Lambda, we also get a fully redundant solution spread across multiple data centers (an AWS region is made up of several availability zones, separated by a few miles and connected via dedicated fiber links). It would take a region-wide catastrophe to take our endpoint down, and even then it might still be up.
In the end, with this simple example, we configured and deployed a highly scalable, highly redundant solution that would be the envy of many a DevOps practitioner, in about 15 minutes. And it costs us nothing unless we use it at volume. We did not have to provision our own servers (hence serverless); we did not have to set up operating systems, runtimes, fallbacks, backups, disaster recovery, or load balancing. We don’t need to monitor CPU capacity and memory.
To put this into perspective another way, if you were building an application using Express or any other conventional web application framework and had to deploy this to virtual machines on AWS, to get equivalent redundancy and scalability you would need:
- 3 t3.micro EC2 instances (the cheapest option), each in a separate availability zone in the region, at $0.0104 per hour each.
- A load balancer to spread traffic across all three instances, priced at a minimum of $0.0225 per hour.
The total cost of running the above comes to roughly $22 per month for the three EC2 instances and $16.20 for the load balancer, before any traffic has even made its way into that infrastructure. And don’t forget, getting it all set up probably took a few hours, and it needs to be maintained going forward; is there a critical operating system update that must be applied because a new zero-day vulnerability was discovered? You are the one who needs to make sure the patch is applied.
And you more than likely need EC2 instances bigger than t3.micros. I would estimate that the average web application serious about serving traffic needs, at a minimum, 3 t3.large instances at $0.0832 per hour each. That means instead of roughly $22 a month, you would be spending about $180. Again, this is before any traffic even arrives. And the whole point of a load balancer is that it lets you scale out and spin up even more EC2 instances, adding to that bill.
In contrast, a serverless application costs $0 when idle (not $180 + $16.20), and scaling up and down happens automatically. Don’t have any traffic at 1am when all your customers are asleep? Then why are you paying for anything?
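For anyone who wants to check the back-of-the-envelope numbers, here is the arithmetic spelled out, assuming roughly 720 hours in a month and the on-demand hourly prices quoted above (purely illustrative):

```javascript
// Rough monthly cost of the "traditional" setup, using ~720 hours/month
// and the on-demand hourly prices quoted in the text. Illustrative only.
const HOURS_PER_MONTH = 720;

const threeMicros = 3 * 0.0104 * HOURS_PER_MONTH; // three t3.micro instances
const threeLarges = 3 * 0.0832 * HOURS_PER_MONTH; // three t3.large instances
const loadBalancer = 0.0225 * HOURS_PER_MONTH;    // load balancer baseline

console.log(threeMicros.toFixed(2));  // "22.46"
console.log(threeLarges.toFixed(2));  // "179.71"
console.log(loadBalancer.toFixed(2)); // "16.20"

// The equivalent serverless stack, sitting idle: $0 until traffic arrives.
```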
And since we’re looking at the differences, on your command line run the following:
serverless logs -f hello
You get logging as part of the entire solution. With the virtual machine equivalent we discussed, which already costs somewhere between roughly $40 and $200 per month, we don’t yet have any easy way to view our logs. That still needs to be configured. Which adds to the bill.
Go back to your Serverless Framework account in the browser, click applications, and expand your application; you will see the service you just deployed listed. Open your service and feast your eyes on detailed statistics about the service you just deployed: how many times it was executed, any errors, deployments, cold starts.
Granted, this was a very limited example, but if you take a look at the wealth of documentation available on the Serverless website, as well as the large number of examples posted by the community, you will immediately see that serverless applications are good for far more than tiny little GET requests.
All of this might be quite a bit to take in. And if your interest happens to be piqued, where to now, right? Well, the Serverless Framework has some pretty good documentation to help get you started.
- Main documentation about the framework
- Examples to take a look at
- The blog that has a good collection of how to’s and use cases from real companies developing applications
If you have any questions at all about serverless, feel free to hit me up here or via Twitter. There is also the Serverless Framework community on the forums and the Slack workspace.