Davide de Paolis

A couple of tips about writing and debugging Serverless - CloudFormation configs

As soon as you start building serverless applications, you need - for simplicity and for sanity - a tool that lets you describe your stack as code. It could be the Serverless Framework, AWS SAM, Terraform, you name it, but one way or another you will be writing the configuration of your project so that it can be easily deployed (and version-controlled).

In our projects we currently use the Serverless Framework, and I find working with it quite straightforward - until something goes wrong and you end up struggling with the yml (or yaml) file to find out why.

One of my main pain points with YAML is indentation. Most of the time when something stops working, it is because you added a space where it should not be, or nested your structure in the wrong way.

For example, this works:

functions:
  myLambdaFunction:
    handler: src/index.handler
    name: my-awesome-lambda

but this does not:

functions:
  myLambdaFunction:
    handler: src/index.handler
     name: my-awesome-lambda

In a 4-line snippet you might spot the mistake immediately, but in a long file it can be tricky.

Luckily, as soon as you try to deploy it you will get a warning:
(screenshot: the "bad mapping" error reported for the yml)
but sometimes, depending on the structure you are describing, the indentation could still be valid.

In fact, indentation is the only "marker" for the start and end of a "node", so even when the indentation is formally valid you can end up accidentally modifying the structure of the nested object.

A couple of days ago it took me a while to figure out why our iamRoleStatements were no longer being applied to our functions.
That was particularly tricky because when testing the lambda offline everything worked, since the credentials being used are the ones configured on your machine - and therefore come with very broad policies. As soon as the lambda is deployed, though, only the policies and roles applied via the yml are taken into consideration.
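For context, a quick way to see the difference with the standard sls commands (the function name is the one from the example below):

# runs the handler on your machine, with whatever AWS credentials are configured locally
# (usually far broader than what the deployed role allows)
sls invoke local --function myLambdaFunction

# invokes the deployed function, which only gets the roles/policies declared in the yml
sls invoke --function myLambdaFunction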

This is an extract of our configuration:

service: myAwesomeService
provider:
  name: aws
  runtime: nodejs10.x
  stage: ${opt:stage, 'dev'}
  region: eu-west-1
  securityGroup:
      # many other lines with some level of indentation here
  subnets:
      # many other lines with some level of indentation here
  vpc:
      # many other lines with some level of indentation here
  environment:
      # some lines describing env variables
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
      Resource:
        Fn::Join:
          - ""
          - - "arn:aws:s3:::"
            - Ref: S3BucketRepository
            - "/*"
    # many other lines with some level of indentation here

functions:
  myLambdaFunction:
    handler: src/index.handler
    name: my-awesome-lambda

Everything kept working fine until we added a couple of custom properties to the configuration.

custom:
  bucketRef: S3BucketRepository

Unfortunately, instead of adding those at the same level as provider and functions, custom was added just after environment (but with the indentation of provider). Result: the provider node was considered closed, and the iamRoleStatements ended up nested under custom and were lost.
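To make the mistake visible, this is a simplified sketch of what the broken file looked like (not the actual config):

provider:
  name: aws
  environment:
    # env variables here
custom:                   # added here, at the same column as provider...
  bucketRef: S3BucketRepository
  iamRoleStatements:      # ...so this whole block is now nested under custom instead of provider
    - Effect: Allow
      Action:
        - s3:PutObject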

No warning on deployment, no error while testing locally. Once deployed, all the lambdas were failing due to missing permissions to access the S3 buckets.


The first thing I do when something goes wrong for no apparent reason - or when sls complains about formatting - is check the overall indentation of the configuration.
Unfortunately I still haven't found a really nice YAML plugin for our IDEs (IntelliJ IDEA or Visual Studio Code), and the online validators I have tried are far from perfect: they all give slightly different results and cannot be used straight away to format our config, but they do show clearly whether your yml is broken or ignores some formatting convention - for example with strings or arrays.
I normally use these tools to double-check and do some cleanup of the file.
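A plain command-line linter can catch most of these issues too; yamllint, for example (my own suggestion here, not something we had in the project), flags inconsistent indentation and duplicate keys before you even try to deploy:

# install once, then point it at the config
pip install yamllint
yamllint serverless.yml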
IntelliJ IDEA does, however, have a nice feature that lets you see the overall structure of your file. If the indentation is broken or something is wrongly nested, you will spot it immediately.

To check the Structure

  • open your yml file
  • press Cmd+7 or select View - Tool Windows - Structure


Something else that really helps in figuring out what could be wrong is running sls print: it resolves all the variables and references you are using in the yml and shows you the final configuration that will be deployed to CloudFormation.
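It is a single command (the stage flag here is just an example):

# prints the fully resolved serverless.yml, with every ${...} variable expanded
sls print --stage dev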

Once the indentation issue was solved and the old permissions were working again, the new policy for the S3 bucket was still not being applied.

In cases like that, what I normally do is fiddle directly with the AWS Web Console UI.

  • Go to the web console
  • Select your Lambda function
  • Scroll to the Execution Role panel
  • Check the role that you defined in the serverless yml.

(screenshot: the Execution Role panel in the Lambda console UI)

  • Click on that role to be redirected to the Role Summary in the Identity and Access Management (IAM) console.
  • See all the policies attached to your Lambda - basically the different permissions granted to it
  • Click on your policy and view it as JSON

(screenshot: the policies attached to the role, in the IAM console UI)

From there you can play around and edit the policy directly, rerun your lambda to immediately check the effect of your changes, and only afterwards go back to editing your serverless.yml and redeploying the entire stack.
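If you prefer the terminal to the console UI, roughly the same inspection can be done with the AWS CLI (the role and policy names below are placeholders for whatever your stack generated):

# find the execution role attached to the function
aws lambda get-function-configuration --function-name my-awesome-lambda --query Role

# list the inline policies on that role and dump one as JSON
aws iam list-role-policies --role-name YOUR_LAMBDA_ROLE_NAME
aws iam get-role-policy --role-name YOUR_LAMBDA_ROLE_NAME --policy-name YOUR_POLICY_NAME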

Remember - always try to tighten the feedback loop!

"Statement": [
       {
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3::::OUR_BUCKET_NAME:/*"
            ],
            "Effect": "Allow"
        }]

As soon as I did that I realized there was something wrong with the colons...


Why??? Taking a closer look at the Fn::Join function, I realized it is nothing more than a concatenation method like JS's Array.join: you pass an array of strings and a delimiter used to concatenate them.

Normally you pass a colon as the delimiter to separate type-of-resource:region:accountId, but this is not necessary for S3 buckets.
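If it helps, here is the same idea in plain JavaScript (just an illustration of the analogy, the values are made up):

// Fn::Join with an empty delimiter, as in the S3 ARN above
["arn:aws:s3:::", "your-bucket-name", "/*"].join("");
// -> "arn:aws:s3:::your-bucket-name/*"

// Fn::Join with a colon, for resources that need region and account id
["arn:aws:ssm", "eu-west-1", "123456789012", "parameter/*"].join(":");
// -> "arn:aws:ssm:eu-west-1:123456789012:parameter/*"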

The fact that you can write that function in several different ways does not exactly help in getting a clear idea of how to set up your yml.

Writing this

Resource: { "Fn::Join" : ["", ["arn:aws:s3:::", "your-bucket-name", "/*"]]}

this

Resource:
  Fn::Join:
    - ''
    - - 'arn:aws:s3:::'
      - 'your-bucket-name'
      - '/*'

or this:

Resource: !Join [ '', [ "arn:aws:s3:::", "your-bucket-name", "/*" ] ]

is absolutely equivalent.

And if you take a look at the IAM role attached to your Lambda in the console UI, what you wrote becomes:

"Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"

or

"Resource": ["arn:aws:s3:::YOUR_BUCKET_NAME/*"]

if there is more than one.

By the way, you don't need to use Fn::Join to declare your resource. You could just as well specify the ARN directly (this is what you will find in the IAM policy in the end):

            "Resource": [
                "arn:aws:ssm:YOUR_REGION:YOUR_ACCOUNT_ID:parameter/*"
            ]

For S3 this is not the case, but for many other resources - like, for example, the Parameter Store - you need to specify the region and your account ID. Especially if you have multiple accounts or your projects are split across different regions, it can be tedious and error-prone to specify all those values manually. Using variables and references might be harder to read but, in the long run, it simplifies the configuration.

      Resource:
        - Fn::Join:
          - ":"
          - - "arn:aws:ssm"
            - ${self:provider.region}
            - Ref: AWS::AccountId
            - "parameter/*"

In general, it is always a good idea to avoid hardcoding values and configuration, and to use env variables or references instead.
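A minimal sketch of the idea (the names and defaults here are just examples; ${opt:...}, ${env:...} and ${self:...} are the standard Serverless Framework variable sources):

provider:
  stage: ${opt:stage, 'dev'}               # CLI option, with a default
  region: ${env:AWS_REGION, 'eu-west-1'}   # environment variable, with a default

custom:
  bucket: ${self:service}-${self:provider.stage}-events-repository   # reference to other parts of the config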

So. To recap:

  • Check the indentation and formatting of the yml with some IDE plugin or external tool
  • Run sls print to view the final configuration with variables and references resolved
  • Double-check the configuration in the web console and play around with it directly there to speed up the feedback loop

If you are interested in what we were trying to achieve with those custom variables and permissions, here is the full code and explanation.

The goal: dynamically create a bucket for each deployment stage and grant the relevant read/write permissions to the Lambdas acting on it.

First, declare a couple of custom variables to dynamically determine the name of the bucket based on the deployment stage, and define a fixed name for the bucket resource to be used throughout the configuration:

custom:
  bucket: SERVICE_NAME-${self:provider.stage}-events-repository
  bucketRef: S3BucketRepository

Then describe the bucket in the Resources section - its logical name is the one we stored in bucketRef - and pass the bucket-name variable to it:

resources:
  Resources:

    S3BucketRepository:
      Type: AWS::S3::Bucket
      Properties:
        AccessControl: Private
        BucketName: ${self:custom.bucket}

Finally, pass the bucket name as an environment variable to your lambda:

functions:
  myLambdaFunction:
    handler: src/index.handler
    name: my-awesome-lambda
    environment:
      BUCKET: ${self:custom.bucket}

But don't forget to add the iamRoleStatements so that the lambda can read from and write to the S3 bucket:

  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
      Resource:
        Fn::Join:
          - ""
          - - "arn:aws:s3:::"
            - Ref: S3BucketRepository
            - "/*"
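And to close the loop, this is roughly what the handler side could look like - a minimal sketch assuming the aws-sdk v2 bundled with the nodejs10.x runtime; the key naming is made up:

// src/index.js
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

// the bucket name arrives through the BUCKET env variable set in serverless.yml
exports.handler = async (event) => {
  await s3
    .putObject({
      Bucket: process.env.BUCKET,
      Key: `events/${Date.now()}.json`,
      Body: JSON.stringify(event),
    })
    .promise();

  return { statusCode: 200 };
};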

Photo by Jorge Romero on Unsplash
