Managing secrets with SSM

There is an obvious rule we all know as software engineers: don't let your sensitive data get into the repository. We have all heard those horror stories where an ill-intentioned developer goes through a repository, grabs the credentials for a production database, and erases everything important in it. Therefore, the correct management of secrets and sensitive variables is a crucial concern if we want to improve the security of our projects.

On frontend and backend repositories, this means keeping .env files with sensitive values out of version control. Besides, we can share a development or playground version of those variables via a secured S3 bucket, for example. But it gets more complicated when we are developing infrastructure as code, because a simple .env file is not enough: we need to get the data from somewhere else.

There are several ways to achieve this. HashiCorp's Vault or GCP Secret Manager, for instance, are popular options if you are working within the Google Cloud Platform or if you are willing to provision a machine to host the Vault server. However, if you are on AWS, there is another very simple resource for this purpose: AWS Systems Manager Parameter Store.

The Parameter Store behaves almost like an API and, as I said, it is very easy to use. It is not the only AWS resource meant for managing sensitive data (AWS Secrets Manager is another), but it's the one I prefer due to its simplicity. There you can store parameters as plain strings (String) or encrypted strings (SecureString) and use them in your code. But the strongest feature of the Parameter Store is that it allows you to tag variables and store them hierarchically 😎.

On the pricing side, API interactions cost $0.05 per 10,000 interactions (and only at the higher-throughput tier; standard throughput is free). There is a limit of 10,000 standard parameters (4 KB per parameter) per region and 100,000 advanced parameters (8 KB per parameter), but the latter have a base cost of $0.05 per parameter per month. With boto3 or the Serverless Framework, parameters are only retrieved when we deploy or run our scripts, so unless you have a very large number of parameters, the final bill for this service should be very low.

How do we use this?

There are a lot of ways to upload a variable to the SSM Parameter Store, but my preferred one is this command from the AWS CLI (set Type to String for a plain value or SecureString for an encrypted one):

aws ssm put-parameter --cli-input-json '{"Type": "SecureString", "Name": "/this/is/hierarchy/NAME_OF_VARIABLE", "Value": "****", "Overwrite": true}'

To learn more about the other options available for this command, please visit the AWS CLI documentation.
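To check that the value landed correctly, you can read it back with the get-parameter command; the --with-decryption flag is only needed for SecureString values:

aws ssm get-parameter --name "/this/is/hierarchy/NAME_OF_VARIABLE" --with-decryption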

Use case #1: the Serverless Framework

Retrieving and applying SSM parameters differs with every tool we use. First of all, in the case of the Serverless Framework, a plugin like serverless-stage-manager is very convenient if we want to split the environment variables by stage. Afterward, when we declare the variables in the custom section or directly in a function's environment section, we just reference the SSM parameter like this:

custom:
  stages:
    - development
    - qa
  testVariable:
    development: ${ssm:/development/TEST_VARIABLE}
    qa: ${ssm:/qa/TEST_VARIABLE}

or

functions:
  testFunction:
    ...
    environment:
      TEST_VARIABLE: ${ssm:/testFunction/TEST_VARIABLE}
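You can also combine both approaches: define the per-stage values in the custom block and let each function pick the right one through Serverless' nested variable syntax. A minimal sketch, assuming provider.stage is set to one of the stages defined above:

functions:
  testFunction:
    ...
    environment:
      TEST_VARIABLE: ${self:custom.testVariable.${self:provider.stage}}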

Use case #2: Boto3 (Python)

If you write your infrastructure code in Python, you can use Boto3 to your advantage by just starting a session, creating an SSM client, and retrieving the variables (one by one) with a single call:

import boto3

session = boto3.Session()
ssm = session.client('ssm')

# WithDecryption=True is needed to read SecureString values; it has no effect on plain String parameters
TEST_VARIABLE = ssm.get_parameters(Names=["/stage/TEST_VARIABLE"], WithDecryption=True)['Parameters'][0]['Value']
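Fetching parameters one by one gets tedious as the list grows. Since the Parameter Store supports hierarchies, you can also pull a whole branch at once with get_parameters_by_path. A minimal sketch, reusing the ssm client from above (the /development path is just an example):

# Recursively fetch every parameter stored under the /development hierarchy
paginator = ssm.get_paginator('get_parameters_by_path')
env = {}
for page in paginator.paginate(Path='/development', Recursive=True, WithDecryption=True):
    for param in page['Parameters']:
        # Keep only the last segment of the name, e.g. /development/TEST_VARIABLE -> TEST_VARIABLE
        env[param['Name'].split('/')[-1]] = param['Value']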

Use case #3: Terraform

Last but not least, in the case of Terraform, you can use a data source to get a variable from SSM and reference it later in another resource, just like this:

data "aws_ssm_parameter" "test_variable" {
  name = "TEST_VARIABLE"
  with_decryption = true/false
}

resource "aws_lambda_function" "test_lambda" {
  function_name    = "test_lambda"
  ...

  environment {
    variables = {
      TEST_VARIABLE = data.aws_ssm_parameter.test_variable.value
    }
  }
}

In this case, we should be careful with the Terraform state and plan files, because the data source's value can sit there in plain sight. Also, with the Serverless Framework or anything CloudFormation-related (e.g. SAM, Troposphere), we should make sure the generated CloudFormation template files are listed in the .gitignore.
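As a starting point, a .gitignore for these projects could look like this (a minimal sketch; the exact paths depend on your setup):

# local environment variables
.env
# generated CloudFormation templates from the Serverless Framework
.serverless/
# Terraform state and plan files
*.tfstate
*.tfstate.backup
*.tfplan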

I really hope this post has been clear to all of you, and that you can put it to use in your projects.

Happy week, everyone! 🌻
