Level 300
In this post, the idea of the pipeline as a SaaS product is explored and applied in detail, building on the scenario described in the previous posts. First, a few key questions should be answered before getting started:
- What is the reference pipeline for Infrastructure as Code (IaC)?
There are many sources and reference architectures for IaC CI/CD pipelines; books like Patterns and Practices for Infrastructure as Code cover these patterns and considerations in depth. On the other hand, some vendors suggest best practices and workflows tailored to their services and tools. Figure 1 depicts the general steps of a tool-agnostic CI process for IaC.
Key Points
1- Decouple CI from CD.
2- Apply Trunk-Based Development.
3- Perform a risk management evaluation before applying changes.
4- Reduce the blast radius and apply microstacks for your infrastructure code.
5- Introduce cost and drift detection before applying changes.
6- Verify custom policies and compliance practices, such as policy tags and environment values, using policy as code.
7- Integrate practices like SBOM and manage vulnerabilities at scale.
8- Apply drift detection as an imperative practice to ensure the IaC remains aligned with constant change.
9- Don't forget that not all infrastructure is related to containers and orchestration; modern applications and cloud automation involve networking, serverless, multi-cloud deployments, and complex dependencies.
10- Remember that the repository structure, pipelines, and platforms mirror your organization's structure, teams, culture, and communication practices (Conway's law).
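As a minimal illustration of key point 8, drift can be detected cheaply on a schedule with the `-detailed-exitcode` flag of `terraform plan` (exit code 0 = no changes, 1 = error, 2 = pending changes). A sketch of how a scheduled job might interpret that code (the function name is hypothetical):

```python
# terraform plan -detailed-exitcode returns 0, 1, or 2; map that
# to a human-readable drift status for reporting or alerting.
def interpret_plan_exit(code: int) -> str:
    """Map the -detailed-exitcode result to a drift status."""
    statuses = {0: "in-sync", 1: "error", 2: "drift-or-pending-changes"}
    return statuses.get(code, "unknown")

print(interpret_plan_exit(2))  # → drift-or-pending-changes
```

Tools like driftctl go further by comparing the real cloud state against the state file, but the exit-code check is often enough for a first alerting loop.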
- What are the security and governance concerns based on AWS security best practices?
The Well-Architected Framework Security pillar summarizes the best practices. Many tools like Checkov or KICS incorporate those practices into their checks and translate the definitions into configuration properties for the infrastructure components. You can also create a custom library with your own requirements, apply policy as code in your CI/CD process, and prioritize preventive over detective controls using tools like Open Policy Agent (OPA).
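To make the policy-as-code idea concrete, here is a minimal sketch, in plain Python rather than Rego, of the kind of check an OPA policy would express: rejecting planned resources that lack required tags in the JSON output of `terraform show -json`. The required tag names are an assumption for illustration.

```python
import json

# Hypothetical policy: every resource must carry these tags.
REQUIRED_TAGS = {"Environment", "Owner"}

def violations(plan_json: str) -> list:
    """Return the addresses of planned resources missing a required tag."""
    plan = json.loads(plan_json)
    bad = []
    for rc in plan.get("resource_changes", []):
        after = rc.get("change", {}).get("after") or {}
        tags = after.get("tags") or {}
        if not REQUIRED_TAGS.issubset(tags):
            bad.append(rc["address"])
    return bad

# Tiny stand-in for a real plan export: one compliant bucket, one not.
sample = json.dumps({"resource_changes": [
    {"address": "aws_s3_bucket.logs",
     "change": {"after": {"tags": {"Environment": "dev", "Owner": "platform"}}}},
    {"address": "aws_s3_bucket.data",
     "change": {"after": {"tags": {"Environment": "dev"}}}},
]})
print(violations(sample))  # → ['aws_s3_bucket.data']
```

In a real pipeline the same rule would live in a Rego policy evaluated by OPA or Conftest, so it can be shared and versioned independently of the pipeline code.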
- How can you create self-service capabilities for the builder teams?
It's a best practice to create a service catalog for your development teams using services like AWS Service Catalog, custom blueprints in CodeCatalyst, or an enterprise catalog built on open source like Backstage. The main point is applying sound engineering practices while abstracting your patterns and use cases around the end user (the builder teams).
- When is this approach useful?
Implementing internal SaaS capabilities is useful provided that you have a platform team and adopt lean product practices to create, maintain, and release versions of your artifacts and portals. Don't forget your mission: make it possible for the builder teams to stay application-centric, and keep in mind that too many variations increase maintenance costs and reduce the opportunity for sharing between teams. For this scenario, remember that you are doing "DevOps for DevOps", and it's practical due to the size of the builder teams (fewer than 50 builders).
- How can you manage multi-environment deployments based on Terraform on AWS?
This is a critical point. A best practice is keeping a central account (the deployment account) that assumes roles to apply changes in the other accounts, following the AWS multi-account structure. For example, you can have one account for development, another for testing, another for staging, and another for production, and map each account to a workspace or folder inside your project structure. For this scenario the approach is one account per workspace.
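The account-per-workspace mapping can be expressed as a tiny lookup that builds the role ARN the deployment account assumes. The account IDs and role name below are placeholders, not real values:

```python
# Hypothetical workspace-to-account map for the "account per workspace" approach.
ACCOUNTS = {
    "dev":     "111111111111",
    "test":    "222222222222",
    "staging": "333333333333",
    "prod":    "444444444444",
}

def deployment_role_arn(workspace: str, role_name: str = "deployment-role") -> str:
    """Build the ARN of the role the central deployment account assumes
    in the target account that backs the given workspace."""
    return f"arn:aws:iam::{ACCOUNTS[workspace]}:role/{role_name}"

print(deployment_role_arn("dev"))  # → arn:aws:iam::111111111111:role/deployment-role
```

Keeping this map in one place (common variables, SSM, or the pipeline definition) avoids scattering account IDs across stacks.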
- What are the tools for each critical step?
In addition to the challenges of this blog, there are tools available for each step in Figure 1. For automating local development, you can use pre-commit with the base hooks. In the pipeline, KICS will be used for SAST on IaC; you can include Infracost for cost detection, apply the native Terraform testing framework or Terratest for tests, and export test results in JUnit format to visualize them with native CodeBuild reports or send them to a central TestOps suite. For drift detection you can use driftctl. If you want to enable automatic pull request review, you can introduce Atlantis.
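For reference, a minimal `.pre-commit-config.yaml` combining the base hooks with Terraform-specific ones might look like this (the `rev` versions are illustrative; pin the ones you actually use):

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.96.1
    hooks:
      - id: terraform_fmt
      - id: terraform_validate
```

Running these locally catches formatting and validation problems before the pipeline ever sees the commit.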
Hands On
It's time to create!
The project setup:
According to the first post, DevSecOps with AWS - IaC at scale - Building your own platform - Part 1, the Terragrunt code undergoes some additional changes.
First, the terragrunt.hcl file changes to pass the optional overwrite.auto.tfvars file to the Terraform CLI; this file is generated in the pipeline and picked up through the optional_var_files block.
Second, a pipeline environment variable was introduced to assign the profile name for the remote state, because the profile name differs between local environments and the pipeline.
locals {
  common_vars = read_terragrunt_config("${get_parent_terragrunt_dir()}/common/common.hcl")
  environment = read_terragrunt_config("${get_parent_terragrunt_dir()}/common/environment.hcl")
}

terraform {
  extra_arguments "init_arg" {
    commands  = ["init"]
    arguments = [
      "-reconfigure"
    ]
    env_vars = {
      TERRAGRUNT_AUTO_INIT = "true"
    }
  }

  extra_arguments "common_vars" {
    commands = get_terraform_commands_that_need_vars()
    required_var_files = [
      "${get_parent_terragrunt_dir()}/common/common.tfvars"
    ]
    optional_var_files = [
      "${get_parent_terragrunt_dir()}/overwrite.auto.tfvars"
    ]
  }
}

remote_state {
  backend = "s3"
  generate = {
    path      = "remotebackend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    profile        = "false" == local.environment.locals.pipeline ? local.common_vars.locals.backend_profile : "backend_profile"
    region         = local.common_vars.locals.backend_region
    bucket         = local.common_vars.locals.backend_bucket_name
    key            = "${local.common_vars.locals.project_folder}/${local.environment.locals.workspace}/${path_relative_to_include()}/${local.common_vars.locals.backend_key}"
    dynamodb_table = local.common_vars.locals.backend_dynamodb_lock
    encrypt        = local.common_vars.locals.backend_encrypt
  }
}

generate = local.common_vars.generate
The CDK project has the common structure for Python projects; here are some key files:
The pipeline step definitions:
Figure 3. Repository files - Steps Definitions.
Create plan
First, create a plan to move through the pipeline steps. You can use an assume_role block to set up the AWS providers for the backend and infrastructure deployments; however, to keep the same code in the pipeline and in the IDE, the necessary profiles are created with the AWS CLI, and a post-build command cleans the $HOME/.aws folder. In the pre-build commands, the workspace is set as an environment variable and a new overwrite.auto.tfvars file is created to use the previously created profiles. Finally, the plan is created and saved in the /tmp/tfplan folder.
version: '0.2'
env:
  variables:
    pipeline: 'true'
  parameter-store: {}
phases:
  install:
    runtime-versions:
      python: '3.12'
    commands:
      # Get the caller identity
      - caller_identity=$(aws sts get-caller-identity --output json)
      # Extract the account ID from the response
      - account_backend_role=$(echo $caller_identity | jq -r '.Account')
      - echo "backend Role $backend_role"
      - backend_role=arn:aws:iam::$account_backend_role:role/$backend_role
      # Assume the role and get the temporary credentials
      - response=$(aws sts assume-role --role-arn "$backend_role" --role-session-name "$project_name")
      # Extract the necessary values from the response
      - access_key_id=$(echo $response | jq -r '.Credentials.AccessKeyId')
      - secret_access_key=$(echo $response | jq -r '.Credentials.SecretAccessKey')
      - session_token=$(echo $response | jq -r '.Credentials.SessionToken')
      # Create the temporary profile in the AWS credentials file
      - aws configure set aws_access_key_id "$access_key_id" --profile backend_profile
      - aws configure set aws_secret_access_key "$secret_access_key" --profile backend_profile
      - aws configure set aws_session_token "$session_token" --profile backend_profile
      - echo "Temporary backend profile 'backend_profile' created successfully."
      # Create the deployment profile
      - response=$(aws sts assume-role --role-arn arn:aws:iam::$deployment_account:role/$read_role --role-session-name "$project_name" --profile backend_profile)
      - access_key_id=$(echo $response | jq -r '.Credentials.AccessKeyId')
      - secret_access_key=$(echo $response | jq -r '.Credentials.SecretAccessKey')
      - session_token=$(echo $response | jq -r '.Credentials.SessionToken')
      # Create the temporary profile in the AWS credentials file
      - aws configure set aws_access_key_id "$access_key_id" --profile deployment_profile
      - aws configure set aws_secret_access_key "$secret_access_key" --profile deployment_profile
      - aws configure set aws_session_token "$session_token" --profile deployment_profile
      - aws sts get-caller-identity --output json --profile deployment_profile
      - echo "Temporary deployment profile 'deployment_profile' created successfully."
      - ls -al
    run-as: root
  pre_build:
    commands:
      - export TF_VAR_env=$workspace
      - printf "profile={\n\"%s\"={\n \"profile\"=\"deployment_profile\" \n \"region\"= \"%s\"\n}\n}" $workspace $deployment_region > overwrite.auto.tfvars
      - echo "Creating tfplan in plan folder for scanning"
      - ls -R --ignore=venv
  build:
    commands:
      - terragrunt run-all plan -input=false -no-color -lock=false --terragrunt-out-dir /tmp/tfplan --terragrunt-json-out-dir /tmp/tfplan --terragrunt-exclude-dir .
  post_build:
    commands:
      - mv /tmp/tfplan ./tfplan
      - rm -rf ~/.aws
reports:
  report_group_name:
    files:
      - ./tfplan/*
    file-format: JunitXml
artifacts:
  files:
    - 'tfplan/**/*'
  exclude-paths:
    - './venv/**/*'
    - '**/.terraform/**/*'
Run SAST
In this step, KICS is the tool for the demonstration: just a simple Docker step to run the checks and create reports. The same approach can be applied to add more tools like Checkov, tfsec, Trivy, Terrascan, and others.
version: '0.2'
env:
  variables:
    pipeline: 'true'
  parameter-store: {}
phases:
  install:
    runtime-versions:
      python: '3.12'
    commands:
      - ls -al
    run-as: root
  pre_build:
    commands:
      - ls -R
      - docker --version
  build:
    commands:
      - echo "Running kics"
      - docker run -t -v ./tfplan:/path checkmarx/kics --ci scan -p /path --report-formats "all" --exclude-gitignore --output-path /path --fail-on "critical"
    run-as: root
  post_build:
    commands:
      - echo "finished!"
reports:
  report_group_name:
    files:
      - ./tfplan/junit-results.xml
    file-format: JunitXml
artifacts:
  files:
    - '**/*'
  exclude-paths:
    - './venv/**/*'
    - '**/.terraform/**/*'
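Beyond failing the build on critical findings, the JSON report KICS produces can be post-processed to build custom gates. A hedged sketch, assuming the report exposes a severity_counters map (verify against the report format of the KICS version you run):

```python
import json

def exceeds_threshold(report_text: str, max_high: int = 0) -> bool:
    """Return True when the scan reports more HIGH findings than allowed."""
    counters = json.loads(report_text).get("severity_counters", {})
    return counters.get("HIGH", 0) > max_high

# Tiny stand-in for a real KICS JSON report.
sample = json.dumps({"severity_counters": {"HIGH": 2, "MEDIUM": 5, "LOW": 1}})
print(exceeds_threshold(sample))  # → True
```

The same pattern works for Checkov or Trivy reports; only the JSON keys change.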
Deployment step
This deployment step applies the changes in the dev environment, gets the current git tag, and updates a Parameter Store parameter with the current infrastructure version.
version: '0.2'
env:
  variables:
    pipeline: 'true'
phases:
  install:
    runtime-versions:
      python: '3.12'
    commands:
      # Get the caller identity
      - caller_identity=$(aws sts get-caller-identity --output json)
      # Extract the account ID from the response
      - account_backend_role=$(echo $caller_identity | jq -r '.Account')
      - echo "backend Role $backend_role"
      - backend_role=arn:aws:iam::$account_backend_role:role/$backend_role
      # Assume the role and get the temporary credentials
      - response=$(aws sts assume-role --role-arn "$backend_role" --role-session-name "$project_name")
      # Extract the necessary values from the response
      - access_key_id=$(echo $response | jq -r '.Credentials.AccessKeyId')
      - secret_access_key=$(echo $response | jq -r '.Credentials.SecretAccessKey')
      - session_token=$(echo $response | jq -r '.Credentials.SessionToken')
      # Create the temporary profile in the AWS credentials file
      - aws configure set aws_access_key_id "$access_key_id" --profile backend_profile
      - aws configure set aws_secret_access_key "$secret_access_key" --profile backend_profile
      - aws configure set aws_session_token "$session_token" --profile backend_profile
      - echo "Temporary backend profile 'backend_profile' created successfully."
      # Create the deployment profile
      - response=$(aws sts assume-role --role-arn arn:aws:iam::$deployment_account:role/$write_role --role-session-name "$project_name" --profile backend_profile)
      - access_key_id=$(echo $response | jq -r '.Credentials.AccessKeyId')
      - secret_access_key=$(echo $response | jq -r '.Credentials.SecretAccessKey')
      - session_token=$(echo $response | jq -r '.Credentials.SessionToken')
      # Create the temporary profile in the AWS credentials file
      - aws configure set aws_access_key_id "$access_key_id" --profile deployment_profile
      - aws configure set aws_secret_access_key "$secret_access_key" --profile deployment_profile
      - aws configure set aws_session_token "$session_token" --profile deployment_profile
      - aws sts get-caller-identity --output json --profile deployment_profile
      - echo "Temporary deployment profile 'deployment_profile' created successfully."
      - ls -al
  pre_build:
    commands:
      - export TF_VAR_env=$workspace
      - printf "profile={\n\"%s\"={\n \"profile\"=\"deployment_profile\" \n \"region\"= \"%s\"\n}\n}" $workspace $deployment_region > overwrite.auto.tfvars
      - ls -R --ignore=venv
  build:
    commands:
      - echo 'Deploying ... '
      - terragrunt run-all apply --terragrunt-non-interactive --terragrunt-use-partial-parse-config-cache --terragrunt-exclude-dir .
      - export current_tag=`git describe --abbrev=0 --tag`
      - echo "Updating parameter with new version"
      - aws ssm put-parameter --name /$project_name/$workspace/version --type "String" --value $current_tag --overwrite
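Since the apply step publishes the output of `git describe --abbrev=0 --tag` to Parameter Store, a small guard can ensure only well-formed release tags become the recorded version. A hypothetical check (the pattern below is an assumption about your tagging convention, not part of the pipeline above):

```python
import re

def is_release_tag(tag: str) -> bool:
    """Accept tags like 1.2.3 or v1.2.3 (a hypothetical semver convention)."""
    return re.fullmatch(r"v?\d+\.\d+\.\d+", tag) is not None

print(is_release_tag("v1.4.0"))   # → True
print(is_release_tag("nightly"))  # → False
```

Running such a check before the put-parameter call keeps arbitrary tags from overwriting the official infrastructure version.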
In the pipeline you can add a manual approval step for peer review, and watch the pipeline state in a Slack channel or through custom notifications in Microsoft Teams.
The Pipeline in AWS CodePipeline
Figure 4. Deployment AWS Code tools.
The code!
In the next post the code will be shared. Thanks for reading and sharing! ☺️⭐