Introduction
In today's blog, we’ll dive deep into learning and hands-on practice with CloudPosse Atmos. I'm not sure how familiar people are with this technology, but I’m confident that after reading this and visiting Atmos' official documentation, you’ll be intrigued by it. In this blog, I've included some content from the official sources, but I've also added my own insights and projects. I hope you enjoy this technical blog, which I’m presenting in a slightly different format today. It might be a bit longer compared to my previous posts, but I only publish blogs about topics I have hands-on experience with. I hope you find it valuable—let’s get started!
1. What is SweetOps?
SweetOps is a methodology for building modern, secure infrastructure on top of Amazon Web Services (AWS). It provides a toolset, library of reusable Infrastructure as Code (IaC), and opinionated patterns to help you bootstrap robust cloud native architectures. Built in an Open Source first fashion by Cloud Posse, it is utilized by many high performing startups to ensure their cloud infrastructure is an advantage instead of a liability. In short, SweetOps makes working in the DevOps world Sweet!
Who is this for?
SweetOps is for DevOps or platform engineering teams that want an opinionated way to build software platforms in the cloud. If the below sounds like you, then SweetOps is what you're looking for:
1. You're on AWS.
2. You're using Terraform as your IaC tool.
3. Your platform needs to be secure and potentially requires passing compliance audits (PCI, SOC2, HIPAA, HITRUST, FedRAMP, etc.).
4. You don't want to reinvent the wheel.
With SweetOps you can implement the following complex architectural patterns with ease:
1. An AWS multi-account Landing Zone built on strong, well-established principles including Separation of Concerns and Principle of Least Privilege (POLP).
2. Multi-region, globally available application environments with disaster recovery capabilities.
3. Foundational AWS-focused security practices that make complex compliance audits a breeze.
4. Microservice architectures that are ready for massive scale running on Docker and Kubernetes.
5. Reusable service catalogs and components to promote reuse across an organization and accelerate adoption.
2. What is Atmos, and what is it used for?
Cloud Posse Atmos is an open-source tool for managing and orchestrating infrastructure and applications on cloud providers like AWS, GCP, and Azure. It's designed to simplify and automate the process of provisioning, deploying, and managing infrastructure and applications in a cloud-agnostic way.
Atmos provides a declarative configuration language that allows you to define your infrastructure and applications in a human-readable format, and then automates the provisioning and deployment process using Terraform and other tools.
Some key features of Cloud Posse Atmos include:
- Declarative configuration language
- Cloud-agnostic architecture
- Support for multiple cloud providers (AWS, GCP, Azure)
- Integration with Terraform and other tools
- Automated provisioning and deployment
- Support for microservices and containerized applications
Cloud Posse Atmos is often used by DevOps teams and cloud engineers to streamline their infrastructure and application management workflows, and to promote infrastructure-as-code (IaC) practices.
3. Launch a t2.micro EC2 instance (the apt commands below assume Ubuntu), update the machine, and run the commands below to install Atmos
# Update and install utilities
sudo apt-get update && sudo apt-get install -y apt-utils curl
# Add the Cloud Posse repository
curl -1sLf 'https://dl.cloudsmith.io/public/cloudposse/packages/cfg/setup/bash.deb.sh' | sudo bash
# Update package lists
sudo apt-get update
# Install atmos version 1.84.0
sudo apt-get install -y atmos=1.84.0-1
# Verify installation
atmos version
Also, make sure you run the command that clones the Cloud Posse repo as a single line. With that, we have successfully installed Atmos on our system.
4. After that, we need to create components, stacks, and some supporting files. Make sure you follow the folder structure below, as Atmos expects this layout. Before creating the structure, create a main project folder, and then set up the following structure within it.
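For reference, here is the layout this post builds up, assembled from the paths used in the steps that follow (the weather component directory, the catalog and deploy stacks, and atmos.yaml at the project root); the name of the root folder itself is up to you:

.
├── atmos.yaml
├── components
│   └── terraform
│       └── weather
│           ├── main.tf
│           ├── outputs.tf
│           ├── variables.tf
│           └── versions.tf
└── stacks
    ├── catalog
    │   └── station.yaml
    └── deploy
        └── dev.yaml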
5. Now, we will start writing the Terraform files within that folder structure. Let's get started!
6. First, we need to configure atmos.yaml for our project. Add the following code to that file.
base_path: "./"

components:
  terraform:
    base_path: "components/terraform"
    apply_auto_approve: false
    deploy_run_init: true
    init_run_reconfigure: true
    auto_generate_backend_file: false

stacks:
  base_path: "stacks"
  included_paths:
    - "deploy/**/*"
  excluded_paths:
    - "**/_defaults.yaml"
  name_pattern: "{stage}"

logs:
  file: "/dev/stderr"
  level: Info
To configure Atmos for your project, we create this atmos.yaml file to specify where Atmos can find the Terraform components and Atmos stacks. This file allows you to configure almost everything in Atmos. Note the name_pattern: "{stage}" setting: it means each stack is addressed by its stage name alone, which is why the commands later in this post use -s dev.
7. After that, navigate to the weather component directory (components/terraform/weather) to write the Terraform code for the module. First, create a variables.tf file and add the following code to it.
You'll see that everything is just plain Terraform (HCL), with nothing specific to Atmos. That's intentional: we want to demonstrate that Atmos works seamlessly with plain Terraform. Atmos introduces conventions around how you use Terraform with its framework, which will become more evident in the later sections.
variables.tf
variable "stage" {
description = "Stage where it will be deployed"
type = string
}
variable "location" {
description = "Location for which the weather."
type = string
default = "Los Angeles"
}
variable "options" {
description = "Options to customize the output."
type = string
default = "0T"
}
variable "format" {
description = "Format of the output."
type = string
default = "v2"
}
variable "lang" {
description = "Language in which the weather is displayed."
type = string
default = "en"
}
variable "units" {
description = "Units in which the weather is displayed."
type = string
default = "m"
}
To make the best use of Atmos, ensure your root modules are highly reusable by accepting parameters, allowing them to be deployed multiple times without conflicts. This also usually means provisioning resources with unique names.
main.tf
The main.tf file is where the main implementation of your component resides. This is where you define all the business logic for what you're trying to achieve: the core functionality of your root module. If this file becomes too large or complex, you can break it into multiple files in a way that makes sense. However, sometimes that is also a red flag, indicating that the component is trying to do too much and should be broken down into smaller components.
In this example, we define a local variable that creates a URL using the variable inputs we receive. We also set up a data source to perform an HTTP request to that endpoint and retrieve the current weather. Additionally, we write this output to a file to demonstrate a stateful resource.
locals {
  url = format("https://wttr.in/%v?%v&format=%v&lang=%v&u=%v",
    urlencode(var.location),
    urlencode(var.options),
    urlencode(var.format),
    urlencode(var.lang),
    urlencode(var.units),
  )
}

data "http" "weather" {
  url = local.url

  request_headers = {
    User-Agent = "curl"
  }
}

# Now write this to a file (as an example of a resource)
resource "local_file" "cache" {
  filename = "cache.${var.stage}.txt"
  content  = data.http.weather.response_body
}
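To see what this data source will actually fetch, you can reproduce the request by hand. Assuming the dev values used later in this post (location India, options '0', an empty format, lang en, units m), the constructed URL works out to https://wttr.in/India?0&format=&lang=en&u=m, and since wttr.in keys its output off the User-Agent header, we pass the same header the data source sets:

# Reproduce the request the http data source will make (using the dev values)
curl -H "User-Agent: curl" "https://wttr.in/India?0&format=&lang=en&u=m"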
versions.tf
The versions.tf file is where provider pinning is typically defined. Provider pinning increases the stability of your components and ensures consistency between deployments in multiple environments.
terraform {
  required_version = ">= 1.0.0" # you can use the latest version as well
  required_providers {
    # Pin the providers this component actually uses (http and local)
    http  = { source = "hashicorp/http" }
    local = { source = "hashicorp/local" }
  }
}
outputs.tf
The outputs.tf file is where, by convention in Terraform, you define any outputs you want to expose from your root module. Outputs are crucial for passing state between root modules and can be used with remote state or Atmos functions to retrieve the state of other components. In object-oriented terms, think of outputs as the 'public' attributes of the module, intended to be accessed by other modules. This convention helps maintain clarity and organization within your Terraform configurations.
output "weather" {
value = data.http.weather.response_body
}
output "url" {
value = local.url
}
output "stage" {
value = var.stage
description = "Stage where it was deployed"
}
output "location" {
value = var.location
description = "Location of the weather report."
}
output "lang" {
value = var.lang
description = "Language which the weather is displayed."
}
output "units" {
value = var.units
description = "Units the weather is displayed."
}
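Once the component has been deployed (step 15 below), these outputs can be read back through Atmos, which passes standard Terraform subcommands through to the component:

# Read the outputs of the station component in the dev stack
atmos terraform output station -s dev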
The sequence in which you define these files does not matter.
8. Now we are configuring our stack for deployment. In a stack manifest, every component is declared under the components.terraform section:
components:
  terraform:
    <name-of-component>:
To specify which component to use, set the metadata.component property to the path of the component's directory, relative to the components.base_path defined in atmos.yaml. In our case, components.base_path is components/terraform, so you can simply specify weather as the path:
components:
  terraform:
    station:
      metadata:
        component: weather
9. Next, go to /stacks/catalog and create a file named station.yaml.
components:
  terraform:
    station:
      metadata:
        component: weather
      vars:
        location: Los Angeles
        lang: en
        format: ''
        options: '0'
        units: m
10. Next, we'll define the environment-specific configurations for our Terraform root module. We'll create a separate file for each environment and stage. In our case, we have three environments: dev, staging, and prod.
When Atmos processes this stack configuration, it will first import and deep-merge all the variables from the imported files, then apply the inline configuration. While the order of keys in a YAML map doesn't affect behavior, lists are strictly ordered, so the sequence of imports is important.
11. Define the configuration for the dev environment.
In the dev stack configuration, Atmos first processes the imports in the order defined. It then applies the global vars specified in the top-level section. Include only those vars in the globals that are applicable to every single component in the stack. For variables that aren't universally applicable, define them on a per-component basis.
For example, by setting var.stage to dev at a global level, we assume that every component in this stack has a stage variable.
Finally, in the component-specific configuration for the station component, set the fine-tuned parameters for this environment. Everything else will be inherited from its baseline configuration. There are no strict rules about where to place configurations; organize them in a way that makes logical sense for your infrastructure's data model.
To accomplish this, go to /stacks/deploy and create a file named dev.yaml.
vars:
  stage: dev

import:
  - catalog/station

components:
  terraform:
    station:
      vars:
        location: India
        lang: en
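Before deploying anything, it's worth confirming that the manifests parse and that the deep merge produced what you expect. The two Atmos CLI commands below validate the stack configurations and print the fully merged configuration for the station component in dev, where location should now read India:

# Check that all stack manifests are syntactically valid and consistent
atmos validate stacks

# Show the final, deep-merged configuration for station in the dev stack
atmos describe component station -s dev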
12. In this demo, we will focus on the dev environment only. If you want to create configurations for staging and prod, you can refer to the official Atmos documentation for guidance.
13. After completing all the steps, the final file and folder structure should match the tree shown in step 4.
14. Now that we have written all the modules in Terraform, we still need to install Terraform itself. Let's proceed with the installation.
1. sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
2. wget -O- https://apt.releases.hashicorp.com/gpg | \
   gpg --dearmor | \
   sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
3. gpg --no-default-keyring \
   --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
   --fingerprint
4. echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
   https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
   sudo tee /etc/apt/sources.list.d/hashicorp.list
5. sudo apt update
6. sudo apt-get install terraform
7. terraform version
15. Now, let's deploy the component using the command below. Make sure to run this command from the root folder where the atmos.yaml file is located.

atmos terraform apply station -s dev
16. Here, you can view either a graph or a report for the country India. Additionally, if you need to run the init or plan commands, you can execute them as well, as shown below.
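They follow the same pattern as apply: Atmos forwards the Terraform subcommand to the component in the selected stack, for example:

# Initialize providers and modules for the component
atmos terraform init station -s dev

# Preview the changes without applying them
atmos terraform plan station -s dev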
-17-. Atmos can change how you think about the Terraform code you write to build your infrastructure.
When you design cloud architectures with Atmos, you will first break them apart into pieces called components. Then, you will implement Terraform "root modules" for each of those components. Finally, compose your components in any way you like using stacks, without the need to write any code or messy templates for code generation.
Conclusion
In this blog, we've walked through the process of setting up and configuring Atmos with Terraform. We started by creating the necessary directory structure and defining the atmos.yaml configuration file. We then wrote the Terraform code for our components and environment-specific configurations.
We covered the steps to install Terraform and deploy the module, ensuring that everything was executed from the correct directory. By the end of this guide, you should have a solid understanding of how Atmos integrates with Terraform and how to manage infrastructure using this framework.
Feel free to explore Atmos further by consulting the official documentation for additional environments like staging and production. With this setup, you're well-equipped to manage and scale your infrastructure effectively.
Thank you for following along, and I hope you found this tutorial helpful!