Terraform Series
- Part 1: Introduction
- Part 2: Creating the Server
- Part 3: Provisioning the Server
- Part 4: Managing Terraform State
- Part 5: Cleaner Code with Terraform Modules
- Part 6: Loops with Terraform
- Part 7: Conditionals with Terraform
- Part 8: Testing Terraform Code
Up until now, we have focused on the functionality Terraform provides: creating servers, adjusting network settings, configuring domain names, running Bash scripts remotely, creating storage units, and moving Terraform state to shared storage.
In this post, we're going to take a break and see how we can write cleaner, DRYer, and reusable Terraform code. The main keyword here is "reusable" because this post is mostly about Terraform modules.
Terraform Modules
Modules are the key ingredient to writing reusable, maintainable, and testable Terraform code. [1]
Creating a module in Terraform is easy, because any Terraform code stored in a directory is considered part of the same module. For the same reason, we can say that modules in Terraform are implicit, and that's why creating a module feels like a bit of magic.
Let's take a look at our Terraform files we have created so far:
- `provider.tf`: Defines the SSH variables Terraform uses to connect to our droplet. By the way, `provider.tf` is a pretty lousy name for this file. :-) We'll refactor it.
- `domain.tf`: Points our domain to our droplet and defines its CNAME record.
- `space.tf`: Declares our storage unit. It's a bucket in AWS and a Space in DigitalOcean lingo.
- `main.tf`: Configures pretty much everything that's left, including the provider version, the remote backend, and our beloved droplet.
We created all of these files under the same directory. That's why they all belong to the same Terraform module. Now let's break it down so that we can reuse our code to create multiple droplets that can be accessed by different domain names. This way, we can have an infrastructure supporting both production and staging environments. We are going to leave the buckets and remote state files out of this article to keep things simple.
We are going to build both environments with reusable Terraform modules. After we convert our codebase, our directory structure will look roughly like this (a sketch; the `staging` directory mirrors `production`):
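```
.
├── modules
│   ├── domain
│   │   ├── main.tf
│   │   ├── vars.tf
│   │   └── versions.tf
│   └── server
│       ├── main.tf
│       ├── outs.tf
│       ├── vars.tf
│       └── versions.tf
├── production
│   ├── main.tf
│   ├── vars.tf
│   └── versions.tf
└── staging
    ├── main.tf
    ├── vars.tf
    └── versions.tf
```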
Let me get one thing out of the way first: `versions.tf`. I'm using Terraform v0.13, and you need to define your provider requirements starting with this version. So all my `versions.tf` files are the same:
terraform {
  required_providers {
    digitalocean = {
      source = "terraform-providers/digitalocean"
    }
  }
  required_version = ">= 0.13"
}
Directory structure with modules
As I mentioned earlier, each directory behaves as a separate module. I decided to divide my infrastructure into two modules: `domain` and `server`. Each of my environments will have a different server and a different domain pointing to that server. That's why I created a directory for each module under `modules`.
Let's start with the `domain` module. I copied the code from `domain.tf` into `modules/domain/main.tf`:
resource "digitalocean_domain" "domain" {
name = var.domain_name
ip_address = var.server_ipv4
}
resource "digitalocean_record" "cname_www" {
domain = digitalocean_domain.domain.name
type = "CNAME"
name = "www"
value = "@"
}
You should notice a change inside the `digitalocean_domain` resource: the values of `name` and `ip_address` are not hardcoded anymore.
Module inputs
So I created a `vars.tf` file to define my module variables:
variable "domain_name" {
description = "Domain name like yourdomain.com"
type = string
}
variable "server_ipv4" {
description = "Server's IP address where the domain should point to"
type = string
}
Now I'm able to use these variables as inputs in my `domain` module. Whenever we use the `domain` module, we will have to provide both variables to create these resources. That's how we'll be able to define domains with different names and separate IP addresses pointing to different servers.
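As a quick preview, calling the module looks like this; the values here are hypothetical placeholders, and we'll wire up the real ones in `production/main.tf` below:

```
module "domain" {
  source      = "../modules/domain"
  domain_name = "stagingdomain.com" # hypothetical domain
  server_ipv4 = "203.0.113.10"      # placeholder; normally another module's output
}
```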
Now let's move on to the `server` module. Again, I copied my code from my old `main.tf` file here:
resource "digitalocean_droplet" "server" {
image = "ubuntu-20-04-x64"
name = var.server_name
region = "ams3"
size = var.server_size
ssh_keys = [
var.ssh_fingerprint
]
connection {
host = self.ipv4_address
user = "root"
type = "ssh"
private_key = file(var.ssh_private_key)
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"export PATH=$PATH:/usr/bin",
# install nginx
"sudo apt-get update",
"sudo apt-get -y install nginx"
]
}
}
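One thing to keep in mind: the `remote-exec` provisioner runs only once, when the droplet is first created, so later changes to its `inline` commands won't touch servers that already exist.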
I created a `vars.tf` file for my `server` module as well:
variable "server_name" {
description = "The name of the server"
type = string
}
variable "server_size" {
description = "The size of the server"
type = string
}
variable "ssh_fingerprint" {
description = "Fingerprint of the SSH key that is allowed to connect to the server"
type = string
}
variable "ssh_private_key" {
description = "Private key of the SSH key that is allowed to connect to the server"
type = string
}
Setting the SSH variables aside, you can see that we are now able to configure the server name and the server size through this module's inputs. That's how we'll be able to create a production server with a bigger size while keeping the staging server at a smaller size.
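As a side note, Terraform 0.13 also lets you guard inputs like these with `validation` blocks. Here is a sketch of how the `server_size` declaration could be extended; the regex is just an assumption based on DigitalOcean's basic size slugs:

```
variable "server_size" {
  description = "The size of the server"
  type        = string

  # Hypothetical guard: DigitalOcean's basic droplet size slugs start with "s-".
  validation {
    condition     = can(regex("^s-", var.server_size))
    error_message = "The server_size must be a DigitalOcean size slug such as s-1vcpu-1gb."
  }
}
```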
We now have both the `domain` and `server` modules to build ourselves an environment. Let's start with `production`. Here is what it looks like to create an environment with our modules at `production/main.tf`:
module "server" {
source = "../modules/server"
server_name = "terraform-sandbox"
server_size = "s-1vcpu-1gb"
ssh_fingerprint = var.ssh_fingerprint
ssh_private_key = var.ssh_private_key
}
module "domain" {
source = "../modules/domain"
domain_name = "productiondomain.com"
server_ipv4 = module.server.server_ipv4
}
I'm passing the SSH variables through here, as we're going to provide them as command-line arguments. As you can see, we declare the name and the size for the `server` module. Similarly, we give the name for our `domain` module. It's all hardcoded now. However, the `server_ipv4` argument for the `domain` module looks a bit strange, doesn't it? :-)
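Before we unpack that, one practical note: for `var.ssh_fingerprint` and `var.ssh_private_key` to resolve in `production/main.tf`, the `production` directory needs its own variable declarations, presumably in a `production/vars.tf` like the sketch below, with the values supplied via something like `terraform apply -var "ssh_fingerprint=..." -var "ssh_private_key=..."`:

```
variable "ssh_fingerprint" {
  description = "Fingerprint of the SSH key that is allowed to connect to the server"
  type        = string
}

variable "ssh_private_key" {
  description = "Private key of the SSH key that is allowed to connect to the server"
  type        = string
}
```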
Module outputs
What you see as `module.server.server_ipv4` is the usage of module outputs. We have isolated our `domain` and `server` modules, but the `domain` configuration requires the `server`'s IP address. We can access a module's values by defining an output in that module. Here is the content of `modules/server/outs.tf`:
output "server_ipv4" {
value = digitalocean_droplet.server.ipv4_address
}
By defining the `server_ipv4` output, we grant the module user access to the `digitalocean_droplet` resource's `ipv4_address` attribute within the `server` instance. We access the output by following this structure:
`module.MODULE_NAME.OUTPUT_NAME`
In our case, this becomes:
`module.server.server_ipv4`
This way, Terraform will create the server first, and then use its IP address to configure our domain.
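Terraform infers this ordering automatically from the `module.server.server_ipv4` reference, so we don't need an explicit `depends_on`.

With inputs and outputs in place, a staging environment becomes a near copy of production. Here is a sketch of what `staging/main.tf` could look like; the server name and domain are hypothetical:

```
module "server" {
  source          = "../modules/server"
  server_name     = "terraform-sandbox-staging" # hypothetical name
  server_size     = "s-1vcpu-1gb"               # staging can stay on a small size
  ssh_fingerprint = var.ssh_fingerprint
  ssh_private_key = var.ssh_private_key
}

module "domain" {
  source      = "../modules/domain"
  domain_name = "stagingdomain.com" # hypothetical domain
  server_ipv4 = module.server.server_ipv4
}
```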
Module locals
Apart from inputs and outputs, Terraform provides another structure to make our codebase DRY: locals. Unlike inputs and outputs, locals are not shared across modules. Their usage is limited to encapsulating local values, much like constants in programming languages, but only within the same module.
For example, instead of hardcoding our server image and region, let's encapsulate them within `modules/server/vars.tf`:
locals {
  server_image  = "ubuntu-20-04-x64"
  server_region = "ams3"
}
Then, we can go ahead and use them in our `modules/server/main.tf` file:
resource "digitalocean_droplet" "server" {
image = local.server_image
name = var.server_name
region = local.server_region
size = var.server_size
ssh_keys = [
var.ssh_fingerprint
]
# ...
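The design choice here: since the image and the region are the same for every environment, locals fit better than module inputs. Callers of the `server` module can't override them, which is exactly what we want for these constants.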
A note on inputs, outputs, and locals
I just want to make a small note here: none of these three structures is unique to modules. As you can imagine, since modules are implicit, we can theoretically use inputs, outputs, and locals outside of the context of a module. In practice, though, every usage falls inside some module, because each directory in Terraform is a module.
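For instance, the `production` directory is itself the root module, so we could re-export the server IP there. A hypothetical `production/outs.tf` like this would make the value show up when you run `terraform output`:

```
output "server_ipv4" {
  value = module.server.server_ipv4
}
```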
[1]: Terraform Up & Running: Writing Infrastructure as Code by Yevgeniy Brikman (2nd edition)
Cover photo by Andrej Lišakov