This post is the second in a series of three on deploying Elixir:
- Building Releases with Docker & Mix
- Terraforming an AWS EC2 Instance
- Deploying Releases with Ansible
In Pt. 1 we built our production release locally with Docker. Now that we have a release, we need somewhere to run it. Let's walk through the basics of Terraform, by HashiCorp, which lets us write our infrastructure as code, albeit a very simple infrastructure for our webhook processor app.
If you did not follow along with that last post, you can grab the complete code here: https://github.com/jonlunsford/webhook_processor
Terraform is:
A workflow to provision infrastructure for private cloud, public cloud, and external services. Build reusable Terraform templates to define the topology of infrastructure using code.
The goals for this post are:
- Install Terraform
- Configure Terraform for AWS
- Create an EC2 instance
Installing Terraform
If you're using Homebrew on macOS, you can run:
$ brew install terraform
Otherwise, you can grab a binary for your system here. Verify the install worked properly by opening a new shell and typing:
$ terraform
You should see some usage output. If you see an error, it's possible the binary is not in your $PATH; ensure that the directory Terraform was installed in is part of your $PATH.
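For example, if you installed the binary by hand, you can check the version and, if needed, add its directory to your PATH. The directory below is only a placeholder; use wherever you actually placed the binary:
$ terraform -version
$ export PATH="$PATH:/path/to/terraform-directory"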
Configuring Terraform for AWS
With Terraform installed, we can begin building infrastructure on AWS. You'll need an AWS account; grab a free one here if you don't already have one. Next you'll need valid credentials, which can be created in the IAM Management Console.
The last prerequisite is having an SSH key pair ready to supply so we can eventually connect to our instance, both manually and through Ansible, as outlined in the next post. Let's generate a new key to be used with our instance:
ssh-keygen -f ./rel/webhook_processor_key
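ssh-keygen writes two files: the private key at ./rel/webhook_processor_key and the public key at ./rel/webhook_processor_key.pub, which Terraform will read below. You can confirm both exist with:
$ ls ./rel/webhook_processor_key*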
Configuration is composed in *.tf files; this is how we will describe the infrastructure we want. The following is the entire config file to build an EC2 instance with a new key pair and a security group allowing HTTP + SSH access. Save this file at ./rel/terraform/webhook_processor.tf:
# ./rel/terraform/webhook_processor.tf
provider "aws" {
  region     = "us-west-1" # Set the region available to you
  access_key = "ACCESS_KEY_HERE"
  secret_key = "SECRET_KEY_HERE"
}

# A security group is required to configure allowed traffic on the instance
resource "aws_security_group" "webhook_processor_sg" {
  name        = "allow-all-sg"
  description = "Allow all inbound ssh+http traffic"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# A key pair is required to ultimately allow SSH access
resource "aws_key_pair" "webhook_processor_kp" {
  key_name   = "webhook_processor_deploy"
  public_key = file("../webhook_processor_key.pub") # Read from the file we generated
}

# Finally, create the EC2 instance using the above references
resource "aws_instance" "webhook_processor" {
  ami             = "ami-0e4ae7403dc481431" # Debian Buster AMI
  instance_type   = "t2.micro"              # Should be eligible for the free tier
  key_name        = aws_key_pair.webhook_processor_kp.key_name
  security_groups = [aws_security_group.webhook_processor_sg.name]
}

output "webhook_processor_host" {
  value = aws_instance.webhook_processor.public_dns
}
Replace ACCESS_KEY_HERE and SECRET_KEY_HERE with your own. You can hard code them this way for now, or place your credentials in ~/.aws/credentials; that file should look like:
[default]
aws_access_key_id = ACCESS_KEY_HERE
aws_secret_access_key = SECRET_KEY_HERE
If the access_key and secret_key are omitted from the provider block, Terraform will automatically look for the above file, in which case the provider block will look like:
provider "aws" {
region = "us-west-1"
}
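If you'd rather not keep a credentials file at all, the AWS provider also reads the standard environment variables, so you can export the same values in your shell:
$ export AWS_ACCESS_KEY_ID=ACCESS_KEY_HERE
$ export AWS_SECRET_ACCESS_KEY=SECRET_KEY_HERE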
NOTE: If you use a region other than us-west-1, the AMI for Debian Buster will be different; AMIs are region specific. There are a few ways to find an AMI on AWS. Once you've found one, be sure to replace the example above.
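If you'd rather not track the AMI down by hand, Terraform can look it up with an aws_ami data source. The sketch below assumes Debian's official AWS account ID and the Buster image naming scheme, so verify both before relying on it:
# Look up the most recent Debian Buster AMI in the configured region
data "aws_ami" "debian_buster" {
  most_recent = true
  owners      = ["136693071363"] # Assumed: Debian's official AWS account ID

  filter {
    name   = "name"
    values = ["debian-10-amd64-*"] # Assumed naming pattern for Buster images
  }
}
With that in place, the instance could reference ami = data.aws_ami.debian_buster.id instead of a hard-coded ID.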
Let's go over the .tf file really quick. provider blocks are used to configure named providers, aws in our case. There are a whole bunch of providers available. Terraform files can have many providers configured as well, so you can begin to see how very complex infrastructure topologies can be described.
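As a quick illustration (not something this post's config needs), a second aws provider block with an alias would let the same configuration target another region:
provider "aws" {
  alias  = "east"
  region = "us-east-1"
}

# Individual resources opt in with: provider = aws.east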
The resource block is used to describe a resource our architecture requires; in this case we need an aws_security_group, an aws_key_pair, and an aws_instance.
Finally, output blocks allow you to retrieve specific attributes of your resources. We will need to know the public_dns (the public hostname) of our instance when we deploy our app.
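Once the configuration has been applied (we'll do that below), an output can be read again at any time without re-running a plan:
$ terraform output webhook_processor_host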
Initializing Terraform
Now we are ready to confirm our .tf file is correct. cd into ./rel/terraform/ and run the following:
$ terraform init
This will prepare Terraform, including downloading any provider plugins it needs; you should see it download the provider.aws plugin.
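Before applying anything, it can be worth letting Terraform check the configuration's formatting and syntax:
$ terraform fmt
$ terraform validate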
Creating Infrastructure
After successful initialization we can create our infrastructure. From the same directory (./rel/terraform/), run:
$ terraform apply
You will see an overview of the execution plan along with a prompt to confirm applying the plan, something like:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
...
Notice + create; this indicates the type of action Terraform will perform. Since our infrastructure does not exist yet, it assumes we are creating new resources.
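If you'd like to review a plan without being prompted to apply it, terraform plan performs just the dry run; and once an apply succeeds, you can list the resources Terraform is now tracking:
$ terraform plan
$ terraform state list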
Changing Infrastructure
To get a little more comfortable with Terraform, let's change something and see how that is handled. Update the ami for our aws_instance:
resource "aws_instance" "webhook_processor" {
ami = "ami-29918949" # Different resource
instance_type = "t2.micro" # Should be eligible for free
key_name = aws_key_pair.webhook_processor_kp.key_name
security_groups = [aws_security_group.webhook_processor_sg.name]
}
Again, run:
$ terraform apply
And again, we see an overview of the execution plan with a prompt to confirm:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
...
Notice -/+ destroy and then create replacement; changing the instance's AMI requires destroying the existing instance and creating a new one. Enter yes at the prompt to execute the plan.
Destroying Infrastructure
Lastly, let's see how Terraform can destroy the resources it manages. Run the following:
$ terraform destroy
Here's the overview of that command:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
...
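Enter yes to confirm. Once the destroy completes, Terraform's state should be empty, which you can verify with the following (it should print nothing):
$ terraform state list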
We've now gone through the general workflow with Terraform. Go ahead and recreate the EC2 instance so we have something to work with in part 3. Finally, let's confirm SSH access is permitted. After you recreate the instance, test it out using the webhook_processor_host value that was printed out; mine looks like:
ssh admin@ec2-xx-xxx-x-xxx.us-west-1.compute.amazonaws.com -i ../webhook_processor_key "cat ~/.ssh/authorized_keys"
You should see the webhook_processor_deploy key printed out.
Conclusion
We've seen how to describe our infrastructure topology with Terraform and how to apply that to our providers, AWS in this case. You could just as easily apply this to other providers, DigitalOcean for example.
Up next, we will provision our newly created EC2 instance with Ansible and deploy our app.
As always, the code is available on GitHub: https://github.com/jonlunsford/webhook_processor