DEV Community

Victor Okonkwo

Deploying Configurable and Clustered Web Servers Using Terraform

In this post, I’ll walk you through deploying a configurable web server and a clustered web server using Terraform. We'll explore the key differences between these two approaches and demonstrate how Terraform simplifies infrastructure-as-code deployments.

What is Terraform?

Terraform is an open-source infrastructure as code (IaC) tool that allows you to define and provision infrastructure using configuration files. By leveraging a declarative approach, Terraform enables you to manage and automate the provisioning of resources across multiple cloud providers, such as AWS, Azure, and Google Cloud.

For this tutorial, we will focus on AWS as our cloud provider, where we’ll deploy EC2 instances for both configurations. The key difference between the two is in their scalability and architecture. Let’s break it down into two parts:

Part 1: Deploying a Configurable Web Server

A configurable web server is an EC2 instance that you can configure with parameters like instance type, region, and key pair. This type of setup is great for simple, low-traffic applications or testing purposes.

Steps to Deploy a Configurable Web Server:

  1. Set up the environment: Ensure that you have Terraform installed and your AWS credentials configured. You can use the AWS CLI for this:
   aws configure
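Before moving on, you can verify that both tools are set up correctly (this assumes Terraform and the AWS CLI are already on your PATH):

```shell
# Confirm Terraform is installed and on your PATH
terraform -version

# Confirm the AWS CLI can authenticate with the configured credentials
aws sts get-caller-identity
```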
  2. Define the Terraform Configuration: Create a main.tf file that specifies the resources we need, such as:
    • EC2 instance: A web server that will serve content.
    • Security group: To allow HTTP traffic on port 80.

Here’s an example of what the main.tf file might look like:

   provider "aws" {
     region = "us-east-1"  # Set the AWS region
   }

   variable "instance_type" {
     default = "t3.micro"
   }

   variable "ami_id" {
     default = "ami-075449515af5df0d1"  # Ubuntu 20.04 AMI
   }

   variable "key_pair_name" {
     default = "bincom"
   }

   resource "aws_security_group" "allow_http" {
     name_prefix = "allow_http"
     ingress {
       from_port   = 80
       to_port     = 80
       protocol    = "tcp"
       cidr_blocks = ["0.0.0.0/0"]
     }
     egress {
       from_port   = 0
       to_port     = 0
       protocol    = "-1"
       cidr_blocks = ["0.0.0.0/0"]
     }
   }

   resource "aws_instance" "web" {
     ami           = var.ami_id
     instance_type = var.instance_type
     key_name      = var.key_pair_name
     vpc_security_group_ids = [aws_security_group.allow_http.id]

     tags = {
       Name = "ConfigurableWebServer"
     }
   }

   output "web_server_public_ip" {
     value = aws_instance.web.public_ip
   }
  3. Run Terraform Commands:
    • Initialize Terraform:
      terraform init
    • Plan the Deployment:
      terraform plan
    • Apply the Configuration:
      terraform apply

Terraform will provision the web server with the defined configurations, and after the deployment is complete, it will output the public IP of the EC2 instance, which you can use to access your web server.
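Note that the configuration above doesn't install any software on the instance, so nothing will actually respond on port 80 yet. A minimal sketch of a user_data block that installs Apache at first boot (assuming an Ubuntu AMI, as in the variable defaults) could be added inside the aws_instance resource:

```hcl
  # Inside resource "aws_instance" "web" — installs Apache when the instance boots
  user_data = <<-EOF
    #!/bin/bash
    sudo apt update
    sudo apt install -y apache2
    sudo systemctl start apache2
  EOF
```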

Key Takeaways:

  • Customizable Parameters: You can easily change parameters like instance type, region, and AMI ID.
  • Single EC2 Instance: This configuration creates only one instance, suitable for low-traffic applications.
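Because the instance type, AMI, and key pair are declared as variables, they can be overridden at apply time without editing main.tf. For example (the values here are illustrative):

```shell
# Override the default instance type and key pair for this run only
terraform apply -var="instance_type=t3.small" -var="key_pair_name=my-key"
```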

Part 2: Deploying a Clustered Web Server

A clustered web server setup involves multiple EC2 instances running behind a load balancer. This configuration is ideal for scaling applications and ensuring high availability. If one instance fails, the load balancer can route traffic to healthy instances in the cluster, making the setup more resilient.

Steps to Deploy a Clustered Web Server:

  1. Extend the Terraform Configuration: To create a clustered web server, we will:
    • Create an Auto Scaling Group (ASG) to manage multiple EC2 instances.
    • Add an Elastic Load Balancer (ELB) to distribute incoming traffic across the instances in the cluster.

Here’s the updated main.tf configuration for the clustered setup:

   resource "aws_security_group" "allow_http" {
     name_prefix = "allow_http"
     ingress {
       from_port   = 80
       to_port     = 80
       protocol    = "tcp"
       cidr_blocks = ["0.0.0.0/0"]
     }
     egress {
       from_port   = 0
       to_port     = 0
       protocol    = "-1"
       cidr_blocks = ["0.0.0.0/0"]
     }
   }

   resource "aws_launch_configuration" "web" {
     image_id        = var.ami_id
     instance_type   = var.instance_type
     key_name        = var.key_pair_name
     security_groups = [aws_security_group.allow_http.id]

     user_data = <<-EOF
       #!/bin/bash
       sudo apt update
       sudo apt install -y apache2
       sudo systemctl start apache2
     EOF
   }

   resource "aws_autoscaling_group" "web" {
     desired_capacity     = 2
     max_size             = 3
     min_size             = 1
     launch_configuration = aws_launch_configuration.web.name
     availability_zones   = ["us-east-1a", "us-east-1b"]
     load_balancers       = [aws_elb.web.name]
     health_check_type    = "ELB"
   }

   resource "aws_elb" "web" {
     name               = "web-load-balancer"
     availability_zones = ["us-east-1a", "us-east-1b"]

     listener {
       instance_port     = 80
       instance_protocol = "HTTP"
       lb_port           = 80
       lb_protocol       = "HTTP"
     }

     health_check {
       target              = "HTTP:80/"
       interval            = 30
       timeout             = 5
       healthy_threshold   = 2
       unhealthy_threshold = 2
     }
   }

   output "load_balancer_dns" {
     value = aws_elb.web.dns_name
   }
  2. Run Terraform Commands:
    • Initialize Terraform:
      terraform init
    • Plan the Deployment:
      terraform plan
    • Apply the Configuration:
      terraform apply

Once the Terraform plan is applied, you will have:

  • An Auto Scaling Group that maintains the desired number of healthy instances in the cluster, scaling between its minimum and maximum sizes.
  • An Elastic Load Balancer that will distribute traffic across the instances in the Auto Scaling Group.
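Once the instances pass their health checks, you can fetch the load balancer's DNS name from the Terraform output and test it (assuming the Apache default page is being served by the instances):

```shell
# Read the DNS name exported by the load_balancer_dns output
terraform output load_balancer_dns

# Request the default page through the load balancer
curl http://$(terraform output -raw load_balancer_dns)
```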

Key Takeaways:

  • Multiple Instances: The clustered web server configuration spins up multiple EC2 instances, ensuring high availability and fault tolerance.
  • Elastic Load Balancer (ELB): ELB distributes traffic across multiple EC2 instances, providing a balanced load.
  • Auto Scaling: The Auto Scaling Group can adjust the number of instances between its minimum and maximum sizes, which ensures scalability.
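As written, the Auto Scaling Group only maintains its desired capacity; to actually scale on demand, you would attach a scaling policy. A minimal target-tracking sketch (the 50% CPU target is an illustrative value) might look like:

```hcl
# Scales the ASG up or down to keep average CPU utilization near the target
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.web.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50.0
  }
}
```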

Conclusion

In this post, we learned how to deploy both a configurable web server and a clustered web server using Terraform. While the configurable web server is a great option for simple or low-traffic websites, the clustered web server provides higher availability and scalability, making it suitable for production-grade applications.

By leveraging Terraform’s Infrastructure as Code (IaC) capabilities, we can automate the deployment of web servers, making the process reproducible and easy to manage across multiple environments.

Happy Terraforming!

