With the Twemproxy Docker image created, we can now turn our attention to provisioning the resources required to deploy and run a Memorystore Redis cluster in GCP with a proxy. Using infrastructure as code (IaC) to manage your cloud resources is a great way to enforce consistency across environments and to ensure visibility into all of your infrastructure.
Terraform is one of the more popular IaC solutions and is what we'll be using to define the infrastructure required for a Redis cluster in GCP.
It would take several blog posts to go through every detail of each resource, so we will focus on a few key details for each one. Some dependencies, like the network and subnetwork required for the load balancer or the details regarding the client services that will be making requests to the proxy server, will not be explicitly included in the example code. The assumption will be that these resources are already deployed. This blog post will focus on the additional resources required to add a caching layer.
Memorystore
The first resources we will define are the Memorystore instances.
locals {
  reserved_ips = ["10.0.0.24/29", "10.0.0.32/29", "10.0.0.40/29"]
}

resource "google_redis_instance" "redis_memorystore" {
  count              = length(local.reserved_ips)
  provider           = google-beta
  region             = "us-east1"
  name               = "memorystore${count.index + 1}"
  tier               = "BASIC"
  memory_size_gb     = 1
  redis_version      = "REDIS_5_0"
  authorized_network = var.network.id
  reserved_ip_range  = local.reserved_ips[count.index]
  display_name       = "Memorystore Redis Cache ${count.index + 1}"
}
For this example we have a list of CIDR ranges, one for each Memorystore instance. The IP address can be whatever you would like, but the network prefix must be /29. You can optionally remove the reserved_ip_range argument and let GCP assign the CIDR ranges, in which case you can set count to an integer value for the number of instances desired.
The Memorystore instances created by this resource block will work well for testing but will likely require a few changes for production environments. For production you should consider changing the tier to STANDARD_HA and the memory_size_gb to a size that satisfies your application's requirements.
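As a rough sketch, a production-leaning variant of the resource above might look like the following. The count of 3 and the 5 GB memory size are illustrative values only, and letting GCP assign the CIDR ranges is optional, as noted earlier.

resource "google_redis_instance" "redis_memorystore" {
  # With no reserved_ip_range, count becomes a plain integer and GCP picks the ranges.
  count              = 3
  provider           = google-beta
  region             = "us-east1"
  name               = "memorystore${count.index + 1}"
  tier               = "STANDARD_HA" # adds a replica and automatic failover
  memory_size_gb     = 5             # size this to your application's working set
  redis_version      = "REDIS_5_0"
  authorized_network = var.network.id
  display_name       = "Memorystore Redis Cache ${count.index + 1}"
}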
Managed Instance Group
Next we will define the resources required to deploy the Twemproxy server as a managed instance group of VMs (virtual machines) running our containerized proxy. There are quite a few resources needed to deploy a complete managed instance group solution. We'll start with the instance template and managed instance group resources.
resource "google_compute_instance_template" "twemproxy_template" {
region = "us-east1"
name_prefix = "twemproxy-"
machine_type = "f1-micro"
tags = ["fw-allow-lb-and-hc", "fw-allow-twemproxy-6380"]
disk {
source_image = "cos-cloud/cos-stable"
boot = true
}
network_interface {
network = var.network.name
subnetwork = var.subnetwork.name
}
lifecycle {
create_before_destroy = true
}
}
resource "google_compute_instance_group_manager" "twemproxy_instance_group" {
zone = "us-east1-b"
name = "twemproxy-mig"
base_instance_name = "twemproxy"
target_size = 0
named_port {
name = "proxy"
port = 6380
}
version {
name = "twemproxy-template"
instance_template = google_compute_instance_template.twemproxy_template.self_link
}
lifecycle {
ignore_changes = [
version,
target_size
]
}
}
The instance template resource is primarily a placeholder, allowing us to provision the infrastructure for the managed instance group independently of the deployment of our code. The template is required for an initial deployment of a managed instance group but will be replaced every time our application code changes and our proxy service is redeployed.
The creation and deployment of the real instance template, the one that includes our Docker image, will be discussed in the next blog post. When creating a new template as part of the deployment process, we will need to include the same tags listed on the placeholder. These tags are used to attach the firewall rules defined below to the VM instances created from our instance template.
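Purely as an illustration of what that deployment-time template might look like, here is a sketch that reuses the placeholder's tags and adds a container declaration for Container-Optimized OS. The image path and template name are hypothetical, and the actual creation of this template is covered in the next post.

resource "google_compute_instance_template" "twemproxy_deploy_template" {
  region       = "us-east1"
  name_prefix  = "twemproxy-deploy-"
  machine_type = "f1-micro"

  # Same tags as the placeholder so the firewall rules below still apply.
  tags = ["fw-allow-lb-and-hc", "fw-allow-twemproxy-6380"]

  disk {
    source_image = "cos-cloud/cos-stable"
    boot         = true
  }

  network_interface {
    network    = var.network.name
    subnetwork = var.subnetwork.name
  }

  metadata = {
    # Container-Optimized OS reads this declaration and starts the container on boot.
    # The image path below is a placeholder for wherever your Twemproxy image is pushed.
    gce-container-declaration = yamlencode({
      spec = {
        containers = [{
          name  = "twemproxy"
          image = "gcr.io/your-project/twemproxy:latest"
        }]
        restartPolicy = "Always"
      }
    })
  }
}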
The managed instance group starts with a target size of 0 instances so that we do not incur unnecessary cost before deploying the proxy server. The instance group also includes lifecycle information that ensures Terraform will ignore changes to the instance template version and number of instances running.
If we do not tell Terraform to ignore these changes, subsequent runs of terraform apply would reduce the number of instances in our managed instance group back to 0 and would swap the template currently in use back to the placeholder.
Load Balancer
Now we need to create an internal load balancer for the managed instance group. Internal load balancers are made up of three resources: a forwarding rule, a backend service, and a health check. The forwarding rule defines the client-facing properties of the load balancer, including the IP address and the available ports. The backend service references the health check for the instances, defines how the load balancer distributes traffic to them, and contains a reference to the managed instance group resource.
resource "google_compute_forwarding_rule" "twemproxy_internal_lb" {
provider = google-beta
region = "us-east1"
name = "twemproxy-internal-lb"
ip_protocol = "TCP"
load_balancing_scheme = "INTERNAL"
backend_service = google_compute_region_backend_service.twemproxy_backend_service.self_link
ip_address = "10.0.0.50"
ports = ["6380"]
network = var.network.name
subnetwork = var.subnetwork.name
}
resource "google_compute_region_backend_service" "twemproxy_backend_service" {
name = "twemproxy-backend-service"
region = "us-east1"
health_checks = [google_compute_health_check.twemproxy_health_check.self_link]
backend {
group = google_compute_instance_group_manager.twemproxy_instance_group.instance_group
balancing_mode = "CONNECTION"
}
}
resource "google_compute_health_check" "twemproxy_health_check" {
name = "twemproxy-health-check"
timeout_sec = 1
check_interval_sec = 1
tcp_health_check {
port = 6380
}
}
Firewall Rules
The last resources we'll need to define are a set of firewall rules.
resource "google_compute_firewall" "allow_lb_and_hc" {
name = "fw-allow-lb-and-hc"
network = var.network.name
source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]
target_tags = ["fw-allow-lb-and-hc"]
allow {
protocol = "tcp"
}
}
resource "google_compute_firewall" "allow_twemproxy" {
name = "fw-allow-twemproxy-6380"
network = var.network.name
source_ranges = ["0.0.0.0/0"]
target_tags = ["fw-allow-twemproxy-6380"]
allow {
protocol = "tcp"
ports = ["6380"]
}
}
The first firewall rule defined above allows both the load balancer and the health check to communicate with the VMs in your managed instance group. The IP ranges assigned to the source_ranges argument are provided by Google in their documentation for setting up a load balancer and need to be exactly as defined in the example code.
The second firewall rule will allow other resources in your network to communicate with the Twemproxy server. You will need to change the source_ranges attribute to match the IP addresses of the applications that will use the proxy. In addition to IP ranges and tags, there are other options for specifying the source and target resources of a firewall rule; you can find information on these options and their restrictions in the GCP firewall rules documentation.
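For example, a tighter version of the rule above might scope the source to the subnet your client services run in, or to a tag attached to those clients. The CIDR range and tag name below are hypothetical.

resource "google_compute_firewall" "allow_twemproxy" {
  name        = "fw-allow-twemproxy-6380"
  network     = var.network.name
  target_tags = ["fw-allow-twemproxy-6380"]

  # Replace the wide-open 0.0.0.0/0 range with the subnet your clients live in,
  # or match on a network tag attached to the client instances instead.
  source_ranges = ["10.0.1.0/24"]
  # source_tags = ["twemproxy-client"]

  allow {
    protocol = "tcp"
    ports    = ["6380"]
  }
}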
With all of the required resources defined, we can now run terraform validate and terraform plan to make sure everything was done correctly and to see the final list of resources Terraform will provision.
Once everything looks correct, we can run terraform apply -auto-approve. Terraform will begin provisioning all of our resources and let us know if the deployment was a success or if there are any issues to resolve.
In the next blog post we will deploy the Twemproxy server to our newly provisioned cloud resources as well as make a few changes to the code of our client services so that they are able to use the proxy server instead of directly communicating with Memorystore.