Michael Mekuleyi
Designing a Kubernetes Cluster in GCP using OSS terraform modules

Introduction

In this article, we will leverage Terraform modules from the Google Cloud Foundation Toolkit to deploy a private compute network and a Kubernetes cluster. The Google Cloud Foundation Toolkit is a set of tools, modules, and packages that follow Google's best practices for deploying and maintaining infrastructure on the Google Cloud Platform. First, we will deploy a private compute network, and then we will deploy a Kubernetes cluster in the same network using only open-source modules. This article assumes you have a working knowledge of Terraform and are at least conversant with the Google Cloud Platform.

Project structure

The entire project is available in this GitHub repository (https://github.com/Monarene/deploy-gcp-k8s-modules). The folder structure used in the project is described below,

  • auth.tf: Contains code for creating the service account and granting it the necessary IAM permissions.
  • gcloud.sh: Contains the commands that enable the container and compute services on Google Cloud.
  • gke.tf: Contains code for deploying Google Kubernetes Engine using the open-source module from the Google Cloud Foundation Toolkit.
  • outputs.tf: Contains the outputs of the configuration.
  • provider.tf: Contains code that initializes the entire configuration.
  • terraform.tfvars: Sets values for the variables declared in variables.tf.
  • variables.tf: Contains the variable declarations used in the configuration.
  • vpc.tf: Contains code for deploying the private compute network in Google Cloud using the open-source foundation toolkit.
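For orientation, the declarations in variables.tf presumably look something like the sketch below, based on the variables referenced throughout this article; the exact types, descriptions, and defaults in the repository may differ:

```hcl
variable "project_id" {
  description = "The GCP project to deploy into"
  type        = string
}

variable "region" {
  description = "The region for the network and cluster"
  type        = string
}

variable "services" {
  description = "APIs to enable on the project"
  type        = list(string)
}

variable "service_account" {
  description = "Service account name and the IAM roles to grant it"
  type = object({
    name  = string
    roles = list(string)
  })
}
```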

Authentication

I genuinely consider this section extremely important, if not the most important, because authenticating with Google Cloud can be frustrating if not done properly. First, head over to the official documentation to read how to authenticate with Google Cloud (https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/getting_started). Ensure your project is configured with the appropriate permissions, and take note that the service account you are using must have the Service Account Token Creator role, as we will be creating service accounts on the fly.

Next, head over to gcloud.sh and run the script to enable both the compute and container APIs on your Google Cloud account.



gcloud services enable container.googleapis.com
gcloud services enable compute.googleapis.com



You can choose to run these commands individually or run the file as a script.

Next, head over to auth.tf to see how we create the service account and grant it the necessary IAM permissions.



resource "google_project_service" "this" {
  for_each           = toset(var.services)
  service            = "${each.key}.googleapis.com"
  disable_on_destroy = false
}

resource "google_service_account" "this" {
  account_id   = var.service_account.name
  display_name = "${var.service_account.name} Service Account"
}

resource "google_project_iam_member" "this" {
  project = var.project_id
  count   = length(var.service_account.roles)
  role    = "roles/${var.service_account.roles[count.index]}"
  member  = "serviceAccount:${google_service_account.this.email}"
}



Here we define a list of services to enable, create a service account, and add an IAM member binding for each role so that the service account is properly authorised to deploy our compute network and cluster. You can view the list of services enabled in terraform.tfvars.



services = [
  "cloudresourcemanager",
  "compute",
  "iam",
  "servicenetworking",
  "container"
]



You don't need to run anything manually here; everything will be created when we apply our Terraform configuration.
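Alongside the services list, terraform.tfvars also sets the service_account variable. A sketch of what that value might look like is shown below; the account name and the exact roles here are illustrative assumptions, so check the repository for the real list:

```hcl
service_account = {
  # Hypothetical name; the repository may use a different one
  name = "gke-cluster-sa"
  roles = [
    "compute.viewer",
    "container.clusterAdmin",
    "iam.serviceAccountUser"
  ]
}
```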

Deploying the Compute network

To deploy the compute network, head over to vpc.tf. Here we define the module for the compute network and the module version to use, and make the module depend on the compute API being enabled. We deploy a single subnet with secondary ranges for pods and services. Finally, we add a firewall rule that allows Identity-Aware Proxy (IAP) SSH ingress on port 22 to our nodes.



module "vpc" {
  source  = "terraform-google-modules/network/google"
  version = "5.2.0"

  depends_on = [google_project_service.this["compute"]]

  project_id   = var.project_id
  network_name = var.network.name

  subnets = [
    {
      subnet_name           = var.network.subnetwork_name
      subnet_ip             = var.network.nodes_cidr_range
      subnet_region         = var.region
      subnet_private_access = "true"
    },
  ]

  secondary_ranges = {
    (var.network.subnetwork_name) = [
      {
        range_name    = "${var.network.subnetwork_name}-pods"
        ip_cidr_range = var.network.pods_cidr_range
      },
      {
        range_name    = "${var.network.subnetwork_name}-services"
        ip_cidr_range = var.network.services_cidr_range
      },
    ]
  }

  firewall_rules = [
    {
      name      = "${var.network.name}-allow-iap-ssh-ingress"
      direction = "INGRESS"
      ranges    = ["35.235.240.0/20"]
      allow = [{
        protocol = "tcp"
        ports    = ["22"]
      }]
    },
  ]
}



Designing the Kubernetes Cluster

Now to the exciting part: head over to gke.tf to see how we deploy the Kubernetes cluster. First, we use a data source to grab the default Google client config, then we initialise the Kubernetes provider with its access token and the cluster CA certificate.



data "google_client_config" "default" {
}
provider "kubernetes" {
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}



Next, we define the Kubernetes module. We set the network to the VPC created by the earlier module and grab the subnet from that same module. We enable horizontal pod autoscaling and HTTP load balancing. We also remove the default node pool and override it with our own custom node pool definition. The rest of the values are the Google-advised defaults to get our cluster production-ready.



module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google"
  version = "23.3.0"

  project_id = var.project_id
  region     = var.region
  name       = var.gke.name
  regional   = var.gke.regional
  zones      = var.gke.zones

  network           = module.vpc.network_name
  subnetwork        = local.subnetwork_name
  ip_range_pods     = "${local.subnetwork_name}-pods"
  ip_range_services = "${local.subnetwork_name}-services"

  service_account = google_service_account.this.email

  node_pools = [
    {
      name               = var.node_pool.name
      machine_type       = var.node_pool.machine_type
      disk_size_gb       = var.node_pool.disk_size_gb
      spot               = var.node_pool.spot
      initial_node_count = var.node_pool.initial_node_count
      max_count          = var.node_pool.max_count
      disk_type          = "pd-ssd"
    },
  ]

  # Fixed values
  network_policy             = true
  horizontal_pod_autoscaling = true
  http_load_balancing        = true
  create_service_account     = false
  initial_node_count         = 1
  remove_default_node_pool   = true
}


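The `local.subnetwork_name` referenced above is presumably defined in a locals block elsewhere in the configuration. A minimal sketch, assuming it simply reuses the subnet name declared in the network variable (the repository's actual expression may differ):

```hcl
locals {
  # Assumed definition: reuse the subnet name from the network variable,
  # so the secondary range names stay consistent with vpc.tf
  subnetwork_name = var.network.subnetwork_name
}
```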

Deploying the services

This is definitely my favorite part. Before you deploy, make sure to configure your project ID in terraform.tfvars.



project_id = "<PROJECT_ID>"



To deploy, first initialize the entire configuration by running terraform init,



michael@monarene:~$ terraform init



Next, we preview the changes to be deployed by running terraform plan on the configuration,



michael@monarene:~$ terraform plan



Terraform Plan on Kubernetes Configuration

Next we deploy the configuration by running the following,



michael@monarene:~$ terraform apply -var-file=terraform.tfvars --auto-approve



If the deployment is successful, log in to the console to verify it. First, we check Kubernetes Engine to verify that our cluster is properly deployed.

Kubernetes Deployed Cluster

Next, we check the VPC Network tab to verify that the Compute network is deployed correctly.

Compute Network Deployed in GCP

Lastly, we check our node pool to verify that the instances were indeed created,

Node Pool Created in GCP

Yayyyy! Our deployment is successful and we have a working Kubernetes Cluster in a Private Compute Network.
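If you prefer the command line, you can also verify the cluster from your terminal with gcloud and kubectl. The cluster name and region below are placeholders; use the values from your terraform.tfvars:

```shell
# Fetch kubeconfig credentials for the new cluster (placeholders, not real values)
gcloud container clusters get-credentials <CLUSTER_NAME> --region <REGION>

# List the nodes created by the custom node pool
kubectl get nodes -o wide
```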

Lastly, please ensure to delete all the resources created by running the following command,



michael@monarene:~$ terraform destroy --auto-approve



*Please do this so as not to attract additional charges on the deployment.*

Conclusion

Thank you for following me on this deployment journey. Feel free to extend this deployment and even raise a PR against the main repository (https://github.com/Monarene/deploy-gcp-k8s-modules). If you enjoyed this article, please share it and star the repository. Thank you!
