Audu Ephraim

Deploying a Kubernetes Cluster on Azure Kubernetes Service (AKS) with Terraform

Introduction

Kubernetes, often abbreviated as K8s, is an open-source platform designed to automate deploying, scaling, and operating application containers. It allows you to manage containerized applications efficiently across a cluster of machines.

Azure Kubernetes Service (AKS) is a managed Kubernetes service offered by Microsoft as part of the Azure cloud platform. It gives organizations a way to deploy and manage their containerized applications at scale, leveraging the powerful features of Kubernetes.

By providing a fully managed service that handles many of the underlying infrastructure and management chores, AKS makes it easier to build and operate Kubernetes clusters.

Because of this, businesses can concentrate on their apps and services rather than worrying about the infrastructure as a whole.

Terraform is an open-source infrastructure-as-code software tool created by HashiCorp. It allows users to define and provision their infrastructure on different cloud platforms and define services using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON.

In this article, I will discuss how I created an AKS cluster on Azure entirely with Terraform, deployed an NGINX image, and set up Prometheus and Grafana for monitoring and alerting.

This article assumes the reader has a basic understanding of Azure and Kubernetes, and has the Azure CLI installed and signed in.
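For readers who still need to sign in, a minimal check might look like this (assuming the Azure CLI is already installed; az login opens a browser for authentication, and az account show confirms the active subscription):

az login
az account show --output table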

Creating the cluster

To begin, I created a directory named azure-aks to store my Terraform scripts. Then I created a main.tf file to hold the Terraform configuration for my resources.

In the main.tf file, I began by defining the providers I would need for this project.

Terraform Providers are plugins that implement resource types and data sources. They serve as a bridge between Terraform and a service or platform, such as Azure, AWS, Kubernetes, etc.

In my case, I need the Azure provider to communicate with Azure services, the Kubernetes provider to create Kubernetes resources, and the Helm provider to access the Helm charts for Prometheus and Grafana.


terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.107.0"
    }

    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.30.0"
    }

    helm = {
      source  = "hashicorp/helm"
      version = "2.13.2"
    }
  }
}
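One thing worth noting: the azurerm provider also requires an (empty) features block before any Azure resources can be created. It appears in the full configuration at the end of this article:

provider "azurerm" {
  features {}
}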

After that, I ran terraform init so Terraform could download the provider plugins and the dependencies they need.
Next, I created the resource group and the Kubernetes cluster:

resource "azurerm_resource_group" "aks-resource" {
 name     = "aks-resources"
 location = "France Central"
}


resource "azurerm_kubernetes_cluster" "test_cluster" {
 name                = "example-aks1"
 location            = azurerm_resource_group.aks-resource.location
 resource_group_name = azurerm_resource_group.aks-resource.name
 dns_prefix          = "testaks"


 default_node_pool {
   name       = "default"
   node_count = 2
   vm_size    = "Standard_D2_v2"
 }


 identity {
   type = "SystemAssigned"
 }


 tags = {
   Environment = "Production"
 }
}

This creates a resource group named aks-resources in the “France Central” region and an Azure Kubernetes Service (AKS) cluster named “example-aks1” within that resource group. The cluster has a default node pool with 2 nodes (which can be increased to suit your needs) of size “Standard_D2_v2”, and it uses a system-assigned managed identity. The cluster is also tagged “Environment: Production”.
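As a small refinement (my suggestion, not part of the original configuration), the node count could be pulled out into a variable so it can be tuned without editing the resource block; var.node_count here is a hypothetical name:

variable "node_count" {
  description = "Number of nodes in the AKS default node pool"
  type        = number
  default     = 2
}

Inside default_node_pool, node_count = 2 would then become node_count = var.node_count.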

Once this was done, I ran terraform plan to preview the resources that would be provisioned, then terraform apply to create them.

The reason I applied at this stage is that, in order to get the kubeconfig file, I had to run
az aks get-credentials --resource-group <ResourceGroupName> --name <AKSClusterName>
(replacing ResourceGroupName and AKSClusterName with my resource group name and cluster name).

The kubeconfig file is used to allow access to the Kubernetes clusters. It contains the necessary details to connect to the cluster, such as cluster API server addresses, user credentials, and namespaces.

This file is used by kubectl and other Kubernetes client applications to communicate with the cluster’s API server and manage Kubernetes resources.

Essentially, it’s like a key that allows you to access and control your Kubernetes cluster on the cloud.

After running the az aks get-credentials --resource-group <ResourceGroupName> --name <AKSClusterName> command, the terminal prints where the kubeconfig file was saved.
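With the names used in this article, that would be (this merges the cluster credentials into the default kubeconfig at ~/.kube/config):

az aks get-credentials --resource-group aks-resources --name example-aks1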

data "azurerm_kubernetes_cluster" "test_cluster" {
 name = azurerm_kubernetes_cluster.test_cluster.name
 resource_group_name = azurerm_resource_group.aks-resource.name
}

The data block fetches information about an existing AKS cluster. It retrieves details about the cluster with the specified name and resource group so they can be used elsewhere in the Terraform configuration.

resource "local_file" "kubeconfig" {
 content = data.azurerm_kubernetes_cluster.test_cluster.kube_config_raw
 filename = "/home/ephraim/.kube/config"
 }

The resource "local_file" "kubeconfig" block creates a local file that contains the kubeconfig of the retrieved AKS cluster. This kubeconfig is necessary to interact with my Kubernetes cluster using kubectl or other Kubernetes tools.

The content of the file is the raw kubeconfig data from the AKS cluster, and it’s being saved to a specified path on my local machine (/home/ephraim/.kube/config)
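Note that this path is specific to my machine. A more portable variation (an assumption of mine, not the author's setup) is to write the file into the project directory using Terraform's built-in path.module:

resource "local_file" "kubeconfig" {
  # Write the kubeconfig next to the Terraform code instead of
  # overwriting the user's global ~/.kube/config
  content  = data.azurerm_kubernetes_cluster.test_cluster.kube_config_raw
  filename = "${path.module}/kubeconfig"
}

The kubernetes and helm providers below would then pick up the same path through local_file.kubeconfig.filename.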

resource "null_resource" "wait_for_kubeconfig" {
 provisioner "local-exec" {
   command = "sleep 10"


 }


 depends_on = [ local_file.kubeconfig ]
 }


provider "kubernetes" {
 config_path = local_file.kubeconfig.filename
}


provider "helm" {
   kubernetes {
     config_path = local_file.kubeconfig.filename
   }
}

The null_resource block introduces a delay into the Terraform run, pausing execution for 10 seconds to ensure that the kubeconfig file is fully written.
The provider "kubernetes" block configures the Kubernetes provider for Terraform, which allows me to manage my Kubernetes resources with Terraform. It uses the kubeconfig file created by the local_file.kubeconfig resource to connect to my AKS cluster.
Similarly, the provider "helm" block configures the Helm provider, which lets me deploy Helm charts to my Kubernetes cluster. It also uses the same kubeconfig file for connectivity.
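As an aside, the kubernetes and helm providers can also be configured directly from the AKS resource's credentials, avoiding the intermediate file and the sleep altogether. This is a sketch based on the attributes the azurerm provider exposes, not the approach used in this article:

provider "kubernetes" {
  # Pull connection details straight from the cluster resource
  host                   = azurerm_kubernetes_cluster.test_cluster.kube_config[0].host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.test_cluster.kube_config[0].client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.test_cluster.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.test_cluster.kube_config[0].cluster_ca_certificate)
}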

resource "kubernetes_namespace" "test_namespace" {
 metadata{
   name = "monitoring"
 }


 depends_on = [ local_file.kubeconfig ]
}

The kubernetes_namespace resource creates a Kubernetes namespace called “monitoring”.

A Kubernetes namespace is a way to divide cluster resources between multiple users: a kind of virtual cluster within the Kubernetes cluster, used to group related resources and keep them organised.
In this namespace, I will be provisioning Prometheus and Grafana.
The depends_on attribute ensures that the namespace is not created until the local_file.kubeconfig resource has been applied, meaning Terraform waits for the kubeconfig file to be available before attempting to create the namespace.
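Once applied, the namespace can be confirmed with kubectl:

kubectl get namespace monitoring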

resource "helm_release" "prom-helm" {
   name = "prometheus"
   repository = "https://prometheus-community.github.io/helm-charts"
   chart      = "prometheus"
   namespace  = kubernetes_namespace.test_namespace.metadata[0].name
   depends_on = [ kubernetes_namespace.test_namespace ]
}


resource "helm_release" "graf-helm" {
   name = "grafana"
   repository = "https://grafana.github.io/helm-charts"
   chart      = "grafana"
   namespace  = kubernetes_namespace.test_namespace.metadata[0].name
   depends_on = [ kubernetes_namespace.test_namespace ]
 }

The resource "helm_release" "prom-helm" block will deploy Prometheus from the specified Helm chart repository. It sets the release name to Prometheus, uses the chart from the Prometheus community Helm repository, and deploys it to the monitoring namespace created by the kubernetes_namespace.test_namespace resource.

The resource "helm_release" "graf-helm" block does the same for Grafana, deploying it from the Grafana Helm chart repository with the release name Grafana to the same monitoring namespace.
Both resources have a depends_on attribute that ensures they are created after the monitoring namespace has been created. Since they are both being deployed in the monitoring namespace.
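After the releases install, they can be inspected with Helm, and (assuming the chart's defaults, where the admin secret is named after the release) Grafana's admin password can be retrieved and the UI reached via a port-forward:

helm list --namespace monitoring
kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo
kubectl port-forward --namespace monitoring svc/grafana 3000:80

Grafana would then be available at http://localhost:3000 with the username admin. Next, I defined the NGINX Deployment: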

resource "kubernetes_deployment" "nginx_depl" {
   metadata {
     name = "nginx-deployment"
     namespace = kubernetes_namespace.test_namespace.metadata[0].name
   }
   spec {
     replicas = 2


     selector {
       match_labels = {
           app = "nginx"
       }
     }


     template {
       metadata {
         labels = {
           app = "nginx"
         }
       }


       spec {
         container {
           name = "nginx"
           image = "nginx:latest"


           port {
             container_port = 80
           }
         }
       }
     }
   }
   depends_on = [ kubernetes_namespace.test_namespace ]
}

This defines a Kubernetes Deployment named “nginx-deployment” in the monitoring namespace. It runs two NGINX pods in my Kubernetes cluster, serving content on port 80.
The depends_on attribute again ensures the Deployment is created only after the namespace exists.
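A quick way to check the rollout (assuming kubectl is pointed at the cluster):

kubectl get pods --namespace monitoring -l app=nginx
kubectl rollout status deployment/nginx-deployment --namespace monitoring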

resource "kubernetes_service" "nginx-service" {
   metadata {
     name = "nginx-service"
     namespace = kubernetes_namespace.test_namespace.metadata[0].name
   }


   spec {
     selector = {
       app = "nginx"
     }


     port {
       port = 80
       target_port = 80
     }


     type = "LoadBalancer"
   }
   depends_on = [ kubernetes_namespace.test_namespace]
}

This resource creates a Kubernetes Service called “nginx-service”, also in the monitoring namespace, of type LoadBalancer. It likewise depends on the namespace created earlier.

The full code looks like this:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.107.0"
    }

    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.30.0"
    }

    helm = {
      source  = "hashicorp/helm"
      version = "2.13.2"
    }
  }
}

provider "azurerm" {
  # Configuration options
  features {}
}

resource "azurerm_resource_group" "aks-resource" {
  name     = "aks-resources"
  location = "France Central"
}

resource "azurerm_kubernetes_cluster" "test_cluster" {
  name                = "example-aks1"
  location            = azurerm_resource_group.aks-resource.location
  resource_group_name = azurerm_resource_group.aks-resource.name
  dns_prefix          = "testaks"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Production"
  }
}

data "azurerm_kubernetes_cluster" "test_cluster" {
  name                = azurerm_kubernetes_cluster.test_cluster.name
  resource_group_name = azurerm_resource_group.aks-resource.name
}

resource "local_file" "kubeconfig" {
  content  = data.azurerm_kubernetes_cluster.test_cluster.kube_config_raw
  filename = "/home/ephraim/.kube/config"
}

resource "null_resource" "wait_for_kubeconfig" {
  provisioner "local-exec" {
    command = "sleep 10"
  }

  depends_on = [local_file.kubeconfig]
}

provider "kubernetes" {
  config_path = local_file.kubeconfig.filename
}

provider "helm" {
  kubernetes {
    config_path = local_file.kubeconfig.filename
  }
}

resource "kubernetes_namespace" "test_namespace" {
  metadata {
    name = "monitoring"
  }

  depends_on = [local_file.kubeconfig]
}

resource "helm_release" "prom-helm" {
  name       = "prometheus"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "prometheus"
  namespace  = kubernetes_namespace.test_namespace.metadata[0].name
  depends_on = [kubernetes_namespace.test_namespace]
}

resource "helm_release" "graf-helm" {
  name       = "grafana"
  repository = "https://grafana.github.io/helm-charts"
  chart      = "grafana"
  namespace  = kubernetes_namespace.test_namespace.metadata[0].name
  depends_on = [kubernetes_namespace.test_namespace]
}

resource "kubernetes_deployment" "nginx_depl" {
  metadata {
    name      = "nginx-deployment"
    namespace = kubernetes_namespace.test_namespace.metadata[0].name
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:latest"

          port {
            container_port = 80
          }
        }
      }
    }
  }

  depends_on = [kubernetes_namespace.test_namespace]
}

resource "kubernetes_service" "nginx-service" {
  metadata {
    name      = "nginx-service"
    namespace = kubernetes_namespace.test_namespace.metadata[0].name
  }

  spec {
    selector = {
      app = "nginx"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }

  depends_on = [kubernetes_namespace.test_namespace]
}

output "client_certificate" {
  value     = azurerm_kubernetes_cluster.test_cluster.kube_config[0].client_certificate
  sensitive = true
}

output "kube_config" {
  value     = azurerm_kubernetes_cluster.test_cluster.kube_config_raw
  sensitive = true
}

Deployment

To deploy the finalized infrastructure to Azure, I ran terraform plan to preview the resources that would be created, followed by terraform apply to provision them. This process may take some time.
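Run from the azure-aks directory:

terraform plan
terraform apply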

Verify Deployment

To verify the deployment, I navigated to the ‘Connect’ tab on the cluster’s page in the Azure portal, where Azure provides the commands for authenticating and connecting to the cluster. After running these commands, I successfully connected to my cluster. To view all my deployments, I ran kubectl get deployments --namespace monitoring.

Everything seemed to be up and running correctly.

Additionally, I needed to verify that the NGINX service was set up correctly. Once the LoadBalancer service had been deployed, Kubernetes provisioned an external IP for it. I ran kubectl get svc nginx-service --namespace monitoring, copied the address from the ‘EXTERNAL-IP’ column, and pasted it into a browser. NGINX was running correctly!
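The check looks like this from the terminal, where <EXTERNAL-IP> stands for whatever address appears in the EXTERNAL-IP column:

kubectl get svc nginx-service --namespace monitoring
curl http://<EXTERNAL-IP>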

Finally, I ran terraform destroy to tear down and remove all the provisioned resources.

Challenges

The major challenge I faced was obtaining and using the kubeconfig file. I later realized that I needed to create the resource group and cluster first, then retrieve the kubeconfig file, before proceeding to create the other resources.

Conclusion

In this article, I’ve walked through the process of deploying a Kubernetes cluster on Azure Kubernetes Service (AKS) using Terraform. I deployed an AKS cluster and configured Kubernetes resources, including NGINX, Prometheus, and Grafana, for my application’s needs.

To take this further, I intend to explore topics such as auto-scaling, continuous deployment pipelines, and multi-region clusters to further enhance my Kubernetes infrastructure.

The GitHub repo with the full code can be found here.
