Doing daily tasks in Kubernetes with Terraform might not be ideal, but when deploying a new cluster you would at least want to have some of your standard applications running right from the start. Using Helm charts to install these is pretty nifty and saves you a lot of time.
I just recently had my first go at setting up Helm charts with Terraform, and it didn't all go according to plan. I had some issues setting up the provider, and later with deploying the charts themselves. The latter, it turns out, was because uninstalling applications through Helm wouldn't remove everything, so the installation just timed out. That's a story for another day, though.
The reason I wanted to write down a walkthrough of setting up Helm with Terraform is both so that anyone else could benefit from it, and as an exercise to help me remember how I managed to get it working.
I assume that you already know what Helm is, and that you know how to set up Kubernetes and Terraform. Be aware that I write this in Terraform 0.12 syntax, so you will get errors running some of this with Terraform 0.11 and earlier.
Set up the helm provider
First, as always, we have to set up the provider. The documentation gives us two examples of how to authenticate to our cluster: through the normal kubeconfig, or by statically defining our credentials. Using the kubeconfig probably works fine, but we wanted to set up the cluster and install Helm charts in the same process. We also wanted this to run through a CI/CD pipeline, so referring to any kind of config file was not going to cut it.
The documentation example looks like this:
```hcl
provider "helm" {
  kubernetes {
    host     = "https://104.196.242.174"
    username = "ClusterMaster"
    password = "MindTheGap"

    client_certificate     = file("~/.kube/client-cert.pem")
    client_key             = file("~/.kube/client-key.pem")
    cluster_ca_certificate = file("~/.kube/cluster-ca-cert.pem")
  }
}
```
This looks fine, but we don't have any of this information or these files until the cluster is created. Since this will run in the same workflow as the one creating the cluster, we need to refer to the resource element instead. Also, username and password were optional, so we tried without them first and had no issues there.
```hcl
provider "helm" {
  version = "~> 0.10.4"

  kubernetes {
    host                   = azurerm_kubernetes_cluster.k8s.kube_config.0.host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)
  }
}
```
The code above is from my Terraform and Kubernetes example that I use for my talk on Terraform. Feel free to look at the entire code on GitHub.
I've been working with Azure Kubernetes Service (AKS), so in my case we have created an AKS cluster with the local name k8s, from which we can extract the host, client certificate, client key and cluster CA certificate.
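For context, a minimal sketch of what that azurerm_kubernetes_cluster resource might look like. The names, region and node pool sizing here are illustrative, not from my actual code, and depending on your azurerm provider version the node pool and authentication blocks may be named differently:

```hcl
resource "azurerm_kubernetes_cluster" "k8s" {
  name                = "example-aks"  # illustrative name
  location            = "westeurope"   # illustrative region
  resource_group_name = "example-rg"   # assumed existing resource group
  dns_prefix          = "exampleaks"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

The important part is that the helm provider reads its credentials from this resource's kube_config attribute, so Terraform knows it has to create the cluster before it can install any charts.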
We are now ready to deploy helm charts by using the helm_release resource!
Taking the helm
Oh, the jokes. Pretty naughtical (nautical, get it?) ...
Dad jokes aside, it's time to install something through Helm. We do this by using the helm_release resource, which can look a bit like this:
```hcl
resource "helm_release" "prometheus" {
  name       = "prometheus"
  chart      = "prometheus-operator"
  repository = "https://kubernetes-charts.storage.googleapis.com/"
  namespace  = "monitoring"
}
```
The chart is the official Stable chart from the fine people over at Helm, but anything that is supported through the helm CLI will work here as well.
Most likely, you will want to pass some configuration along with your Helm chart. There are two ways of doing this: by defining a values file, or by using a set value block. Neither has any real advantage over the other, but if you only have one setting to pass along, creating an entire values file for it would be unnecessary.
Using our above example, here is how to structure the values file and/or using the set value block.
```hcl
resource "helm_release" "prometheus" {
  name       = "prometheus"
  chart      = "prometheus-operator"
  repository = "https://kubernetes-charts.storage.googleapis.com/"
  namespace  = "monitoring"

  # Values file
  values = [
    file("${path.module}/values.yaml")
  ]

  # Set value block
  set {
    name  = "global.rbac.create"
    value = "false"
  }
}
```
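If you haven't used a values file before, the values.yaml referenced above is just an ordinary Helm values file for the chart. A small sketch for prometheus-operator could look like this; the keys shown are examples, so check the chart's own default values for what it actually supports:

```yaml
# values.yaml - example overrides for the prometheus-operator chart
grafana:
  enabled: true      # deploy the bundled Grafana
alertmanager:
  enabled: false     # skip Alertmanager in this cluster
```

Terraform reads the file at plan time, so any change to values.yaml will show up as a diff on the helm_release resource.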
Other settings worth noting
- wait - (Optional) Will wait until all resources are in a ready state before marking the release as successful. It will wait for as long as timeout. Defaults to true.
- timeout - (Optional) Time in seconds to wait for any individual kubernetes operation (like Jobs for hooks). Defaults to 300 seconds.
- recreate_pods - (Optional) Perform pods restart during upgrade/rollback. Defaults to false.
- atomic - (Optional) If set, installation process purges chart on fail. The wait flag will be set automatically if atomic is used. Defaults to false.
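Applied to the release from earlier, these settings could be used like this (the timeout value is just an example; tune it to your cluster):

```hcl
resource "helm_release" "prometheus" {
  name       = "prometheus"
  chart      = "prometheus-operator"
  repository = "https://kubernetes-charts.storage.googleapis.com/"
  namespace  = "monitoring"

  wait    = true  # block until all resources are ready
  timeout = 600   # allow ten minutes instead of the default 300 seconds
  atomic  = true  # purge the chart on a failed install; implies wait
}
```

Setting atomic is handy in CI/CD, since a failed install is rolled back instead of leaving a half-deployed release behind.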