
Philip Miglinci for Glasskube

Originally published at glasskube.eu

👋 Why we say GOODBYE to HELM 👋

TL;DR 🔍

In this blog post, we address our issues with Helm, a popular Kubernetes deployment tool, discussing shortcomings like Custom Resource Definition upgrades, dependency management, user-unfriendly chart creation, values.yaml complexity, and the inability to interact with the Kubernetes API post-installation.


We Want Your Feedback! 🫶

Share your thoughts in the comments below! Let us know what topics you'd like more content on. If this guide helps, click on the cat and leave a star to support us in creating more developer-centric content. Your feedback matters!

Glasskube Github

5 reasons we are trying to build the next generation of deployment automation for Kubernetes.

Do you want to follow our journey? Star Glasskube on GitHub: glasskube/glasskube

As a seasoned DevOps engineer, I found that Helm, the popular Kubernetes deployment tool, has some shocking shortcomings.
In this post, I want to discuss some of the shortcomings that, in my opinion, call for a new vision of a more modern deployment solution.
If you have never heard of Helm before, in a nutshell it is:

A framework for packaging Kubernetes resources (apps) into charts, publishing them, and letting them be easily installed via a command line interface.

The goal of this post is not to hate on the smart and talented people who built Helm, but maybe we can kindle a productive and healthy discussion about where we as the DevOps industry need to go to stay relevant in the coming years. But to fully understand the following, I think it is important to understand which developments led us to where we are today.
So, before we start, let's quickly dive into the history of Helm.

In 2015, a company called Deis created Helm, a package manager for Kubernetes.
Deis is now part of Azure Kubernetes Service, but the original project still exists as Helm Classic. At the same time, Google had a project called Kubernetes Deployment Manager, which was similar to Google Deployment Manager but targeted Kubernetes resources rather than GCP resources.

At the beginning of 2016, the two projects decided to merge, which resulted in the release of Helm v2 later that year.
Helm v2 consists of a client and server component (helm and tiller, respectively), where the latter was the continuation of the original Kubernetes Deployment Manager project.
Tiller was designed to handle deployment states to make it easier for multiple users to use Helm without interfering with each other.

In 2018, Helm launched the Helm Hub as a central place to discover charts which are otherwise found in distributed repositories.
Helm Hub was rebranded to Artifact Hub in 2020.

With the release of Kubernetes 1.6, which had Role Based Access Control (RBAC) enabled by default, production deployments of Helm became more difficult, because of the many security policies that were required by tiller.
So, people started to experiment with a new approach that could do the same thing without requiring a server component, which resulted in the release of Helm v3 in 2019.

As you can see, Helm has a very rich history. It became the gold standard for packaging apps for Kubernetes and is used by DevOps engineers all over the world. But just because Helm is the biggest player on the field doesn't mean it is without flaws. So then, why say goodbye to Helm?

1. Helm doesn't provide a mechanism for upgrading Custom Resource Definitions

Helm does provide a method of packaging Custom Resource Definitions (CRDs) by placing them in a dedicated crds directory, but these are ignored during upgrades!
This is intentional and designed to prevent accidental data loss.
Therefore, upgrading a chart does not automatically upgrade its associated CRDs, which is unexpected for many engineers and leads to more manually involved, error-prone upgrade procedures and other anti-patterns.

To combat this major design flaw, chart developers have come up with several strategies, the most popular of which are:

  • Putting the CRDs into the chart's templates directory (sketched below)
  • Creating separate sub-charts just for CRDs
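
To make the first approach concrete, a CRD shipped as a regular template often looks something like this minimal sketch (the crds.install value and the Example CRD are made up for illustration):

{{- if .Values.crds.install }}
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.mycompany.io
  annotations:
    # keep the CRD when the release is uninstalled to avoid deleting user data
    helm.sh/resource-policy: keep
spec:
  group: mycompany.io
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
{{- end }}

Because the CRD now lives in the templates directory, helm upgrade treats it like any other resource, but you also give up the safety net that the dedicated crds directory was supposed to provide.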

An alternative way to overcome this shortcoming is to not invoke Helm commands directly, but rather use a CI/CD solution, like Flux.
Flux provides a setting to automatically update CRDs during a Helm upgrade, but it is off by default.
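
For illustration, a Flux HelmRelease that opts in to CRD upgrades could look roughly like this (the chart name, repository and namespace are placeholders; the crds policy fields are part of Flux's HelmRelease API, although the exact API version may differ in your setup):

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-app
  namespace: my-namespace
spec:
  interval: 10m
  chart:
    spec:
      chart: my-app
      version: "1.x"
      sourceRef:
        kind: HelmRepository
        name: my-repo
  install:
    crds: Create        # apply CRDs on first install (the default)
  upgrade:
    crds: CreateReplace # also upgrade CRDs; the default is Skip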

2. Helm dependency management

The way to specify a dependency in a Helm chart is to reference it as a sub-chart.
This method can work great for tightly coupled dependencies that you might want to install separately or as part of another chart, but it has some weaknesses that are important to understand:

  • Sub-charts are always installed in the same namespace as the primary release and there is no way to change this.
  • There exists no mechanism to share a dependency between two releases.

For example, our Glasskube Operator Helm Chart depends on kube-prometheus-stack, velero and a bunch of other dependencies, some of which are already installed in many Kubernetes clusters.
To provide an installation experience that is as simple as possible, the chart references all of those dependencies as sub-charts. With this approach, however, they are all bundled into the Glasskube Operator release and cannot be changed or updated separately.
Additionally, there is no way to check whether a dependency is already installed, so a user might end up with two separate installations of the same Helm chart!
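
For illustration, such a dependency declaration in Chart.yaml looks roughly like this (the chart name and version constraints are placeholders, not the actual Glasskube chart):

apiVersion: v2
name: my-operator
version: 0.1.0
dependencies:
  - name: kube-prometheus-stack
    version: "55.x.x"
    repository: https://prometheus-community.github.io/helm-charts
    condition: kube-prometheus-stack.enabled
  - name: velero
    version: "5.x.x"
    repository: https://vmware-tanzu.github.io/helm-charts
    condition: velero.enabled

Condition flags like these let users switch individual sub-charts off, but Helm still cannot detect or reuse an installation of the same chart that already exists elsewhere in the cluster.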

Ideal tooling would allow chart developers to specify external dependencies and simply ensure that those are present in a cluster before a chart can be installed.
This way, dependencies could be shared among consumers.
This is how package managers for operating systems have worked forever. Why does Kubernetes need to be different?

3. Helm chart creation is not user-friendly

So far, we discussed problems that affect you as a chart user.
But what does the situation look like for chart developers?

Well, let's start by creating a new chart.
This can be achieved by calling helm create your-chart.
I invite you to quickly open a terminal, run this command and go through all the files it creates.
As I'm sure you will agree, it's… a lot.
I still remember the moment when I wanted to create my first Helm chart and saw the results of this command thinking, “this can't be right.”

.:
total 8,0K
drwxr-xr-x. 2 kosmoz kosmoz   40  7. Dez 13:23 charts/
-rw-r--r--. 1 kosmoz kosmoz 1,2K  7. Dez 13:23 Chart.yaml
drwxr-xr-x. 3 kosmoz kosmoz  200  7. Dez 13:23 templates/
-rw-r--r--. 1 kosmoz kosmoz 1,9K  7. Dez 13:23 values.yaml

./charts:
total 0

./templates:
total 28K
-rw-r--r--. 1 kosmoz kosmoz 1,9K  7. Dez 13:23 deployment.yaml
-rw-r--r--. 1 kosmoz kosmoz 1,8K  7. Dez 13:23 _helpers.tpl
-rw-r--r--. 1 kosmoz kosmoz  925  7. Dez 13:23 hpa.yaml
-rw-r--r--. 1 kosmoz kosmoz 2,1K  7. Dez 13:23 ingress.yaml
-rw-r--r--. 1 kosmoz kosmoz 1,8K  7. Dez 13:23 NOTES.txt
-rw-r--r--. 1 kosmoz kosmoz  326  7. Dez 13:23 serviceaccount.yaml
-rw-r--r--. 1 kosmoz kosmoz  370  7. Dez 13:23 service.yaml
drwxr-xr-x. 2 kosmoz kosmoz   60  7. Dez 13:23 tests/

./templates/tests:
total 4,0K
-rw-r--r--. 1 kosmoz kosmoz 388  7. Dez 13:23 test-connection.yaml

In total, helm create generates 10 files in different sub-directories, and it is not immediately apparent which of them are actually essential for a chart and which are just example code.
I once complained about this to a DevOps engineer who had already created dozens of charts and they laughed and said:

“The first step in creating a chart is running helm create. The second is deleting 90% of the results.”

Really? That's the best we can do?
Okay, let's accept that and say you figured out the structure of your new chart.
Now, you probably want to add some resources.
Of course you can just drop your existing YAML files into the chart's templates directory, but you're probably interested in using some parameters from your values.yaml in your resources.
After all, that would kind of be the point of creating a Helm chart in the first place.
To look at an example, go back to your terminal (where, previously, you created your Helm chart) and check out the file templates/serviceaccount.yaml:

{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "your-chart.serviceAccountName" . }}
  labels:
    {{- include "your-chart.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}

Now, I know what you're thinking:

That doesn't look like the YAML I know!
What are include, toYaml and nindent, and what's up with all the -, {{ and |?

It's true: although Helm template files use the .yaml file name extension, they are actually just templates.
Helm templates are based on the Go template language, which is very flexible and powerful but doesn't really know anything about YAML or Kubernetes.
That's why it's necessary to call lots of conversion functions inside the template files: include renders a named helper template, toYaml serializes a value to YAML, and nindent re-indents the result so it lands at the right nesting level.

As a result, many popular charts end up with template files that contain more templating constructs than actual YAML.
This makes them hard to read and maintain, especially for someone who wasn't involved in their creation.
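
For context, the template above reads from a section of the generated values.yaml that looks roughly like this (abbreviated, and with an example annotation filled in so you can see what toYaml and nindent operate on):

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app  # example value
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

You have to mentally evaluate include, toYaml and nindent together before you know what the final ServiceAccount manifest will actually look like.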

4. The values.yaml file is an antipattern

Now, let's go back to something that's a little more tangible for you as a Helm user.
As a Kubernetes application developer who writes resources as YAML files, you are probably used to having rich support in your development environment, including strict schema validation and super comprehensive autocomplete.
Creating a values.yaml file for a chart release is a little different.
See, there is no general schema for what goes and doesn't go inside a values.yaml file.
Thus, your development environment cannot help you beyond basic YAML syntax highlighting.
The only way to verify whether your values.yaml file is valid is to run it through Helm and see what happens.
Running helm template renders the chart locally with your values, which at least surfaces errors in the configuration file.

A lot of chart developers want to give users the ability to fine-tune most aspects of the final deployment.
As a result, the configuration surface is often unreasonably large and complicated, mimicking the actual resources the chart creates, but without any schema validation!
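
As a small taste, here is an ingress section in the style of helm create's default values.yaml; it essentially mirrors the Ingress resource it generates, field by field, but with no schema validation behind it:

ingress:
  enabled: false
  className: ""
  annotations: {}
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []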

5. Inability to interact with the Kubernetes API

We already discussed 4 shortcomings of Helm, but in my opinion the biggest downside of Helm is this:
A Helm release is strictly a one-shot operation.
Once a Helm release is successfully installed, Helm's job is done.
But here's the thing: Installing an application is usually not the hard part, maintaining an installation and keeping it running is.
Unfortunately, Helm doesn't really help you with that.

After a release has been installed, Helm cannot perform any additional changes, because it is designed as a strictly client-side application.
This inability to interact with a release during the later stages of its life-cycle means that Helm as a deployment method is inherently static, while modern software deployments often need to be very dynamic.

In contrast, an operator that runs inside the cluster can talk to the Kubernetes API at any point in a release's life-cycle. The following Kotlin snippets from our operator show what that enables.

Detecting the cloud environment:

private val dynamicCloudProvider
    get() = when {
        kubernetesClient.configMaps().inNamespace("kube-system").withName("shoot-info")
            .get()?.data?.get("extensions")?.contains("shoot-dns-service") == true ->
            CloudProvider.gardener

        kubernetesClient.nodes().withLabel("eks.amazonaws.com/nodegroup").list().items.isNotEmpty() ->
            CloudProvider.aws

        kubernetesClient.nodes().withLabel("csi.hetzner.cloud/location").list().items.isNotEmpty() ->
            CloudProvider.hcloud

        else ->
            CloudProvider.generic
    }

Applying configuration defaults based on the detected environment:

protected val defaultIngressClassName: String?
    get() = when (configService.cloudProvider) {
        CloudProvider.aws -> "alb"
        else -> configService.ingressClassName
    }

protected fun getDefaultAnnotations(primary: P, context: Context<P>): Map<String, String> =
    configService.getCommonIngressAnnotations(primary) +
        when (configService.cloudProvider) {
            CloudProvider.aws -> awsDefaultAnnotations
            CloudProvider.gardener -> gardenerDefaultAnnotations
            else -> getCertManagerDefaultAnnotations(context) + ingressNginxDefaultAnnotations
        }

Conclusion

Although many developers are a bit scared of Helm at first, its simple design gave Helm the lead it currently holds in the space of Kubernetes deployment methods.
Helm is currently the de-facto standard for managing complex application deployments, but that doesn't mean we shouldn't challenge its design and point out shortcomings.
New requirements for applications will demand more dynamic deployment methods, and we DevOps engineers and application developers must be prepared.

This is why we started Glasskube: an easier way to deploy applications and infrastructure on Kubernetes. If you want to follow our progress, make sure to star glasskube/glasskube


Glasskube Github

Top comments (9)

Matija Sosic

Expertly written, a perfect mix of useful knowledge and memes :)

Deb

Great work. What are the top 3 reasons preventing developers from moving from Helm over to Glasskube?

Nevo David

Great article!
Thank you for sharing!

Marine

Nice to see people's experience with a product and how it can sometimes inspire them to do better!

JackTT

But Helm is the preferred method when installing the Glasskube operator =))

Christopher "Chief" Najewicz

The Glasskube operator is simply deployed via Helm.

Eligio Mariño

Awesome! Thanks for the article. Can you do a similar comparison of Glasskube to Timoni timoni.sh/ ?

Biplob Sutradhar

Gifs and cover awesome 👍

erikauer

Thanks for sharing your experience with and opinion of Helm. Great article!