Michael Levan

The 4 C’s Of Kubernetes Security

Much like any other system/platform, Kubernetes has a lot of components to secure. Everything from where it’s running to how it’s running to the containers running inside of it. Without proper security in a Kubernetes environment, there are a lot of holes that potential attackers and bad actors can get through from the infrastructure layer all the way down to the code layer.

That’s where the 4 C’s come into play.

The 4 C’s act almost like a paradigm for success: the majority of security practices in the Kubernetes realm fall under one of the four categories.

In this blog post, you’ll learn about the 4 C’s: Cloud, Clusters, Containers, and Code.

Cloud

The 4 C’s follow a top-down approach, and the “top” of today’s world is primarily the cloud.

💡 Remember that not all cloud-native workloads run in the cloud, so you’ll want to think about your underlying infrastructure wherever it lives. Luckily, the way you secure the cloud has a lot of similarities to securing on-prem, so this section will still help if you’re on-prem.

Securing the cloud is something that many engineers, CISOs, and researchers still talk about even though the cloud has been around for a long time. The reason is that there are still a tremendous number of security holes in cloud environments.

The majority of default settings within the cloud are actually huge security risks. For example, in a lot of clouds, Managed Kubernetes Service offerings have a public IP address by default. In the world of security and production, this is a big no-no. If your cluster is public, anyone can reach it from anywhere (with the right auth, of course, but the front door is still exposed). This is why the whole “shift left” idea, as buzzy as it is, made a lot of sense to implement. It’s as easy as clicking a few buttons or writing a little code to end up with a cluster sitting on the public-facing internet.
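
As one hedged example, if you’re running on AWS EKS, eksctl lets you declare the API endpoint access in its ClusterConfig so the cluster never ends up with a public endpoint in the first place. This is a minimal sketch - the cluster name and region are placeholders:

```yaml
# Minimal eksctl ClusterConfig sketch: keep the API server endpoint
# off the public internet. Name and region are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: prod-cluster   # placeholder name
  region: us-east-1    # placeholder region

vpc:
  clusterEndpoints:
    publicAccess: false   # no public API endpoint
    privateAccess: true   # reachable only from inside the VPC/VPN
```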

Think about securing the cloud in a few different ways:

  • Least privilege: Ensure that only the people who need access have access, and most importantly, ensure they only have the access they need.
  • Automation: Whatever methods you’re using to deploy workloads, ensure that they’re vetted. The last thing you want is to run some Terraform code in production that exposes workloads.
  • Misconfiguration: If you read security research, you’ll know that the majority of security issues (80-90%) come from misconfigurations. It’s incredibly simple to misconfigure cloud environments - truthfully, cloud providers make it easy to do. A misconfiguration could be anything from the wrong line of code to a port accidentally left open.
  • Networking: Ports, firewall rules, routes, encryption for the network - these are all things that are incredibly important in the cloud. Do not minimize the need for proper network security practices.
  • Cloud Best Practices: All of the major cloud providers that the majority of organizations run their workloads on have a security reference architecture. Ensure that you read it and take their recommendations into serious consideration.

Clusters

When thinking about Kubernetes, you have two levels - the cluster and the “internal network”. The cluster itself will be running on a network, have its own IPs, etc., and so will the Pods. The Pods also have their own network (provided by the CNI) that needs to be taken into consideration. Point being, you have two layers to protect, and more often than not, only one of those layers is truly protected.

Clusters are made up of Control Planes and Worker Nodes. Control Planes are where all of the major Kubernetes components that make it work live. Worker Nodes are where workloads like Pods live. If you’re in the cloud, the Control Plane is abstracted away from you, but you still need to think about it. For example, even though you don’t manage the API server, you still have to upgrade Kubernetes versions. Even though you don’t manage Etcd, you should still back it up and encrypt it.

Think about securing clusters in a few different ways:

  • RBAC: Much like the cloud, you want to ensure that only the people who need access to clusters have access. You’ll see engineers that only need access to certain environments, so they shouldn’t have access to all environments (see the RBAC sketch after this list).
  • Host Networking: As we’ve discussed, don’t do things like give your cluster a public IP address. Ensure that it’s behind a VPN of sorts and that you can route to it. Another big thing here is firewall rules. If you deploy VMs with Kubeadm, you’ll need to manually open certain ports for the Control Plane (for example, 6443 for the API server and 2379-2380 for Etcd) and for the Worker Nodes. You want to make sure you don’t just open all ports.
  • Cluster Scanning: There are several tools out there, like kube-bench and Kubescape, that allow you to scan your cluster. They scan against popular security baselines like the CIS Benchmarks and vulnerability databases like NVD to ensure that your cluster is running with best practices (kube-bench’s in-cluster Job approach is sketched after this list).
  • Isolating Cluster Components: If you’re running on-prem, you should think about isolation for the Control Plane packages. You can put Etcd on its own server which would allow you to isolate the Kubernetes database, encrypt it at rest, and secure those VMs a bit differently. The isolation gives you more options than putting all Kubernetes packages under one roof.
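
As a sketch of the RBAC point above, a namespaced Role plus RoleBinding limits an engineer to a single environment. The user and namespace names here are hypothetical:

```yaml
# Hypothetical Role/RoleBinding: "jane" can only read Pods in the
# "dev" namespace and has no access to other environments.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: pod-reader-binding
subjects:
- kind: User
  name: jane   # hypothetical engineer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```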
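
For the cluster scanning point, kube-bench can run inside the cluster as a one-off Job. This is a trimmed sketch of that pattern - the project’s full job manifest also mounts host paths like /etc/kubernetes so the CIS checks can read the node’s configuration files:

```yaml
# Trimmed kube-bench Job sketch; see the project's job.yaml for the
# complete set of host path mounts the CIS checks rely on.
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true   # lets kube-bench inspect node processes
      containers:
      - name: kube-bench
        image: docker.io/aquasec/kube-bench:latest
        command: ["kube-bench"]
      restartPolicy: Never
```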

Containers

Pods are technically the smallest deployable unit in Kubernetes, but Pods contain containers, and containers are what contain your code. Whether it’s a frontend app, a backend app, middleware, a job, a script, or anything else you wrote in code, you’re going to end up containerizing it so it runs on Kubernetes.

💡 We can now run VMs on Kubernetes (with projects like KubeVirt), so technically your code could be running there, but chances are you’ll be using containers unless you have a particular reason to run VMs.

Pods can contain one container, which typically runs your application code, or they can contain additional containers, known as sidecar containers. A typical sidecar container is something like a Service Mesh proxy or a log aggregator - usually some type of third-party enhancement to make your life a bit easier or to implement a necessary workload. Because Pods can contain multiple containers and the containers contain code, they’re a huge target for bad actors. One wrong line of code or one wrong open port and an attacker can use a Pod to compromise the entire environment.
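
For context, here’s a minimal sketch of a two-container Pod - the application plus a log-shipping sidecar. The image names are placeholders:

```yaml
# Hypothetical two-container Pod: the app plus a log-shipping sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:1.0   # your application code
  - name: log-shipper   # sidecar container
    image: registry.example.com/log-shipper:1.0
    # Typically tails the app's logs from a shared volume and forwards them.
```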

As an example, Pods are deployed with Kubernetes Manifests. A Manifest either specifies a Service Account for the Pod to run as, or the namespace’s default Service Account gets used. If you don’t specify a Service Account, that means the default is used - and if the default gets compromised, every Pod that used it is compromised.
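
A hedged sketch of avoiding that problem: give each workload its own dedicated Service Account instead of the default, and turn off token automounting if the app never calls the Kubernetes API. All names here are hypothetical:

```yaml
# Dedicated Service Account per workload instead of the default.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: dev
automountServiceAccountToken: false   # no API token mounted that the app never uses
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: dev
spec:
  serviceAccountName: my-app-sa   # explicit, scoped identity
  containers:
  - name: app
    image: registry.example.com/my-app:1.0   # placeholder image
```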

Think about securing containers in a few different ways:

  • Base Image: All container images start with a base image. You always want to know what your base image is, scan it, and verify its security. For example, run a security scan against any popular, well-maintained base image - it’s almost guaranteed that you’ll find some type of vulnerability. Prefer smaller base images like Scratch or Alpine to shrink the attack surface.
  • Scanning: Scan, scan, and scan some more. There are so many tools out there right now that are both paid and open-source that you can use to scan clusters and Pods for vulnerabilities. It’s as easy as running a command on a terminal.
  • Pod Security Features: Pods have the ability to secure a lot of pieces of the overall deployment. For example, Security Contexts allow you to control which user the Pod runs as, what permissions it has, access control, syscall filtering, and a lot more. Aside from that, you also have Network Policies, which allow you to block ingress and egress traffic for Pods, and you can scan Pods with various tools. Both features are sketched below.
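
Here’s a sketch of both features. The securityContext values are common hardening defaults rather than the only correct settings, and the NetworkPolicy is a default-deny starting point:

```yaml
# Hardened Pod sketch: non-root user, no privilege escalation,
# read-only root filesystem, default seccomp syscall filtering.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault   # filter syscalls with the runtime's default profile
  containers:
  - name: app
    image: registry.example.com/my-app:1.0   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
---
# Default-deny NetworkPolicy: blocks all ingress and egress for every
# Pod in the namespace until traffic is explicitly allowed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
```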

Code

It all starts at the code, ironically enough.

You can scan a cluster, ensure proper firewall rules, use a security-centric CNI, scan the Pods, and lock down container access as much as possible with SecurityContexts, but if the underlying code has security bugs, those bugs will still be the biggest attack vector. Ensuring that the code inside the container image - the image that eventually runs as a container in production - is secure is crucial. It’s very much a bottom-up approach.

The problem with the code part of this whole thing is that if you’re in DevOps, Platform Engineering, or Cloud Engineering, chances are you’re not writing the application code. This is why, again, the whole “shift left” idea actually made sense before it became a huge, buzzy marketing term that’s no fun to hear anymore. If engineers could work with the developers from the start, before the code is even packaged, it would help mitigate these security issues from the beginning.

Think about securing code in a few different ways:

  • Teamwork: Work with the developers who are writing the code. Let them know that you want to help out as much as possible with the security. It sounds like a lot of work, but it’s going to save you time later on.
  • Scanning: Much like cluster and Pod scanning tools, there are a lot of ways to scan code. Whether it’s standard libraries, tools like SonarQube, or other open-source solutions, there are plenty of options.
  • Linting: Security linters are great not only for enforcing best practices, but for stopping you from introducing security bugs. For example, Go (Golang) has the gosec package, which is a security linter, and it’s quite effective.
  • Automated QA: All of this scanning is great, but it’s a cumbersome task if you’re going to do it manually. Your best bet is to put the scanners in the CI portion of your pipeline, before the container image is built. That way, any bugs can be found prior to containerization (see the workflow sketch below).
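
As one hedged sketch, assuming a Go service and GitHub Actions (any CI system follows the same shape), a scan job can gate the image build:

```yaml
# Hypothetical CI workflow: run the gosec security linter before
# the container image is ever built.
name: scan-then-build
on: [push]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.22"
      - name: Run gosec
        run: |
          go install github.com/securego/gosec/v2/cmd/gosec@latest
          gosec ./...   # non-zero exit on findings fails the job

  build-image:
    needs: security-scan   # the image only builds if the scan passes
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build container image
        run: docker build -t my-app:ci .   # placeholder tag
```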
