Multi-Node Kubernetes Cluster Setup with KIND

Setting up a multi-node Kubernetes cluster is crucial for testing and simulating production-grade environments. Kubernetes in Docker (KIND) provides a lightweight and straightforward way to deploy multi-node clusters on your local machine using Docker containers as cluster nodes. This guide walks you through the process of creating a multi-node Kubernetes cluster using KIND with hands-on examples.


What is KIND?

KIND (Kubernetes IN Docker) is a tool that runs Kubernetes clusters inside Docker containers. It is primarily used for:

  • Testing Kubernetes clusters locally.
  • Simulating multi-node setups.
  • Building and testing Kubernetes controllers or applications.

Why Use KIND?

  • Lightweight and easy to set up.
  • No need for virtual machines.
  • Perfect for local development and testing.
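
To see how little setup is involved, a throwaway single-node cluster is one command (and one more to remove it); the rest of this guide extends the same idea to multiple nodes:

kind create cluster
kind delete cluster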

Prerequisites

  1. Docker installed on your machine.
  2. KIND installed. Install it using Go or download a pre-built binary:
   curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
   chmod +x ./kind
   sudo mv ./kind /usr/local/bin/kind
  3. kubectl installed for interacting with the Kubernetes cluster:
   curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
   chmod +x kubectl
   sudo mv kubectl /usr/local/bin/
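
Before moving on, you can quickly confirm that all three tools are available on your PATH:

docker --version
kind version
kubectl version --client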

Step 1: Define the Multi-Node KIND Cluster Configuration

Create a configuration file for your multi-node cluster. For example, save the following YAML as kind-config.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
networking:
  apiServerAddress: "127.0.0.1"
  apiServerPort: 6443

Explanation:

  • Control Plane: Manages the cluster (scheduler, API server, etc.).
  • Workers: Nodes that run your application workloads.
  • Networking: Configures the API server endpoint for local access.
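
If you later want to reach a NodePort service from your host browser, KIND can also map ports from your machine into a node container via extraPortMappings. Here is an optional sketch of the same configuration with one mapping added; port 30001 is only an illustrative NodePort and must match whatever port your service actually uses:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # Forward host port 30001 into the node container on the same port
    extraPortMappings:
      - containerPort: 30001
        hostPort: 30001
        protocol: TCP
  - role: worker
  - role: worker
networking:
  apiServerAddress: "127.0.0.1"
  apiServerPort: 6443

Because a NodePort is opened on every node, mapping it on the control-plane container alone is enough.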

Step 2: Create the KIND Cluster

Run the following command to create your multi-node cluster:

kind create cluster --config kind-config.yaml --name multi-node-cluster

Expected Output:

Creating cluster "multi-node-cluster" ...
 ✓ Ensuring node image (kindest/node:v1.28.0) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-multi-node-cluster"
You can now use your cluster with:

kubectl cluster-info --context kind-multi-node-cluster
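
You can also confirm that KIND registered the cluster and switched your kubectl context:

kind get clusters
kubectl config current-context

The first command should list multi-node-cluster and the second should print kind-multi-node-cluster.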

Step 3: Verify the Cluster

Check the nodes in the cluster:

kubectl get nodes

Expected Output:

NAME                               STATUS   ROLES           AGE     VERSION
multi-node-cluster-control-plane   Ready    control-plane   2m25s   v1.28.0
multi-node-cluster-worker          Ready    <none>          2m10s   v1.28.0
multi-node-cluster-worker2         Ready    <none>          2m10s   v1.28.0
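
Because every node is just a Docker container, you can also see the same three nodes with Docker itself:

docker ps --filter "name=multi-node-cluster"

This should list three kindest/node containers, one per cluster node.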

Step 4: Deploy a Sample Application

Create a sample deployment and service to verify the cluster setup. Save the following YAML as nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Apply the configuration:

kubectl apply -f nginx-deployment.yaml
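
Wait for the rollout to finish, then check which nodes the replicas landed on; the default scheduler should spread them across the two workers:

kubectl rollout status deployment/nginx-deployment
kubectl get pods -o wide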

Step 5: Access the Application

List services to get the service's endpoint:

kubectl get services

Expected Output:

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP      10.96.0.1       <none>        443/TCP        5m
nginx-service    LoadBalancer   10.96.42.123    <pending>     80:30001/TCP   2m

The service is reachable inside the cluster, but KIND does not ship an external load balancer, so the EXTERNAL-IP of a LoadBalancer service stays <pending> unless you install one (for example MetalLB or cloud-provider-kind). The simplest way to reach the application from your machine is to port-forward the service, as shown below.
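
A minimal example, forwarding the service to an arbitrary local port (8080 here):

kubectl port-forward service/nginx-service 8080:80

With the port-forward running, open http://localhost:8080 in your browser. Alternatively, if you added an extraPortMappings entry for the assigned NodePort when creating the cluster (see the optional sketch in Step 1), the application is also reachable on that host port, e.g. http://localhost:30001.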


Step 6: Simulate Node-Specific Workloads

You can pin workloads to specific nodes using node selectors. For example, to schedule every nginx pod on the first worker node, add a nodeSelector under the pod template in nginx-deployment.yaml:

spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: multi-node-cluster-worker

Apply the changes:

kubectl apply -f nginx-deployment.yaml
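
To confirm the pods were rescheduled onto the selected node, check the NODE column:

kubectl get pods -l app=nginx -o wide

Every pod should now report multi-node-cluster-worker as its node.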

Step 7: Clean Up

When you're done, delete the cluster:

kind delete cluster --name multi-node-cluster
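
To verify the cleanup, list the remaining KIND clusters; multi-node-cluster should no longer appear:

kind get clusters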

Best Practices for KIND Clusters

  1. Resource Limits: Ensure Docker has enough resources allocated (CPU and memory).
  2. Ingress Setup: KIND has no built-in ingress addon; map host ports 80/443 into a node with extraPortMappings and install an ingress controller such as ingress-nginx to test routing rules (see the sketch after this list).
  3. Cluster Customization: Leverage KINDโ€™s configuration options for advanced networking and storage setups.
  4. Continuous Testing: Integrate KIND clusters into CI/CD pipelines for testing.
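
As a starting point for the ingress suggestion above, here is one common pattern, adapted from the KIND ingress documentation: the control-plane node is labelled ingress-ready=true and host ports 80/443 are mapped into it, after which you would install an ingress controller such as ingress-nginx using its KIND-specific manifest. Treat the label and port choices as assumptions to adjust for your controller of choice.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # Label the node so an ingress controller's nodeSelector can target it
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    # Expose HTTP/HTTPS on the host so ingress rules are reachable at localhost
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
  - role: worker
  - role: worker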

Conclusion

With KIND, setting up a multi-node Kubernetes cluster is simple and effective for local testing. Itโ€™s a lightweight solution that enables developers and DevOps engineers to test workloads, configurations, and networking in a simulated multi-node environment. Follow this hands-on guide to deploy your own clusters and enhance your Kubernetes skills!
