Akarshan Gandotra
Deploying RabbitMQ in Kubernetes with Custom Docker Image

Hello Devs,

As you all know, RabbitMQ is one of the most commonly used message brokers. If you are looking to leverage RabbitMQ in Kubernetes, this is the right place. Running RabbitMQ in a containerized environment makes your messaging infrastructure more flexible, scalable, and manageable.

In this blog, I will show how to deploy RabbitMQ in Kubernetes with a custom Docker image.

Prerequisites

Make sure you have the following before beginning:

  • A Kubernetes cluster
  • The kubectl command-line tool, configured to access the cluster.

1. RabbitMQ Dockerfile

Let's begin by developing a Dockerfile to build a custom RabbitMQ image. Running the containers as non-root gives you an extra layer of security. Below is an example of a Dockerfile that adds a non-root user and sets the ownership of the RabbitMQ data directory:

# Specify the base image
FROM rabbitmq:3.11.15-alpine

# Add a non-root user named foouser and set ownership of the RabbitMQ data directory
RUN addgroup -S foouser && adduser -S foouser -G foouser
RUN mkdir -p /var/lib/rabbitmq && \
    chown -R foouser:foouser /var/lib/rabbitmq

VOLUME ["/var/lib/rabbitmq"]

# enable RabbitMQ management plugin
RUN rabbitmq-plugins enable --offline rabbitmq_management

# set timezone to UTC
RUN apk add --no-cache tzdata && \
    cp /usr/share/zoneinfo/UTC /etc/localtime && \
    echo "UTC" > /etc/timezone && \
    apk del tzdata

# remove 'apk-tools'
RUN apk --purge del apk-tools

# expose ports for RabbitMQ and RabbitMQ management
EXPOSE 5672 15672

# Switch to the foouser user
USER foouser

ENV RABBITMQ_CONFIG_FILE=/etc/rabbitmq/rabbitmq.conf

# Set the entrypoint to rabbitmq-server
CMD ["rabbitmq-server"]

Save the above content as Dockerfile in a directory.
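With the Dockerfile saved, you can build and push the image. The registry and image names below are placeholders; substitute your own:

```shell
# Build the custom RabbitMQ image (run from the directory containing the Dockerfile)
docker build -t myregistry.example.com/rabbitmq-custom:3.11.15 .

# Push it to your registry so the Kubernetes cluster can pull it
docker push myregistry.example.com/rabbitmq-custom:3.11.15
```

Tagging the image with the base RabbitMQ version (3.11.15 here) makes it easy to track which broker version a deployment is running.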

2. Kubernetes Configuration

The RabbitMQ deployment is managed as a stateful set, ensuring stable and ordered deployment of RabbitMQ nodes. Persistent Volume Claims (PVCs) are used to provide durable storage for RabbitMQ data, and a headless service is created to enable direct access to each RabbitMQ node.

2.1 Setting RabbitMQ Configuration in a ConfigMap

The ConfigMap stores the configuration data for the RabbitMQ deployment. It includes the following keys:

enabled_plugins: A list of enabled RabbitMQ plugins, including rabbitmq_peer_discovery_k8s and rabbitmq_prometheus.

rabbitmq.conf: The RabbitMQ configuration file that specifies various settings, including the Kubernetes cluster details, queue master locator, and load definitions.

definitions.json: The definitions file that defines users, permissions, vhosts, and queues.

apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-configmap
  # Add labels and annotations as per your requirements
data:
  definitions.json: |-
    # RabbitMQ definitions go here
  enabled_plugins: '[rabbitmq_peer_discovery_k8s, rabbitmq_prometheus].'
  rabbitmq.conf: |-
    # RabbitMQ configuration options go here
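As a sketch of what the rabbitmq.conf placeholder might contain, here is a minimal configuration using the Kubernetes peer discovery plugin. The values are illustrative; adjust them to your cluster:

```ini
# Use the Kubernetes API for peer discovery
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
# Address nodes by hostname (required when running as a StatefulSet)
cluster_formation.k8s.address_type = hostname
# Place queue masters on the node with the fewest masters
queue_master_locator = min-masters
# Load users, vhosts, and queues from the mounted definitions file
load_definitions = /etc/rabbitmq/definitions.json
```

The `load_definitions` path points at the definitions.json key from this same ConfigMap, which is mounted into /etc/rabbitmq by the StatefulSet below.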

2.2 Deploying RabbitMQ as StatefulSet

The rabbitmq StatefulSet is responsible for deploying and managing the RabbitMQ pods. It includes the following components:

  1. containers: The main container named rabbitmq runs the RabbitMQ server. It includes the specified ports, liveness and readiness probes, volume mounts for configuration and data, and resource limits and requests.

  2. volumeClaimTemplates: The template for creating the PersistentVolumeClaims (PVCs) used by the RabbitMQ pods. Each pod will have its own PVC named rabbitmq-data with a requested storage size of 10Gi and access mode set to ReadWriteMany. We need a PVC to persist data so that when a pod restarts, the same data is available to it.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
  # Add labels and annotations as per your requirements
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: <your-docker-image>  # Replace with your custom Docker image
          ports:
            - name: amqp
              containerPort: 5672
              protocol: TCP
            - name: http
              containerPort: 15672
              protocol: TCP
            - name: prometheus
              containerPort: 15692
              protocol: TCP
          livenessProbe:
            exec:
              command:
                - rabbitmq-diagnostics
                - status
            initialDelaySeconds: 60
            periodSeconds: 60
            timeoutSeconds: 15
          readinessProbe:
            exec:
              command:
                - rabbitmq-diagnostics
                - ping
            initialDelaySeconds: 20
            periodSeconds: 60
            timeoutSeconds: 10
          volumeMounts:
            - name: rabbitmq-config
              mountPath: /etc/rabbitmq
            - name: rabbitmq-data
              mountPath: /var/lib/rabbitmq/mnesia
      volumes:
        - name: rabbitmq-config
          configMap:
            name: rabbitmq-configmap 
  # Specify the name of the PVC to be used for RabbitMQ data
  volumeClaimTemplates:
    - metadata:
        name: rabbitmq-data
      spec:
        accessModes:
          - ReadWriteMany  # Adjust based on your requirements
        resources:
          requests:
            storage: 10Gi  # Adjust the storage size based on your needs

RabbitMQ is deployed as a StatefulSet in Kubernetes to ensure stable and predictable network identities and persistent storage for each instance of the RabbitMQ broker. As a stateful application, RabbitMQ relies on stable network identities and requires durable storage for its message queues and metadata. By using a StatefulSet, Kubernetes assigns a unique and stable hostname to each RabbitMQ pod, allowing for reliable communication and clustering between the instances. Additionally, StatefulSets provide built-in support for persistent volumes, ensuring that RabbitMQ data is preserved even during pod restarts or scaling operations.
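Once the StatefulSet is applied, you can see these stable identities directly. The pod and PVC names below are what a StatefulSet named rabbitmq with a rabbitmq-data claim template produces:

```shell
# Pods get ordinal, stable names: rabbitmq-0, rabbitmq-1, ...
kubectl get pods -l app=rabbitmq

# Each pod gets a matching PVC, e.g. rabbitmq-data-rabbitmq-0,
# which is re-bound to the same pod identity after a restart
kubectl get pvc
```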

2.3 Headless Service

The rabbitmq-headless Service provides network connectivity to the RabbitMQ pods. It is headless: clusterIP is set to None, so no cluster IP is allocated and DNS resolves directly to the individual pod IPs.

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-headless
  # Add labels and annotations as per your requirements
spec:
  clusterIP: None
  ports:
    - name: amqp
      port: 5672
      protocol: TCP
      targetPort: 5672
    - name: http
      port: 15672
      protocol: TCP
      targetPort: 15672
    - name: prometheus
      port: 15692
      protocol: TCP
      targetPort: 15692
  selector:
    app: rabbitmq  # Must match the labels on the RabbitMQ pods

Deploying RabbitMQ behind a headless service provides a scalable and dynamic way to enable pod-to-pod communication, ensuring efficient and reliable messaging. It allows RabbitMQ pods to discover and communicate with each other directly, without relying on an ingress or load balancer.
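With the headless service in place, each pod is resolvable at a predictable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local. For example, assuming everything runs in the default namespace:

```shell
# Resolve an individual RabbitMQ node through the headless service
nslookup rabbitmq-0.rabbitmq-headless.default.svc.cluster.local

# This per-pod DNS name is also what RabbitMQ nodes use when clustering,
# e.g. the node name rabbit@rabbitmq-0.rabbitmq-headless.default.svc.cluster.local
```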

Save the above YAML manifests as separate files with the respective names (configmap.yaml, statefulset.yaml, service.yaml).

3. Deploy RabbitMQ in Kubernetes

Now that we have the Dockerfile and Kubernetes configuration ready, let's deploy RabbitMQ in Kubernetes using the following commands:

kubectl apply -f configmap.yaml
kubectl apply -f service.yaml
kubectl apply -f statefulset.yaml

This will create the necessary resources and deploy RabbitMQ in your Kubernetes cluster. You can check the status and logs of the RabbitMQ pods using the kubectl command.
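For example, using the resource names from the manifests above:

```shell
# Watch the pods come up
kubectl get pods -l app=rabbitmq -w

# Tail the broker logs of the first pod
kubectl logs rabbitmq-0 -f

# Run a health check inside the pod
kubectl exec rabbitmq-0 -- rabbitmq-diagnostics status
```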

Happy messaging with RabbitMQ in Kubernetes! 🎉
