In Kubernetes, and especially in managed environments like AWS EKS, managing external access to services can be a complex task. This is where Ingress Controllers come into play. In this blog, let's delve into Ingress Controllers, a critical component that acts as the entry point for external traffic into your Kubernetes cluster.
Understanding Ingress Controllers
An Ingress Controller is a specialized Kubernetes pod that interprets and enforces the rules defined in the Ingress Resource. It acts as a traffic manager, handling incoming requests and directing them to the appropriate services.
Why Use an Ingress Controller?
Ingress Controllers simplify the process of managing external access to services in a Kubernetes environment. They provide a single entry point for multiple services, support SSL termination, and enable virtual host and path-based routing.
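To make that concrete, here is a minimal Ingress resource sketch (the host and service names are illustrative placeholders) showing a single entry point with path-based routing to two services:

```yaml
# Minimal illustration: one entry point, two backends, path-based routing.
# Host and service names are placeholders for this sketch.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```

Requests to demo.example.com/api and demo.example.com/web arrive at the same load balancer but reach different services.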
Both NGINX Ingress Controller and AWS ALB Ingress Controller serve as valuable tools for managing external access to services within an AWS EKS cluster. The choice between them depends on specific use cases, requirements, and preferences. Here are some scenarios where choosing NGINX Ingress Controller might be advantageous over ALB Controller:
- Advanced Routing Capabilities: NGINX provides powerful features for routing and load balancing, including URL rewrites, header-based routing, and session affinity. If your application requires complex traffic management, NGINX offers more granular control.
- Rich Set of Annotations: NGINX Ingress Controller offers a wide range of annotations for fine-grained control over how traffic is routed and managed. This gives you flexibility in configuring your application.
- Community and Ecosystem: NGINX has a large and active open-source community, which means a wealth of resources, tutorials, and community-contributed enhancements are available.
- Support for Non-HTTP Protocols: NGINX can also proxy raw TCP and UDP traffic, not just HTTP/HTTPS.
- SSL/TLS Termination: TLS can be terminated at the ingress layer, with plain HTTP forwarded to the backends.
- Variety of Load Balancing Algorithms: round robin, least connections, IP hash, and more.
- Non-AWS Environments: NGINX is platform-agnostic, so the same setup works on-premises or in other clouds, whereas the ALB controller is tied to AWS.
- Path-Based Routing: If you require path-based routing (e.g., routing requests to different services based on the URL path), NGINX supports this natively.
NGINX Ingress Controller Architecture
The NGINX Ingress Controller is a specialized piece of software designed to handle incoming HTTP and HTTPS traffic in a Kubernetes environment. It extends Kubernetes' functionality by adding custom resources and controllers to manage external access to services.
Components of NGINX Ingress Controller:
- Ingress Resource: In Kubernetes, an Ingress is a native resource that defines rules for external access to services. It specifies how HTTP or HTTPS traffic should be directed to the services in the cluster.
- Controller Pod: The NGINX Ingress Controller runs as a Pod within the Kubernetes cluster. This controller watches for changes to Ingress resources and dynamically updates the NGINX configuration accordingly.
- NGINX Server: The NGINX server is the core component of the controller. It is a high-performance, open-source web server known for its efficiency in handling concurrent connections and processing HTTP requests.
- Configuration File: The NGINX server is configured based on information gathered from the Kubernetes API server. This information includes details about Ingress resources, services, and endpoints.
- Custom Resource Definitions (CRDs): some distributions extend the Kubernetes API with custom resources for handling HTTP and HTTPS traffic, such as VirtualServer and VirtualServerRoute in F5 NGINX's own controller. Note that the community ingress-nginx controller used later in this post works with the standard Ingress resource plus annotations.
- Upstream Servers: NGINX acts as a reverse proxy, directing incoming requests to the appropriate upstream servers (pods) based on the rules defined in the Ingress resource.
- Health Checks: NGINX detects failing upstream pods and routes around them, retrying requests against healthy endpoints.
- Service Discovery: the controller watches Kubernetes Endpoints, so NGINX's upstream list automatically tracks pods as they scale up, scale down, or get rescheduled.
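Conceptually, the controller renders the Ingress rules into ordinary NGINX configuration. The fragment below is a simplified sketch of the idea (the real generated config is far more elaborate, and all names and IPs here are illustrative):

```nginx
# Simplified sketch of controller-generated config; names and IPs are illustrative.
upstream app1-backend {
    # Pod IPs discovered from the Service's Endpoints
    server 10.0.1.15:5678;
    server 10.0.2.27:5678;
}

server {
    listen 80;
    server_name example.com;

    location /app1 {
        # Reverse-proxy matching requests to the upstream pods
        proxy_pass http://app1-backend;
    }
}
```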
Workflow of NGINX Ingress Controller:
- Ingress Resource Creation: A Kubernetes user creates an Ingress resource that defines how external traffic should be routed to services within the cluster.
- Controller Watches for Changes: The NGINX Ingress Controller pod constantly monitors the Kubernetes API server for changes to Ingress resources.
- Configuration Update: When a change is detected, the controller updates the NGINX configuration file with the new routing rules specified in the Ingress resource.
- NGINX Reload: The NGINX server within the controller pod is signaled to reload its configuration. This ensures that the new routing rules take effect without disrupting ongoing connections.
- Traffic Routing: NGINX now handles incoming traffic based on the updated configuration. Requests are routed to the appropriate backend services.
Let's Build
- Let's use AWS Cloud9 (because it's simple to use). Open a Cloud9 environment.
Install and Configure eksctl
Install eksctl, a command-line tool for creating and managing EKS clusters:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv -v /tmp/eksctl /usr/local/bin
# Verify the installation
eksctl version
# Enable eksctl bash-completion
eksctl completion bash >> ~/.bash_completion
. /etc/profile.d/bash_completion.sh
. ~/.bash_completion
# Enable kubectl bash-completion
kubectl completion bash >> ~/.bash_completion
. /etc/profile.d/bash_completion.sh
. ~/.bash_completion
Create an EKS Cluster
Use the following command to create an EKS cluster named my-eks-cluster in the us-east-1 region, with a managed node group of type t2.micro consisting of 2 nodes (adjust as needed):
eksctl create cluster \
  --name my-eks-cluster \
  --version 1.24 \
  --region us-east-1 \
  --nodegroup-name standard-workers \
  --node-type t2.micro \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --managed
Configure kubectl
aws eks --region us-east-1 update-kubeconfig --name my-eks-cluster
Verify the configuration to see the nodes in your EKS cluster:
kubectl get nodes
Deploy NGINX Ingress Controller
Install the NGINX Ingress Controller using Helm:
# Step 1: Deploy the NGINX Ingress Controller
kubectl create namespace nginx-ingress
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
  -n nginx-ingress \
  --set controller.replicaCount=2 \
  --set controller.nodeSelector."kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.admissionWebhooks.enabled=false
# Output
# NAME: nginx-ingress
# LAST DEPLOYED: Mon Sep 12 12:34:56 2023
# NAMESPACE: nginx-ingress
# STATUS: deployed
# REVISION: 1
# TEST SUITE: None

# Step 2: Verify the NGINX Ingress Controller deployment
kubectl get pods -n nginx-ingress
# Output
# NAME                                        READY   STATUS    RESTARTS   AGE
# nginx-ingress-controller-6f77d5bb6f-8cph9   1/1     Running   0          45s
# nginx-ingress-controller-6f77d5bb6f-q9t2l   1/1     Running   0          45s
Deploy Sample Application
Create a file named app.yaml with a sample deployment and service for testing:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: hashicorp/http-echo
        args:
        - "-text"
        - "Hello, this is App 1!"
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: app1-service
spec:
  selector:
    app: app1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678
Apply this manifest to create the deployment and service:
kubectl apply -f app.yaml
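The Ingress created in the next step also routes /app2 to an app2-service, so deploy a second app as well. The manifest below is a sketch mirroring app1 (same hashicorp/http-echo image, different text); save it as app2.yaml:

```yaml
# Mirrors app1; only the names and echo text differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: hashicorp/http-echo
        args:
        - "-text"
        - "Hello, this is App 2!"
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: app2-service
spec:
  selector:
    app: app2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678
```

Apply it with kubectl apply -f app2.yaml.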
Create an Ingress Resource
Create a file named ingress.yaml with the following content:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: complex-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/proxy-body-size: "16m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
A few notes: spec.ingressClassName replaces the deprecated kubernetes.io/ingress.class annotation, and any alb.ingress.kubernetes.io/* annotations belong to the AWS Load Balancer Controller, not NGINX, so they have no effect here. Avoid nginx.ingress.kubernetes.io/whitelist-source-range with private ranges if you intend to reach the apps from the public internet, since it would block external clients.
Apply this manifest to create the Ingress resource:
kubectl apply -f ingress.yaml
When installed on EKS, the NGINX Ingress Controller's Service of type LoadBalancer provisions an AWS load balancer under the hood; by default this is a Classic ELB, and it can be switched to a Network Load Balancer (NLB) via service annotations. To check which load balancer was created, open the EC2 Load Balancers console or use the AWS CLI:
# ALBs and NLBs
aws elbv2 describe-load-balancers
# Classic ELBs
aws elb describe-load-balancers
Update Security Groups and Subnets
Ensure that the security groups associated with your EKS nodes allow traffic on port 80. Additionally, verify that the subnets have proper route tables allowing internet access.
Access the Application
Add an entry to your local hosts file mapping example.com to your load balancer's IP (if the load balancer exposes only a DNS name, resolve it to an IP first):
echo "<LoadBalancer-IP> example.com" | sudo tee -a /etc/hosts
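As a concrete sketch, here is what the appended line looks like, using a placeholder address from the documentation IP range; substitute your real load balancer IP:

```shell
# Placeholder IP from the 203.0.113.0/24 documentation range; use your own.
LB_IP="203.0.113.10"
HOSTS_LINE="$LB_IP example.com"
echo "$HOSTS_LINE"
# Then append it for real with:
#   echo "$HOSTS_LINE" | sudo tee -a /etc/hosts
```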
Now, you can access http://example.com/app1 and http://example.com/app2 in your web browser. If everything is configured correctly, you should see each app's response (for example, "Hello, this is App 1!").
Remember to replace <LoadBalancer-IP> with the actual address of your load balancer. This setup demonstrates how to deploy an EKS cluster, install the NGINX Ingress Controller, and deploy a sample application with an Ingress resource for routing.
Conclusion: Mastering Ingress Control in AWS EKS
Effectively managing incoming traffic is key to a well-functioning Kubernetes cluster. With the NGINX Ingress Controller and either AWS ALB or NLB, you have powerful tools at your disposal. By following the steps outlined in this guide, you can confidently set up and optimize your Ingress Controller configuration in AWS EKS. Happy Ingress Controlling!