- Initial thoughts
- Helm chart Installation/update
- 1. Set NGINX as the default ingress controller
- 2. Set SSL/TLS termination on AWS load balancer
- 3. Use professional error pages
- 4. Redirect users' HTTP calls to the HTTPS port
- 5. Rewrite internal redirects from HTTP to HTTPS
- 6. Set a valid default certificate when TLS is terminated on NGINX Ingress Controller
- 7. Allow large file transfer
- 8. Autoscale the ingress controller
- 9. Stick user session to the same targeted pod
- 10. Have access to real client IP in applications
- 11. Set maintenance mode
- 12. Disable NGINX access logs for a particular service
- 13. Whitelist IP for a particular Ingress
- Wrapping up
- Further reading
Initial thoughts
Kubernetes is an open-source container orchestration platform used to manage and automate the deployment and scaling of containerized applications. It has gained popularity in recent years due to its ability to provide a consistent experience across different cloud providers and on-premises environments.
The NGINX ingress controller is a production‑grade ingress controller that runs NGINX Open Source in a Kubernetes environment. The daemon monitors Kubernetes ingress resources to discover requests for services that require ingress load balancing.
In this article, we will dig into the versatility and simplicity of this ingress controller through a number of common use cases. You will find some others in different articles (such as Kubernetes NGINX Ingress: 10 Useful Configuration Options), but none of them has both described and regrouped the ones below, even though they are widely used for web applications in production.
These use cases apply to multiple Cloud providers, at least AWS, GCP and OVHCloud, except when a specific Cloud provider is mentioned.
They are also fully compatible with each other, except when the architectures differ (for example, TLS termination on the load balancer versus termination on the NGINX pods).
We will augment this content with additional use cases as future experience demands.
Helm chart Installation/update
Everything in the YAML snippets below, except for ingress configuration, relates to configuring the NGINX ingress controller itself, including customizing its default configuration.
To begin, make sure your Helm distribution is aware of the chart using this command:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && helm repo update
After preparing or updating your custom nginx.helm.values.yml file, deploy or update the Helm release using this command:
helm -n system upgrade --install ngx ingress-nginx/ingress-nginx --version 4.3.0 --create-namespace -f nginx.helm.values.yml
Replace 4.3.0 with the latest version found on ArtifactHUB, and proceed according to your upgrade strategy.
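To find that latest version without leaving your terminal, you can also query the repository directly (a quick sketch; the exact output format may vary with your Helm version):
# list available chart versions, newest first
helm search repo ingress-nginx/ingress-nginx --versions | head -5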
1. Set NGINX as the default ingress controller
By default, you have to specify the class in each of your ingresses:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx
But if you have a single ingress controller in your cluster, just configure it to be the default:
nginx.helm.values.yml
controller:
  ingressClass:
    create: true # default
    setAsDefaultIngress: true
No more need for the ingressClassName field. Ever.
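The same ingress can then be declared without the class, as in this minimal sketch (host and service names are hypothetical):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
    - host: my.website.com # hypothetical host
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: webapp # hypothetical service
                port:
                  number: 80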
2. Set SSL/TLS termination on AWS load balancer
By default with Kubernetes incoming traffic, SSL/TLS termination has to be handled by each target application, one by one: another application means another TLS termination to handle, along with its certificate.
A simple yet powerful way of abstracting TLS handling is to terminate it on the load balancer, and use plain HTTP inside the cluster by default.
As a prerequisite, you have to request a public ACM certificate in AWS.
Once you have the certificate ARN, use it in the configuration below, under the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation:
nginx.helm.values.yml
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:94xxxxxxx:certificate/2c0c2512-a829-4dd5-bc06-b3yyyyy
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https" # if you don't specify this annotation, the controller creates a TLS listener for all the service ports
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
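If you don't have a certificate yet, a quick way to request one is the AWS CLI (a sketch with a hypothetical domain; the DNS validation records still have to be created afterwards):
# request a public certificate for your domain (returns the certificate ARN)
aws acm request-certificate \
  --domain-name "*.my-app.com" \
  --validation-method DNS \
  --region eu-west-1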
3. Use professional error pages
By default, the NGINX ingress controller gives you neutral yet boring error pages:
These can be replaced with polished, animated ones, such as this one from tarampampam's repository:
It has some nice side features, like automatic light/dark modes, and routing details that can be displayed for debugging purposes.
Examples from multiple themes are showcased here for everyone to choose from.
Once you have found your theme, configure your favorite ingress controller:
nginx.helm.values.yml
controller:
  config:
    custom-http-errors: 404,408,500,501,502,503,504,505
# Prepackaged default error pages from https://github.com/tarampampam/error-pages/wiki/Kubernetes-&-ingress-nginx
# multiple themes here: https://tarampampam.github.io/error-pages/
defaultBackend:
  enabled: true
  image:
    repository: ghcr.io/tarampampam/error-pages
    tag: 2.21 # latest as of 01/04/2023 here: https://github.com/tarampampam/error-pages/pkgs/container/error-pages
  extraEnvs:
    - name: TEMPLATE_NAME
      value: lost-in-space # one of: app-down, cats, connection, ghost, hacker-terminal, l7-dark, l7-light, lost-in-space, matrix, noise, shuffle
    - name: SHOW_DETAILS # Optional: enables the output of additional information on error pages
      value: "false"
4. Redirect users' HTTP calls to the HTTPS port
Once all your web routes are configured to handle SSL/TLS/HTTPS, HTTP routes have no reason to exist, and are even dangerous to keep, security-wise.
Instead of disabling the port, which can be annoying to your users, you can automatically redirect HTTP to HTTPS with this configuration:
nginx.helm.values.yml
controller:
  containerPort:
    http: 80
    https: 443
    tohttps: 2443 # from https://github.com/kubernetes/ingress-nginx/issues/8017
  service:
    enableHttp: true
    enableHttps: true
    targetPorts:
      http: tohttps # from https://github.com/kubernetes/ingress-nginx/issues/8017
      https: https
  # Will add custom configuration options to Nginx ConfigMap
  config:
    # from https://github.com/kubernetes/ingress-nginx/issues/8017
    http-snippet: |
      server {
        listen 2443;
        return 308 https://$host$request_uri;
      }
    use-forwarded-headers: "true" # from https://github.com/kubernetes/ingress-nginx/issues/1957
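A quick way to check the redirection once deployed (a sketch with a hypothetical host):
# plain HTTP should answer with a permanent redirect to HTTPS
curl -sI http://my.website.com/some/path | head -3
# expected: HTTP/1.1 308 Permanent Redirect
#           Location: https://my.website.com/some/path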
5. Rewrite internal redirects from HTTP to HTTPS
When you terminate TLS on the load balancer or on the ingress controller, applications are not aware that incoming calls used TLS: everything inside the cluster is plain HTTP. Hence, when an application needs to redirect you to another path inside the cluster, it might redirect you over HTTP, the same protocol it received.
For each ingress redirecting internally, apply this configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auth-server
  annotations:
    nginx.ingress.kubernetes.io/proxy-redirect-from: http://
    nginx.ingress.kubernetes.io/proxy-redirect-to: https://
spec:
  # [...]
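You can check the rewrite on any route known to redirect (a sketch with a hypothetical auth endpoint):
# the Location header should now start with https://
curl -sI https://auth.my-app.com/login | grep -i "^location"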
6. Set a valid default certificate when TLS is terminated on NGINX Ingress Controller
When you don't have the option to terminate TLS on the load balancer, the NGINX Ingress Controller can handle the TLS termination. It would be too long to detail here; if needed, you can find literature on the internet, such as kubernetes + ingress + cert-manager + letsencrypt = https, Installing an NGINX Ingress controller with a Let's Encrypt certificate manager, or How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.
When this scenario is in place, each ingress route gets its own certificate; it can be the same certificate, and even the same secret if the services are in the same namespace.
But the default NGINX certificate, for non-configured routes, will still be the NGINX auto-signed certificate.
To fix that, you can reuse a matching wildcard certificate that you already have somewhere in the cluster, generated using Cert-Manager. NGINX ingress controller can be configured to target it, even from another namespace:
nginx.helm.values.yml
controller:
  extraArgs:
    default-ssl-certificate: "my-namespace/my-certificate"
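For reference, such a wildcard certificate could be declared with Cert-Manager like this (a sketch with hypothetical names and issuer; wildcards require an issuer with a DNS-01 solver):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-certificate
  namespace: my-namespace
spec:
  secretName: my-certificate # secret referenced by default-ssl-certificate above
  dnsNames:
    - "*.my-app.com" # hypothetical wildcard domain
  issuerRef:
    name: letsencrypt-dns # hypothetical ClusterIssuer with a DNS-01 solver
    kind: ClusterIssuer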
7. Allow large file transfer
By default, the NGINX ingress controller allows a maximum request body size of 1 MB.
For each ingress route where you need more, apply this configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
[...]
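If most of your services need large transfers, you can instead raise the limit globally in the controller configuration (a sketch; the proxy-body-size ConfigMap key applies to all ingresses unless overridden per ingress):
nginx.helm.values.yml
controller:
  config:
    proxy-body-size: 100m # global default, still overridable per ingress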
8. Autoscale the ingress controller
Eventually, the traffic of your web application will grow, and the initial ingress controller configuration may become insufficient.
One easy way to autoscale is to use a DaemonSet, with one pod on each node:
nginx.helm.values.yml
controller:
  kind: DaemonSet # Deployment or DaemonSet
Another way is autoscaling on NGINX CPU and memory:
nginx.helm.values.yml
controller:
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 3
    targetCPUUtilizationPercentage: 200
    targetMemoryUtilizationPercentage: 200
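Note that these percentages are relative to the controller's resource requests, so the autoscaler only behaves predictably if requests are set (a sketch with hypothetical values):
nginx.helm.values.yml
controller:
  resources:
    requests:
      cpu: 100m # hypothetical baseline; a 200% CPU target then means ~200m per pod
      memory: 128Mi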
If this is not sufficient, gather your incoming connection metrics and autoscale based on them. This requires more complex operations, so we simply refer you to the excellent article Autoscaling Ingress controllers in Kubernetes by Daniele Polencic.
9. Stick user session to the same targeted pod
Applications in a Kubernetes cluster should be mostly stateless, but often there is still an ephemeral session depending on the pod the user is reaching. If the user ends up on another pod, the session can be disrupted. In this case we need a "sticky session".
Sticky sessions are enabled on the ingress side:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    # sticky session, from documentation: https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent" # change to "balanced" (default) to redistribute some sessions when scaling pods
    nginx.ingress.kubernetes.io/session-cookie-name: "name-distinguishing-services"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800" # in seconds, equivalent to 48h
[...]
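You can confirm that the affinity cookie is emitted on the first response (a sketch with a hypothetical host):
# the first response should set the affinity cookie defined above
curl -sI https://my.website.com/ | grep -i "^set-cookie"
# expected: Set-Cookie: name-distinguishing-services=...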
10. Have access to real client IP in applications
By default with managed load balancers, the client IP visible to your application is not the real client's IP.
You can have it defined in the X-Real-Ip request header by setting this NGINX ingress controller configuration:
For AWS:
nginx.helm.values.yml
controller:
  service:
    externalTrafficPolicy: "Local"
Or for OVHCloud, from official documentation:
nginx.helm.values.yml
controller:
  service:
    externalTrafficPolicy: "Local"
    annotations:
      service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v2"
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    proxy-real-ip-cidr: "xx.yy.zz.aa/nn"
This will be effective on Helm install, but not always on upgrade, depending on the status of your release; sometimes you have to edit the NGINX LoadBalancer service to define the value in spec.externalTrafficPolicy, and then restart the NGINX pods so they use the config part (targeting the ConfigMap).
More information in the Kubernetes documentation.
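If you hit that upgrade case, a manual fix looks like this (a sketch; the namespace and service name here assume the release installed earlier with helm -n system ... ngx, adjust to your own release):
# force the traffic policy on the existing LoadBalancer service
kubectl -n system patch service ngx-ingress-nginx-controller \
  --patch '{"spec": {"externalTrafficPolicy": "Local"}}'
# restart the controller pods so they pick up the ConfigMap changes
# (use "daemonset" instead of "deployment" if you chose that kind)
kubectl -n system rollout restart deployment ngx-ingress-nginx-controller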
11. Set maintenance mode
You may have already wondered how you could let your users know that you are currently deploying, to help them patiently wait for your website to be available again.
There are multiple lightweight ways to do that, and some of them involve NGINX ingress controller.
DevOps Directive has done an awesome job in this field, described in the article Kubernetes Maintenance Page. The solution uses a dedicated deployment plus a service, without any custom Docker image, that you can target with any ingress during maintenance.
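As an illustration of the idea (a minimal sketch, not DevOps Directive's exact manifests: a stock nginx image serving a static page mounted from a ConfigMap):
apiVersion: v1
kind: ConfigMap
metadata:
  name: maintenance-page
data:
  index.html: |
    <html><body><h1>Maintenance in progress, be right back!</h1></body></html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: maintenance
spec:
  replicas: 1
  selector:
    matchLabels:
      app: maintenance
  template:
    metadata:
      labels:
        app: maintenance
    spec:
      containers:
        - name: nginx
          image: nginx:stable # stock image, no custom build
          ports:
            - containerPort: 80
          volumeMounts:
            - name: page
              mountPath: /usr/share/nginx/html
      volumes:
        - name: page
          configMap:
            name: maintenance-page
---
apiVersion: v1
kind: Service
metadata:
  name: maintenance
spec:
  selector:
    app: maintenance
  ports:
    - port: 80
      targetPort: 80
During a deployment, point the relevant ingress backend to the maintenance service, then switch it back afterwards.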
12. Disable NGINX access logs for a particular service
In cases where a massively used ingress is drowning out your NGINX logs, there's a solution. This often crops up in development environments, especially with a high-frequency tool like an APM server, which triggers frequent calls even during idle user moments.
To combat this, leverage the nginx.ingress.kubernetes.io/enable-access-log annotation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apm-server
  labels:
    app: apm-server
  annotations:
    nginx.ingress.kubernetes.io/enable-access-log: "false"
spec:
  rules:
    - host: apm.my-app.com
13. Whitelist IP for a particular Ingress
To restrict access to a particular Ingress by source IP, you can set the NGINX whitelist-source-range annotation with IPs and/or CIDR ranges. For example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-direct
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: 10.0.0.0/24,172.10.0.1
spec:
  rules:
    - host: my.website.com
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: webapp
                port:
                  number: 80
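Clients outside the listed ranges receive an HTTP 403, which you can check from a non-whitelisted machine (a sketch with the host above):
# from a machine outside 10.0.0.0/24 and not 172.10.0.1
curl -sI https://my.website.com/ | head -1
# expected: HTTP/2 403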
Wrapping up
We have covered multiple NGINX ingress controller use cases for web applications, covering a large variety of situations.
If you think one or two others are common and missing here, don't hesitate to comment in the section below 🤓
Illustrations generated locally by Automatic1111 using Lyriel model
Further reading
☸️ Kubernetes: A Convenient Variable Substitution Mechanism for Kustomize
Benoit COUETIL 💫 for Zenika ・ Aug 4
☸️ Why Managed Kubernetes is a Viable Solution for Even Modest but Actively Developed Applications
Benoit COUETIL 💫 for Zenika ・ Jun 5
☸️ Kubernetes: From Your Docker-Compose File to a Cluster with Kompose
Benoit COUETIL 💫 for Zenika ・ Mar 9
☸️ Kubernetes: A Pragmatic Kubectl Aliases Collection
Benoit COUETIL 💫 for Zenika ・ Jan 6
☸️ Web Application on Kubernetes: A Tutorial to Observability with the Elastic Stack
Benoit COUETIL 💫 for Zenika ・ Nov 27 '23
☸️ Kubernetes: Awesome Maintained Links You Will Keep Using Next Year
Benoit COUETIL 💫 for Zenika ・ Sep 4 '23
☸️ Managed Kubernetes: Our Dev is on AWS, Our Prod is on OVHCloud
Benoit COUETIL 💫 for Zenika ・ Jul 1 '23
☸️ How to Deploy a Secured OVHCloud Managed Kubernetes Cluster Using Terraform in 2023
Benoit COUETIL 💫 for Zenika ・ May 5 '23
☸️ How to Deploy a Cost-Efficient AWS/EKS Kubernetes Cluster Using Terraform in 2023
Benoit COUETIL 💫 for Zenika ・ Jun 3 '23
☸️ FinOps EKS: 10 Tips to Reduce the Bill up to 90% on AWS Managed Kubernetes Clusters
Benoit COUETIL 💫 for Zenika ・ Apr 20 '21
This article was enhanced with the assistance of an AI language model to ensure clarity and accuracy in the content, as English is not my native language.
Top comments (11)
Thanks for the blogpost!
Concerning the 1st point, "Set NGINX as the default ingress controller": the kubernetes.io/ingress.class annotation is deprecated since Kubernetes 1.18, prefer using ingressClassName.
One tip: if you love logs and metrics dashboards, you can change the default logs configuration, with JSON support and geo-ip + MaxMind, and enable metrics.
Thank you Remi for your valuable insights 🤗
I will update with the ingressClassName.
For the tip, what backend do you have in mind for logs/metrics? ELK? I have yet to test that part 🤓
On my side, I'm using Loki & Prometheus ;)
You can get some dashboards from here; one still needs to be built for the logs (you can still explore them).
Thanks 🙏
Article updated with ingressClassName ✌️
Thanks for sharing your experience ;)
Thanks, I appreciate your feedback 🤗
Don't hesitate to share some use cases if you think they are missing and deserve a place in the list 😉
Many thanks to @K8SArchitect for spreading this article on Twitter yesterday 🤗
So if I want to deploy to a registered domain, is this the approach to use?
Do you already have a Kubernetes cluster, or are you trying to evaluate if Kubernetes + NGINX is the right approach?
Can you give more info about your context?
I want to deploy a Java backend and an Angular frontend; I already have the NGINX configuration set up in my Dockerfile. I have a registered domain to use, and I want to deploy to AWS using Terraform + Jenkins + Docker + Kubernetes. So I want to know if the approach in your tutorial is applicable to my task?
If you have a domain and a Kubernetes cluster, yes, it is applicable. The fact that you have an NGINX conf in your Dockerfile (for the frontend) may not be relevant: when using the NGINX Ingress Controller, we generally remove NGINX specificities from inside the application.