In the two previous articles, we discovered how to build and run Keycloak with a Distroless base image in a Kubernetes cluster. The configuration seen so far was fine for a single instance, but the clustering capabilities of Keycloak were not used, which can cause some problems.
Keycloak has a built-in clustering mode, based on WildFly & Infinispan. To activate it, some start-up scripts use environment values to set everything up for you… and of course, those scripts are bash based, so they are not compatible with our version of Keycloak. Here, we will see how to configure this and deploy it to Kubernetes.
standalone-ha.xml extraction
We will use the same strategy as before to generate the standalone-ha.xml: run the official image with the parameters we want and extract the file with the docker cp command. Let's see:
# In the first shell
# Creation of a docker network
first-shell$ docker network create keycloak-network
4da77163731b584bef2c6d0b00386b9d62e31fa216204c6c6795f66e109ba1a6
# Launching PostgreSQL linked to the network previously created
first-shell$ docker run --rm -d --name postgres --net keycloak-network \
-e POSTGRES_DB=keycloak \
-e POSTGRES_USER=keycloak \
-e POSTGRES_PASSWORD=password postgres
229816da42707e772542f1b089c616a2333a6fbe1aea2be7efe658d6f2c934a1
first-shell$ docker run -it --rm --name keycloak \
-e DB_ADDR=postgres \
-e DB_USER=keycloak \
-e DB_PASSWORD=password \
-e KEYCLOAK_USER=foo \
-e KEYCLOAK_PASSWORD=bar \
-e JGROUPS_DISCOVERY_PROTOCOL="dns.DNS_PING" \
-e JGROUPS_TRANSPORT_STACK=tcp \
-e JGROUPS_DISCOVERY_PROPERTIES="dns_query=keycloak-headless" \
--net keycloak-network jboss/keycloak:13.0.1
=========================================================================
Using PostgreSQL database
=========================================================================
19:15:45,322 INFO [org.jboss.modules] (CLI command executor) JBoss Modules version 1.11.0.Final
19:15:45,389 INFO [org.jboss.msc] (CLI command executor) JBoss MSC version 1.4.12.Final
19:15:45,399 INFO [org.jboss.threads] (CLI command executor) JBoss Threads version 2.4.0.Final
19:15:45,542 INFO [org.jboss.as] (MSC service thread 1-2) WFLYSRV0049: Keycloak 13.0.1 (WildFly Core 15.0.1.Final) starting
...
19:16:23,596 INFO [org.jboss.as.server] (ServerService Thread Pool -- 46) WFLYSRV0010: Deployed "keycloak-server.war" (runtime-name : "keycloak-server.war")
19:16:23,671 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
19:16:23,679 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: Keycloak 13.0.1 (WildFly Core 15.0.1.Final) started in 25820ms - Started 692 of 978 services (686 services are lazy, passive or on-demand)
19:16:23,685 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
19:16:23,686 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
You can see we added some extra parameters for the clustering mode, based on JGroups. Some details are in the official Docker documentation, but you will find more in the Keycloak server installation documentation.
The simplest solution to set up cluster mode in a Kubernetes environment is to use DNS_PING over TCP. This is why we defined the following environment values in the previous shell example:
- JGROUPS_DISCOVERY_PROTOCOL="dns.DNS_PING" to activate DNS_PING.
- JGROUPS_TRANSPORT_STACK=tcp to activate clustering over TCP.
- JGROUPS_DISCOVERY_PROPERTIES="dns_query=keycloak-headless" to provide a way to find the other instances (we will describe it in the next paragraph).
Then, in another shell, we will steal the standalone-ha.xml again.
NOTE: In the previous article, we were targeting standalone.xml; the HA version contains a more robust configuration for our clustered use case.
second-shell$ docker cp keycloak:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml .
second-shell$ ls
standalone-ha.xml
# We can now stop the keycloak container
second-shell$ docker stop keycloak
keycloak
second-shell$
NOTE: If you want to set up other parameters, you can use this method for almost everything 🤩.
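For example, the logging configuration (the same file later referenced by the -Dlogging.configuration start-up argument) can be extracted with the same docker cp trick, as long as the container is still running. A minimal sketch:
# Hypothetical example, to run before stopping the keycloak container
second-shell$ docker cp keycloak:/opt/jboss/keycloak/standalone/configuration/logging.properties .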
If we look into the standalone-ha.xml file, we can see an important configuration for our clustering mode:
<!-- standalone-ha.xml -->
<subsystem xmlns="urn:jboss:domain:jgroups:8.0">
    <channels default="ee">
        <channel name="ee" stack="tcp" cluster="ejb"/>
    </channels>
    <stacks>
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp"/>
            <protocol type="dns.DNS_PING">
                <property name="dns_query">keycloak-headless</property>
            </protocol>
            <protocol type="MERGE3"/>
            <socket-protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
            <protocol type="FD_ALL"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="MFC"/>
            <protocol type="FRAG3"/>
        </stack>
    </stacks>
</subsystem>
This file configures Keycloak to find other instances through the DNS_PING protocol. In fact, Keycloak will simply issue a DNS request to find the IPs behind the domain name keycloak-headless… easy as pie!
Kubernetes deployment
Keycloak is ready for clustering mode, but we have to adapt our deployment to this specific configuration, where each instance can communicate with the others.
The first modification is at the deployment level, to expose some extra ports dedicated to instance-to-instance communication:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
spec:
  template:
    spec:
      containers:
        - name: keycloak
          ports:
            # Standard HTTP port used by keycloak
            - containerPort: 8080
              protocol: TCP
            # Port used by Jgroups to communicate
            - containerPort: 7600
              protocol: TCP
To work well, JGroups has to be bound to the Pod IP. In the Kubernetes world, we usually don't know the Pod IP in advance, so we have to inject it into the deployment and use it in the args section, like below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
spec:
  template:
    spec:
      containers:
        - name: keycloak
          args:
            - "-D[Standalone]"
            - "-server"
            - "-Xms64m"
            - "-Xmx512m"
            - "-XX:MetaspaceSize=96M"
            - "-XX:MaxMetaspaceSize=256m"
            - "-Djava.net.preferIPv4Stack=true"
            - "-Djboss.modules.system.pkgs=org.jboss.byteman"
            - "-Djava.awt.headless=true"
            - "--add-exports=java.base/sun.nio.ch=ALL-UNNAMED"
            - "--add-exports=jdk.unsupported/sun.misc=ALL-UNNAMED"
            - "--add-exports=jdk.unsupported/sun.reflect=ALL-UNNAMED"
            - "-Dorg.jboss.boot.log.file=/opt/jboss/keycloak/standalone/log/server.log"
            - "-Dlogging.configuration=file:/opt/jboss/keycloak/standalone/configuration/logging.properties"
            - "-jar"
            - "/opt/jboss/keycloak/jboss-modules.jar"
            - "-mp"
            - "/opt/jboss/keycloak/modules"
            - "org.jboss.as.standalone"
            - "-Djboss.home.dir=/opt/jboss/keycloak"
            - "-Djboss.server.base.dir=/opt/jboss/keycloak/standalone"
            # Note we have changed the command here to use the standalone-ha.xml file
            - "-c=standalone-ha.xml"
            - "-b=0.0.0.0"
            - "-bprivate=0.0.0.0"
            - "-bmanagement=0.0.0.0"
            # Thanks to the Kubernetes interpolation, we are able to launch the app
            # with a custom parameter for each pod.
            - '-Djgroups.bind_addr=$(HOST_IP)'
          env:
            # the HOST_IP environment value is populated by Kubernetes with
            # the current Pod IP coming from `status.podIP`.
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
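If you want to see which IP will be injected for a given pod, you can read it directly from the pod status once the deployment is running (the pod name below is only an example, replace it with one of yours):
$ kubectl get pod keycloak-7f5f7bd8c6-7s2br -o jsonpath='{.status.podIP}'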
With those modifications, Keycloak will be able to work in cluster mode… but it won't be able to find any other instances 😔. We have to add a way to discover other instances 💇♀️!
Headless Service to the rescue!
In Kubernetes, we usually use a Service to expose one domain name with multiple instances of an application behind it. In our case, we want to be able to fetch every IP behind a domain name, and this is exactly what a Headless Service is for!
apiVersion: v1
kind: Service
metadata:
  name: keycloak-headless
spec:
  # Important parameter to discover every instance even before its complete startup
  publishNotReadyAddresses: true
  clusterIP: None
  ports:
    - name: ping
      port: 7600
      targetPort: 7600
  selector:
    app: keycloak
Thanks to this, every DNS query made by JGroups on the domain keycloak-headless will return the complete list of Keycloak pod IPs in the namespace!
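You can check this behaviour yourself with a throw-away pod: resolving the headless Service returns one record per Keycloak pod instead of a single cluster IP. A minimal sketch, assuming a busybox image is available and running the test pod in the keycloak namespace so the short name resolves:
$ kubectl run -n keycloak -it --rm dns-test --image=busybox --restart=Never -- nslookup keycloak-headless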
Demo time!
We will deploy and scale our Keycloak application and see clustering mode in action. The kustomization.yaml is similar to the version from the second part of this series:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: keycloak
resources:
  - keycloak.yaml
  - database.yaml
configMapGenerator:
  - name: keycloak
    files:
      - standalone-ha.xml
  - name: database
    literals:
      - user=keycloak
      - name=keycloak
secretGenerator:
  - name: database
    literals:
      - password=sPCwZjuq8CMvrBn7
When we deploy it, we will have the following result:
$ kubectl apply -k .
configmap/database-56h9f7gfdh created
configmap/keycloak-k97c6gkct6 created
secret/database-8g8gk22d26 created
service/database created
service/keycloak-headless created
service/keycloak created
deployment.apps/database created
deployment.apps/keycloak created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
database-5dcc69b7b6-m48h9 1/1 Running 0 7s
keycloak-7f5f7bd8c6-7s2br 0/1 Running 0 7s
# After few seconds…
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
database-5dcc69b7b6-m48h9 1/1 Running 0 67s
keycloak-7f5f7bd8c6-7s2br 1/1 Running 0 67s
If we look at the Keycloak logs, everything looks good. We can scale it up and see if clustering mode does its job:
$ kubectl scale deploy/keycloak --replicas=2
deployment.apps/keycloak scaled
Now, in the log of the previously running instance, we can see the following messages:
$ kubectl logs keycloak-7f5f7bd8c6-7s2br
20:05:51,480 INFO [org.infinispan.CLUSTER] (thread-19,ejb,keycloak-7f5f7bd8c6-7s2br) [Context=actionTokens] ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 10
20:05:51,480 INFO [org.infinispan.CLUSTER] (thread-27,ejb,keycloak-7f5f7bd8c6-7s2br) [Context=offlineSessions] ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 10
20:05:51,480 INFO [org.infinispan.CLUSTER] (thread-28,ejb,keycloak-7f5f7bd8c6-7s2br) [Context=authenticationSessions] ISPN100010: Finished rebalance with members [keycloak-7f5f7bd8c6-7s2br, keycloak-7f5f7bd8c6-dbfxh], topology id 11
20:05:51,482 INFO [org.infinispan.CLUSTER] (thread-12,ejb,keycloak-7f5f7bd8c6-7s2br) [Context=offlineClientSessions] ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 10
20:05:51,471 INFO [org.infinispan.CLUSTER] (thread-25,ejb,keycloak-7f5f7bd8c6-7s2br) [Context=clientSessions] ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 9
20:05:51,486 INFO [org.infinispan.CLUSTER] (non-blocking-thread--p6-t2) [Context=sessions] ISPN100010: Finished rebalance with members [keycloak-7f5f7bd8c6-7s2br, keycloak-7f5f7bd8c6-dbfxh], topology id 11
20:05:51,493 INFO [org.infinispan.CLUSTER] (non-blocking-thread--p6-t2) [Context=offlineSessions] ISPN100010: Finished rebalance with members [keycloak-7f5f7bd8c6-7s2br, keycloak-7f5f7bd8c6-dbfxh], topology id 11
20:05:51,493 INFO [org.infinispan.CLUSTER] (non-blocking-thread--p6-t1) [Context=loginFailures] ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 10
20:05:51,499 INFO [org.infinispan.CLUSTER] (non-blocking-thread--p6-t1) [Context=actionTokens] ISPN100010: Finished rebalance with members [keycloak-7f5f7bd8c6-7s2br, keycloak-7f5f7bd8c6-dbfxh], topology id 11
20:05:51,503 INFO [org.infinispan.CLUSTER] (thread-28,ejb,keycloak-7f5f7bd8c6-7s2br) [Context=offlineClientSessions] ISPN100010: Finished rebalance with members [keycloak-7f5f7bd8c6-7s2br, keycloak-7f5f7bd8c6-dbfxh], topology id 11
20:05:51,506 INFO [org.infinispan.CLUSTER] (non-blocking-thread--p6-t2) [Context=clientSessions] ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 10
20:05:51,512 INFO [org.infinispan.CLUSTER] (non-blocking-thread--p6-t2) [Context=loginFailures] ISPN100010: Finished rebalance with members [keycloak-7f5f7bd8c6-7s2br, keycloak-7f5f7bd8c6-dbfxh], topology id 11
20:05:51,522 INFO [org.infinispan.CLUSTER] (thread-28,ejb,keycloak-7f5f7bd8c6-7s2br) [Context=clientSessions] ISPN100010: Finished rebalance with members [keycloak-7f5f7bd8c6-7s2br, keycloak-7f5f7bd8c6-dbfxh], topology id 11
We can see the successful operations made by Infinispan to communicate between instances. In the logs, we find the name of our current pod, keycloak-7f5f7bd8c6-7s2br, and the name of the new one created through the scale command, keycloak-7f5f7bd8c6-dbfxh. If we scale back down to 1 instance, new logs will appear:
$ kubectl logs keycloak-7f5f7bd8c6-7s2br
20:10:28,787 INFO [org.infinispan.CLUSTER] (thread-34,ejb,keycloak-7f5f7bd8c6-7s2br) ISPN100001: Node keycloak-7f5f7bd8c6-dbfxh left the cluster
20:10:28,790 INFO [org.infinispan.CLUSTER] (thread-34,ejb,keycloak-7f5f7bd8c6-7s2br) ISPN000094: Received new cluster view for channel ejb: [keycloak-7f5f7bd8c6-7s2br|4] (1) [keycloak-7f5f7bd8c6-7s2br]
20:10:28,791 INFO [org.infinispan.CLUSTER] (thread-34,ejb,keycloak-7f5f7bd8c6-7s2br) ISPN100001: Node keycloak-7f5f7bd8c6-dbfxh left the cluster
And Voila!
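On a bigger cluster, a convenient way to follow these membership changes is to filter the cluster-view messages seen above (ISPN000094 and ISPN100001) straight from the logs, for example:
$ kubectl logs deploy/keycloak | grep -E "ISPN000094|ISPN100001"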
Conclusion
This ends this 3-part series on Keycloak, Distroless and Kubernetes. You are now able to deploy a rock-solid, less vulnerable and scalable instance of Keycloak in your own cluster 🚀.
I hope you enjoyed it as much as I enjoyed writing this article and sharing this experience about Keycloak configuration. You can find all the sample files from this article in this GitLab repository: davinkevin/keycloak-distroless.