In this article, we will see how to bind a service to a micro-service using the odo development tool.
We will first see how the Service Binding Operator can be used to bind to a PostgreSQL instance started from a standalone Deployment, then to a PostgreSQL instance started from an Operator.
Next, we will see how to start the Operator-backed PostgreSQL service from odo, and how to bind our micro-service to this PostgreSQL service using odo.
## The Service Binding Operator
The Service Binding Operator is an Operator that helps the developer get the values of the various parameters exposed by a service.
For example, when you have a PostgreSQL service deployed, you need to know its host, database name, user and password to work with the service. If the service declares which information is to be exposed, the Service Binding Operator can collect the exposed values and provide them to your micro-service.
## Installing the Service Binding Operator
You can install this Operator with the help of the Operator Lifecycle Manager (OLM). First, install the manager, if necessary, with the command:

```shell
$ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.18.3/install.sh | bash -s v0.18.3
```
Then, install the Service Binding Operator with the command:

```shell
$ kubectl create -f https://operatorhub.io/install/service-binding-operator.yaml
```
## Deploying a standalone PostgreSQL instance
Let's deploy a single PostgreSQL instance using the official postgres image, available at https://hub.docker.com/_/postgres.
We can deploy an instance defining our own database name, user and password with the following manifest:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgres
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: postgres
        name: postgres
        env:
        - name: POSTGRES_PASSWORD
          value: a-super-secret
        - name: POSTGRES_USER
          value: user1
        - name: POSTGRES_DB
          value: db1
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: pgdata
      volumes:
      - name: pgdata
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-svc
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
```
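Other pods in the same namespace can reach this instance through the Service's DNS name, postgres-svc, on port 5432. As an illustration only (this helper is not part of the article's code), here is a sketch of how a client could assemble a PostgreSQL connection URL from the values defined in the manifests above:

```javascript
// Sketch: build a PostgreSQL connection URL from the values defined
// in the Deployment and Service manifests above. The helper name
// buildPostgresUrl is hypothetical, used only for illustration.
function buildPostgresUrl({ host, port, database, user, password }) {
  // Standard form: postgresql://user:password@host:port/database
  return `postgresql://${encodeURIComponent(user)}:${encodeURIComponent(password)}@${host}:${port}/${database}`;
}

// Values taken from the manifests above
const url = buildPostgresUrl({
  host: 'postgres-svc', // the Service name, resolvable inside the cluster
  port: 5432,
  database: 'db1',
  user: 'user1',
  password: 'a-super-secret',
});

console.log(url);
// postgresql://user1:a-super-secret@postgres-svc:5432/db1
```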
When the service is running, you can run another container, ready to connect to it using psql, with the following manifest:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client
        image: postgres
        command: ["bash", "-c", "sleep $((10**10))"]
        env:
        - name: PGPASSWORD
          value: a-super-secret
        - name: PGHOST
          value: postgres-svc
        - name: PGDATABASE
          value: db1
        - name: PGUSER
          value: user1
```
Note that we have defined the environment variables supported by psql to specify the host, database name, user name and password.
You can finally connect to the database with the command:

```shell
$ kubectl exec -it deployment/client -- psql
```

The environment variables defined in the manifest for the container will be used by psql to connect to the desired service.
With this method, however, the developer needs to know the values of these different parameters when running the client. Let's see how these variables can be automatically injected into the pod, using the Service Binding Operator.
## Annotating the service resource
The first step is to annotate the postgres Deployment to indicate which parameters should be exposed:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    service.binding/pguser: path={.spec.template.spec.containers[0].env[?(@.name=="POSTGRES_USER")].value}
    service.binding/pgpassword: path={.spec.template.spec.containers[0].env[?(@.name=="POSTGRES_PASSWORD")].value}
    service.binding/pgdatabase: path={.spec.template.spec.containers[0].env[?(@.name=="POSTGRES_DB")].value}
  labels:
    app: postgres
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: postgres
        name: postgres
        env:
        - name: POSTGRES_PASSWORD
          value: a-super-secret
        - name: POSTGRES_USER
          value: user1
        - name: POSTGRES_DB
          value: db1
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: pgdata
      volumes:
      - name: pgdata
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-svc
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
```
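The annotation values are JSONPath expressions: for instance, the pguser annotation selects, in the first container, the env entry named POSTGRES_USER and extracts its value. As a rough illustration of that selection logic (this is not the Service Binding Operator's actual code), the equivalent lookup could be written as:

```javascript
// Rough sketch of the selection performed by a JSONPath expression like
// {.spec.template.spec.containers[0].env[?(@.name=="POSTGRES_USER")].value}
// This is NOT the operator's implementation, only an illustration.
function extractEnvValue(deployment, envName) {
  const container = deployment.spec.template.spec.containers[0];
  const entry = container.env.find((e) => e.name === envName);
  return entry ? entry.value : undefined;
}

// Minimal stand-in for the annotated postgres Deployment above
const deployment = {
  spec: {
    template: {
      spec: {
        containers: [
          {
            name: 'postgres',
            env: [
              { name: 'POSTGRES_PASSWORD', value: 'a-super-secret' },
              { name: 'POSTGRES_USER', value: 'user1' },
              { name: 'POSTGRES_DB', value: 'db1' },
            ],
          },
        ],
      },
    },
  },
};

console.log(extractEnvValue(deployment, 'POSTGRES_USER')); // user1
```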
## Linking to the annotated service
Then, you can create a client2 deployment, which does not define the database name, user or password:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client2
spec:
  selector:
    matchLabels:
      app: client2
  template:
    metadata:
      labels:
        app: client2
    spec:
      containers:
      - name: client2
        image: postgres
        command: ["bash", "-c", "sleep $((10**10))"]
        env:
        - name: PGHOST
          value: postgres-svc
```
Finally, you can create a ServiceBinding resource to define a link between the postgres service and the client2 deployment:
```yaml
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: binding-request
spec:
  bindAsFiles: false
  namingStrategy: '{{ .name | upper }}'
  application:
    name: client2
    group: apps
    version: v1
    resource: deployments
  services:
  - group: apps
    version: v1
    kind: Deployment
    name: postgres
```
By applying this manifest, the client2 pod will be restarted, and environment variables will be injected through a secretRef by the Service Binding Operator:
```shell
$ kubectl get deployment client2 -o yaml
[...]
    spec:
      containers:
      - command:
        - bash
        - -c
        - sleep $((10**10))
        env:
        - name: PGHOST
          value: postgres-svc
        envFrom:
        - secretRef:
            name: binding-request-703eda68

$ kubectl get secret binding-request-703eda68 -o yaml
apiVersion: v1
kind: Secret
metadata:
  name: binding-request-703eda68
data:
  PGDATABASE: ZGIx
  PGPASSWORD: YS1zdXBlci1zZWNyZXQ=
  PGUSER: dXNlcjE=
type: Opaque
```
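As usual for Kubernetes Secrets, the values in the data field are base64-encoded. A quick sketch of decoding the values shown above (here in Node.js, outside the cluster):

```javascript
// Kubernetes Secret data is base64-encoded; decode the values
// from the binding-request Secret shown above.
const data = {
  PGDATABASE: 'ZGIx',
  PGPASSWORD: 'YS1zdXBlci1zZWNyZXQ=',
  PGUSER: 'dXNlcjE=',
};

const decoded = Object.fromEntries(
  Object.entries(data).map(([key, value]) => [
    key,
    Buffer.from(value, 'base64').toString('utf-8'),
  ])
);

console.log(decoded);
// { PGDATABASE: 'db1', PGPASSWORD: 'a-super-secret', PGUSER: 'user1' }
```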
Now, you can connect to the service from the client2 deployment without needing to know the credentials.
## Deploying an Operator-backed PostgreSQL service
Operator-backed services are services managed by a Kubernetes Operator.
You can find such Operator-backed services on https://operatorhub.io/. For this example, we won't use OperatorHub, but will manually deploy the operator found at https://github.com/operator-backing-service-samples/postgresql-operator.
To deploy the PostgreSQL operator using the OLM framework, we first need to declare the catalog containing the Operator, then subscribe to this specific Operator:
```shell
kubectl apply -f - << EOD
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: sample-db-operators
  namespace: olm
spec:
  sourceType: grpc
  image: quay.io/redhat-developer/sample-db-operators-olm:v1
  displayName: Sample DB Operators
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: pg
  namespace: operators
spec:
  channel: stable
  name: db-operators
  source: sample-db-operators
  sourceNamespace: olm
  installPlanApproval: Automatic
EOD
```
## Examining the annotations on the Database custom resource
The PostgreSQL operator comes with a custom resource definition (CRD): Database. Let's examine the definition of this resource with the command:
```shell
$ kubectl get crd databases.postgresql.baiju.dev -o yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.postgresql.baiju.dev
  annotations:
    service.binding/db.host: path={.status.dbConfigMap},objectType=ConfigMap
    service.binding/db.name: path={.status.dbConfigMap},objectType=ConfigMap
    service.binding/db.password: path={.status.dbConfigMap},objectType=ConfigMap
    service.binding/db.port: path={.status.dbConfigMap},objectType=ConfigMap
    service.binding/db.user: path={.status.dbConfigMap},objectType=ConfigMap
    service.binding/dbConnectionIP: path={.status.dbConnectionIP}
    service.binding/dbConnectionPort: path={.status.dbConnectionPort}
    service.binding/dbName: path={.spec.dbName}
    service.binding/password: path={.status.dbCredentials},objectType=Secret
    service.binding/user: path={.status.dbCredentials},objectType=Secret
[...]
```
We can see that the CRD carries a list of service.binding annotations, which will be used by the Service Binding Operator to bind instances of this resource to micro-services.
Note that, earlier in this article, the annotations were placed on the Deployment instance itself; here, they are placed on the definition of the Database resource (if you are an object-oriented developer, think of classes versus instances).
## Creating a Database instance with odo
The odo tool can create instances of Operator-backed services.
You can get the list of available Operators on your cluster with the command:
```shell
$ odo catalog list services
Services available through Operators
NAME                              CRDs
postgresql-operator.v0.0.9        Database
service-binding-operator.v0.9.1   ServiceBinding, ServiceBinding
```
Note that this command displays the list of ClusterServiceVersions resources of the current namespace that are in the Succeeded phase. Reaching the Succeeded phase can take a few minutes, so please be patient; you can check the phase with the command:
```shell
$ kubectl get csv
NAME                              DISPLAY                    VERSION   REPLACES                          PHASE
postgresql-operator.v0.0.9        PostgreSQL Database        0.0.9                                       Succeeded
service-binding-operator.v0.9.1   Service Binding Operator   0.9.1     service-binding-operator.v0.9.0   Succeeded
```
With odo, services are attached to components; before deploying a service, you need to create a component. You can follow this article to create your component and find example sources in the repository https://github.com/feloy/nest-odo-example.
Now, you can create an instance of a PostgreSQL database with:
```shell
$ odo service create postgresql-operator.v0.0.9/Database db1
Successfully added service to the configuration; do 'odo push' to create service on the cluster
```
This will create an instance with the default parameters. If you want to use a specific configuration, you can get the possible parameters with the command:
```shell
$ odo catalog describe service postgresql-operator.v0.0.9/Database
Kind: Database
Version: v1alpha1
Description: Describes how an application component is built and deployed.
Parameters:
  PATH        DISPLAYNAME                      DESCRIPTION
  image       PostgreSQL database image        PostgreSQL database image
  imageName   PostgreSQL database image name   PostgreSQL database image name
  dbName      DB name                          Desired database name. If
                                               not provided, default value
                                               'postgres' will be used.
```
You can now create an instance with a specific database name with the command:
```shell
$ odo service create postgresql-operator.v0.0.9/Database db1 -p dbName=mydb -p imageName=postgresql -p image=postgres
```
Finally, you can deploy the service into the Kubernetes cluster:

```shell
$ odo push
```
## Binding the service with odo
The following odo command creates the ServiceBinding resource necessary to bind the PostgreSQL service to your micro-service:
```shell
$ odo link Database/db1 --name dblink
✓ Successfully created link between component "nodejs-prj1-api-hjtd" and service "Database/db1"
$ odo push
$ kubectl get sbr dblink -o yaml
kind: ServiceBinding
metadata:
  name: dblink
[...]
spec:
  application:
    group: apps
    name: nodejs-prj1-api-hjtd-app
    resource: deployments
    version: v1
  bindAsFiles: false
  detectBindingResources: true
  services:
  - group: postgresql.baiju.dev
    kind: Database
    name: db1
    version: v1alpha1
[...]
```
You can verify that the environment variables have been defined in your micro-service:
```shell
$ odo exec -- bash -c export | grep DATABASE_
declare -x DATABASE_CLUSTERIP="10.107.65.179"
declare -x DATABASE_DBCONNECTIONIP="10.107.65.179"
declare -x DATABASE_DBCONNECTIONPORT="5432"
declare -x DATABASE_DBNAME="mydb"
declare -x DATABASE_PASSWORD="password"
declare -x DATABASE_USER="postgres"
```
The developer can now use these variables in the code of the micro-service, and does not need to declare their values when deploying the micro-service, since the injection of the environment variables is managed by the Service Binding Operator.
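For example, in a Node.js micro-service like the one used above, the injected variables could be read from process.env to assemble the database configuration. A minimal sketch, where the buildDbConfig helper is illustrative and not part of the article's code:

```javascript
// Sketch: assemble a database configuration from the environment
// variables injected by the Service Binding Operator.
// The helper name buildDbConfig is illustrative only.
function buildDbConfig(env) {
  return {
    host: env.DATABASE_DBCONNECTIONIP,
    port: Number(env.DATABASE_DBCONNECTIONPORT),
    database: env.DATABASE_DBNAME,
    user: env.DATABASE_USER,
    password: env.DATABASE_PASSWORD,
  };
}

// In the micro-service this would be called as buildDbConfig(process.env);
// here we reuse the values shown in the output above.
const config = buildDbConfig({
  DATABASE_DBCONNECTIONIP: '10.107.65.179',
  DATABASE_DBCONNECTIONPORT: '5432',
  DATABASE_DBNAME: 'mydb',
  DATABASE_USER: 'postgres',
  DATABASE_PASSWORD: 'password',
});

console.log(config);
```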