What should our microservice application architectures be like?
Let's examine together how we can use the sidecar pattern to solve cross-cutting concerns such as authorization, caching, configuration and secret management, and observability.
Cross-Cutting Concerns
Let's start by explaining cross-cutting concerns.
Some requirements live outside of the application's business code but are needed across different layers of the application.
For example: logging, configuration, caching, authorization, and observability.
Traditional Development
The traditional way is to implement and use all of this tooling inside the service we are developing. If our services are written in the same language, we can share that code between them as packages/libraries.
So what is a bad scenario?
In an environment where services are written in more than one language, these requirements have to be reimplemented in each language.
Imagine that you (or your teams) are writing the same functionality over and over in different languages...
Is there a solution in this case, other than writing the same code again and again?
The first thing that comes to mind is to write separate API services for those requirements.
So is this a correct solution?
What problems can we encounter?
How do we scale these shared services as the number of consumer services increases?
Moving this code into separate services accessed over the network adds network latency to every call, and that slows down our application.
So, what would you do if you want to avoid rewriting the same code in different languages, but also do not want to pay for the network latency?
Answer: we use the Sidecar pattern.
Sidecar Pattern
So what is this Sidecar?
We can run multiple containers inside a pod on Kubernetes. The containers running beside our main application container are called sidecars.
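For example, a pod with one application container and one sidecar container could be described like this (a minimal sketch; the names, images, and ports below are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders-service
spec:
  containers:
    # main application container holding the business logic
    - name: app
      image: example.com/orders-service:1.0.0   # hypothetical image
      ports:
        - containerPort: 8080
    # sidecar container handling a cross-cutting concern (e.g. caching)
    - name: cache-sidecar
      image: example.com/cache-sidecar:1.0.0    # hypothetical image
      ports:
        - containerPort: 9090
```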
These additional features can be developed once, in a single language, and then "injected" into our applications. We develop our sidecars in Go and inject them into our microservices.
This "inject" operation is performed by Kubernetes on the basis of a concept which name is Dynamic Admission Webhook. I will leave the source link at the end of the article for details.
When the application scales out, each new pod replica comes with the same sidecars.
In addition, all containers in a pod share the pod's network namespace, so the application talks to its sidecars over localhost and the network latency is negligible.
What else can be done with sidecars?
- Configure the pod's network
- Intercept and manipulate the pod's network requests/responses
- Build a service mesh
- Implement microservice runtime capabilities
For example, the Istio service mesh works as a sidecar: the injected istio-proxy container takes over the pod's entire network traffic and can manipulate incoming and outgoing requests/responses.
You can write various network rules, and because all traffic passes through the istio-proxy sidecar, metrics are collected for every request, which lets us visualize the calls between microservices.
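For instance, a rule like the following VirtualService (a minimal, illustrative sketch; the service name and values are assumptions) tells the istio-proxy sidecars to time out and retry calls to a service:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders                 # hypothetical in-mesh service name
  http:
    - route:
        - destination:
            host: orders
      timeout: 2s            # fail requests that take longer than 2 seconds
      retries:
        attempts: 3          # retry a failed request up to 3 times
        perTryTimeout: 500ms
```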
I will wrap up the post here. We took a quick look at the sidecar approach together. See you in the next posts.
Resources
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/
https://betterprogramming.pub/understanding-kubernetes-multi-container-pod-patterns-577f74690aee
Let's Connect
You can support me on Github: Support mstrYoda on Github