Benoit COUETIL 💫 for Zenika

☸️ Managed Kubernetes: Our Dev is on AWS, Our Prod is on OVHcloud

TL;DR: It works very well for us, with minimal initial investment and development overhead. Given the choice to start over, we would do it again.

Initial thoughts

Kubernetes is an open-source container orchestration platform used to manage and automate the deployment and scaling of containerized applications. It has gained popularity in recent years due to its ability to provide a consistent experience across different cloud providers and on-premises environments.

Operating Kubernetes may seem scary. The uncertainty of running Kubernetes on non-Big-Tech Cloud providers is scarier still. So is mixing Kubernetes providers, using both Amazon Web Services (AWS) and OVHcloud, calling for the Apocalypse to fall on our heads? Not as much as we imagined: there is surprisingly less impact on development than you would guess.

What are the services needed for the project?

Kubernetes of course, but also other managed services:

  • PostgreSQL database
  • S3 file service
  • Docker registry
  • Load balancers

Why OVHcloud?

The application we were building had to be deployed in Europe, on a sovereign Cloud.

When we made the decision, OVHcloud was, as far as we knew, the most mature European Cloud provider offering managed Kubernetes with Terraform support. It is also, as of now, the largest hosting provider in Europe.

Scaleway, which also provides managed Kubernetes using Terraform, had been tested in the early stage of the project. But the following limitations (again, at that time) were blockers for our use cases:

  • Some side effects of the Kubernetes management overlay (especially problems using Traefik)
  • No Kubernetes on private network
  • Unsatisfactory availability of the Docker registry

Any feedback on these or other European cloud providers is welcome 🤓

Why not OVHcloud all the way from dev to prod?

In the past, we had performance issues on OVHcloud managed Kubernetes. We did not want to slow down the development team with infrastructure problems. The most important thing is to have an optimal development phase, and data governance is not an issue there, since there is no client data on dev environments. The Cloud provider had to be adapted to the project's needs, not the other way around.

Why AWS for dev?

Knowing the daily cost of a development team compared to that of development infrastructure, the developer experience had to be optimal. It could also have been GCP. But my experience was deeper on AWS, and it is, as of now, my Cloud provider of choice performance-wise for a development team. We experience significant performance differences between AWS and OVHcloud. We chose not to worry about production performance yet: the production load will soon be significant, but it is quite reasonable for now. And we still have to experiment with different OVHcloud VM sizes as Kubernetes nodes.

Now that the team is happy with our setup, here are the main differences.

1. Installation: Terraform on both sides but with different implementations 🤷‍♂️

Terraform can be used on both AWS and OVHcloud to deploy the whole stack. But, as always, Terraform configuration is provider-specific. If you are interested, our Terraform manifests have been shared in blog posts, one for AWS and one for the OVHcloud equivalent.

To sum up the differences:

  • OVHcloud relies on the OpenStack Terraform provider, so you have to configure access to both providers
  • OVHcloud offers S3-compliant buckets, but uses the AWS Terraform provider for them, so you have to configure that provider as well
  • OVHcloud needs only one Docker registry for multiple images, but on the other hand you have to perform a manual action to create a "private" space in the registry
  • AWS documentation is easier to find on the internet, and widely discussed on Stack Overflow
  • AWS/EKS integrates seamlessly with the AWS/ECR Docker registry in the same project; for OVHcloud, you have to create secrets with registry credentials in the cluster (see the sketch below)
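
To make that last point concrete, here is a minimal sketch of the kind of pull secret and reference we mean; the registry URL, names, namespace and credentials are placeholders, not our actual values:

```yaml
# Hypothetical pull secret for the OVHcloud private registry (placeholder values).
apiVersion: v1
kind: Secret
metadata:
  name: ovh-registry-credentials
  namespace: my-app
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config: {"auths":{"<registry-url>":{"username":"...","password":"...","auth":"..."}}}
  .dockerconfigjson: eyJhdXRocyI6e319   # placeholder
---
# Workloads pulling private images then reference the secret.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
        - name: ovh-registry-credentials
      containers:
        - name: my-app
          image: registry.example.com/private/my-app:1.0.0   # placeholder image reference
```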

[Illustration: helm, orange and blue clouds, flying ship, steampunk]

2. Impact on cluster-wide tooling: only a few differences 🤓

We use some in-cluster tooling and there is not much of a difference between AWS and OVHcloud Kubernetes on this side.

Ingress Controller

AWS/EKS and OVHcloud fully support the NGINX Ingress Controller, including auto-provisioning of the load balancer.

We decided to have one DNS zone for dev environments, managed by AWS, and one for staging/prod environments, managed by OVHcloud, so we can easily opt out of either Cloud provider.

The differences reside in TLS certificate handling. We use a single certificate with:

  • termination on the load balancer for AWS (which does not seem to be possible with OVHcloud)
  • termination on the NGINX Ingress Controller for OVHcloud, as a fallback (see the sketch below)
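
As an illustration of that split, here is the kind of ingress-nginx Helm values involved; the certificate ARN and secret name are placeholders, and your chart version may expose slightly different keys:

```yaml
# AWS/EKS flavor (assuming the load balancer is provisioned from the controller Service):
# TLS terminates on the load balancer, which forwards plain HTTP to the controller.
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-3:123456789012:certificate/placeholder
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    targetPorts:
      https: http   # traffic is already decrypted by the load balancer
---
# OVHcloud flavor: TLS terminates on the NGINX Ingress Controller itself,
# using a certificate stored in a Kubernetes secret (placeholder namespace/name).
controller:
  extraArgs:
    default-ssl-certificate: "ingress-nginx/wildcard-tls"
```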

Metrics server

Metrics-server is installed by default on OVHcloud, and has to be installed manually on an AWS/EKS cluster.

Autoscaling

Autoscaling is already provided on OVHcloud, but we don't use it for now. The cluster autoscaler has to be manually installed on the AWS/EKS cluster (a sketch follows).
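
For the EKS side, this is a hedged sketch of the kind of Helm values used to install the cluster autoscaler; the cluster name, region and IAM role ARN are placeholders, and we assume the official kubernetes/autoscaler chart with an IAM role bound to the service account:

```yaml
# Illustrative values for the cluster-autoscaler Helm chart on EKS (placeholder values).
autoDiscovery:
  clusterName: my-dev-cluster   # EKS cluster name, used to discover node group ASGs by tag
awsRegion: eu-west-3            # region of the cluster
rbac:
  serviceAccount:
    annotations:
      # IAM role allowing the autoscaler to resize the node groups' Auto Scaling Groups
      eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/cluster-autoscaler
```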

The Elastic Stack

The Elastic Stack is our Swiss Army Knife on the cluster. The whole stack has been installed in version 8.5.1 inside the cluster. We know it is not recommended to run stateful apps in Kubernetes, but we are in the early stages of our production.

We have installed:

  • Elasticsearch: the data and search engine
  • Kibana: the all-in-one UI
  • Metricbeat: the Kubernetes metrics collector (as far as we are concerned)
  • Filebeat: the containers' log collector
  • Logstash: the data transformation tool, used for the PostgreSQL plugin allowing us to easily (and nicely) display functional KPIs

Most of them have been installed with no difference between providers, except for the ones detailed below.

Metricbeat

Metricbeat collects Kubernetes metrics. Some metrics are collected through the Kubernetes API by a deployment, and others through the kubelet by a daemonset (one pod per node).

For the daemonset's pods to communicate with their respective nodes, they use the HOSTNAME variable by default. On AWS/EKS, it works like a charm. Sadly, on OVHcloud clusters, the hostname is not resolvable to an IP from inside pods.

The workaround is to attach these daemonset pods to the host network to be able to use the HOSTNAME variable.

This works fine for an extra-cluster Elasticsearch, which is the recommended architecture. But for an intra-cluster Elasticsearch (with no ingress), we have to rely on an Elasticsearch Kubernetes service... which is not accessible from host-network pods. We then created an Elasticsearch NodePort service to complete the workaround architecture (a sketch is below).
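
A minimal sketch of this workaround, assuming an in-cluster Elasticsearch in an elastic-stack namespace; names and ports are placeholders, and the daemonset is heavily trimmed:

```yaml
# Metricbeat daemonset excerpt: join the host network so the node hostname resolves;
# ClusterFirstWithHostNet keeps cluster DNS usable from the host network.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: elastic-stack
spec:
  selector:
    matchLabels:
      app: metricbeat
  template:
    metadata:
      labels:
        app: metricbeat
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: metricbeat
          image: docker.elastic.co/beats/metricbeat:8.5.1
---
# NodePort service so host-network pods can reach the in-cluster Elasticsearch
# through the node itself (port numbers are placeholders).
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-nodeport
  namespace: elastic-stack
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
    - name: https
      port: 9200
      targetPort: 9200
      nodePort: 30920
```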

We will document this part in another article if anyone is interested.

Filebeat

Filebeat is our containers' log collector and dissector.

To have advanced capabilities, Filebeat has to be used in autodiscover mode, with either the docker or the kubernetes provider. Docker was an option before, but it is no longer the default container runtime.

So only the kubernetes provider remains. It relies on the Kubernetes API to identify pods. The default configuration works well on AWS.

Since OVHcloud provides less Kubernetes API power (I suspect that OVHcloud has a fixed number of master nodes while AWS auto-scales them), the default configuration tends to flood it, resulting in very slow administration, and sometimes even complete downtime.

The workaround was to change some configuration options to lower the pressure (a sketch of the kind of options involved is below). Now it is working like a charm 😊.
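
These are not necessarily the exact values we use (that will be for the follow-up article), but a hedged sketch of the kind of knobs the kubernetes autodiscover provider exposes to reduce API pressure:

```yaml
# Excerpt of a filebeat.yml tuned to put less pressure on the Kubernetes API server.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}   # watch only the pods running on the local node
      scope: node
      hints.enabled: true
      # Throttle the Kubernetes client used by Filebeat
      kube_client_options:
        qps: 2
        burst: 4
      # Skip metadata enrichments that trigger extra API lookups
      add_resource_metadata:
        deployment: false
        cronjob: false
```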

We will document this part in another article if anyone is interested.

Other tools: no noticeable difference

We installed the rest of our tooling the exact same way on both providers.

[Illustration: helm, orange and blue clouds, flying ship, steampunk]

3. Impact on application manifests: nearly nothing ✌️

Not much to say, this is the beauty of Kubernetes: manifests work the same way. We have dev/prod differences that are not related to Cloud providers, so we won't detail them here.

We still have to handle the certificates in the application manifests on OVHcloud, since TLS termination is on the NGINX Ingress Controller (see the sketch below).
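
For illustration, this is the kind of difference we mean in an application Ingress; the host, namespace and secret names are placeholders:

```yaml
# Hypothetical Ingress on the OVHcloud side: the tls block points NGINX at the
# certificate secret. On AWS the same Ingress needs no tls block, since the
# load balancer already terminates TLS.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-app
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - my-app.example.com
      secretName: my-app-tls   # placeholder secret holding the certificate and key
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```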

4. Impact on development: nearly nothing ✌️

On the development side, the team experienced some differences using S3 on AWS and on OVHcloud. This was expected: targeting OVHcloud is not the default setup. But once the documentation was applied and the OVHcloud tests performed, usage is the same. We decided to encrypt files using a cloud-agnostic library, so we are not tied to a specific Cloud feature.

5. Cloud experience considerations

OVHcloud is still under development to match Big Tech services. Compared to AWS, our experience is degraded on these aspects:

  • Master nodes performance
  • Worker nodes performance
  • Official technical support
  • Non-official technical support (problem solving using internet search)

But this is expected when comparing the investments in these products, and it is a price we are willing to pay, as long as it matches our constraints and our desire to move to alternative Cloud providers.

Wrapping up

In conclusion, managed Kubernetes services offer a streamlined way of deploying and managing Kubernetes clusters. By abstracting away the complexities of Kubernetes, these services enable developers to focus on their applications rather than infrastructure management. With the ability to deploy the same application to multiple cloud providers using a declarative approach, teams can achieve greater consistency and efficiency in their deployment processes. So whether you're using Big Tech or not, OVHcloud can be your Cloud provider of choice if it fits your constraints.

Already have clusters? You can optimize your cluster costs by having a look at the article FinOps EKS: 10 tips to reduce the bill up to 90% on AWS managed Kubernetes clusters. It is designed for AWS/EKS clusters, but most points are generic and therefore apply to OVHcloud too.

Do you have any experience with multi-cloud Kubernetes? Please share your insights, especially if you would have used other providers for the same requirements!

[Illustration: helm, orange and blue clouds, flying ship, steampunk]

Illustrations generated locally by DiffusionBee using ToonYou model

Top comments (2)

David M

It's been OVHcloud and not OVH for a few years now ;-)

Benoit COUETIL 💫

Thanks, fixed everywhere, the magic of markdown blogging 😉