
Antonio Lagrotteria for AWS Community Builders

Originally published at levelup.gitconnected.com

A serverless CI/CD approach for mono-repo micro-frontends

This article presents a CI/CD pipeline approach for a GitHub mono-repo-based micro-frontend architecture in AWS, leveraging a series of AWS serverless services such as AWS CodePipeline, CodeBuild and CodeDeploy.

Micro-frontends increase the complexity of managing infrastructure, which makes it crucial for organizations to invest their time in continuous integration (CI) and continuous deployment (CD) pipelines and automation tooling that scale along with the organization.

A CI/CD pipeline for building mono-repo micro-frontends

The proposed approach gives organizations a scalable way to grow their micro-frontend ecosystem, keep teams autonomous, and let them focus on business value and early feedback through a fast release cycle.

Some context: Mono and Poly repos

Before diving into the architecture, let's mention the two main approaches for structuring a micro-frontend module.

  • Mono-repos: all teams work in a single repository

  • Poly (multi-)repos: each domain-specific micro-frontend lives in its own repository and is owned by a single team.

This PoC will focus on a hands-on, detailed and pragmatic CI/CD setup based on a mono-repo, with the main branch as the source of pipeline changes. For a great overview and detailed comparison, I recommend the upcoming book from Luca Mezzalira.

Architecture

The architecture is based on a scenario where a company implements micro-frontends in a GitHub mono-repo and wishes to implement a serverless CI/CD pipeline in AWS.

For this PoC, the mono-repo contains two trivial Angular micro-frontends, mfe-accounts and mfe-payments, though this model allows you to write each module independently with any framework of choice:

Mono-repo structure
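In text form, the layout looks roughly like this (a sketch: the folder names match the pipelines created later, while the exact contents are illustrative):

```
microfrontends-pipeline/
├── mfe-accounts/        # Angular micro-frontend, built by the mfe-accounts pipeline
│   ├── buildspec.yml    # per-micro-frontend build instructions (see Build Stage)
│   └── src/
└── mfe-payments/
    ├── buildspec.yml
    └── src/
```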

The scenario involves the following steps:

  • Developers push their code to the main branch in GitHub, which triggers a push event via GitHub webhooks to an AWS API Gateway responsible for handling the event.

  • The API Gateway triggers an AWS Lambda function which authenticates the request, analyses the event and, based on the affected files, triggers a pipeline for each micro-frontend those files belong to.

  • One or more pipelines run build, test and deploy actions via AWS CodePipeline, CodeBuild and CodeDeploy.

  • Changes become available in an S3 bucket and are exposed through a CloudFront distribution.

Let's deep-dive into the setup.

API Gateway setup

GitHub lets you integrate with its events, such as repository pushes, via webhooks, which POST a GitHub event payload to an endpoint. Let's expose a RESTful API in AWS via API Gateway containing a single POST endpoint, as shown below:

Create API Gateway

In the screen above, we use a Lambda proxy integration because our associated "Hello World" Lambda will need to access the API Gateway request headers in order to authenticate incoming GitHub requests; a minimal sketch of such a handler is shown below.
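This sketch assumes a TypeScript handler on a Node.js runtime (the original "Hello World" code may differ) and shows that the proxy integration passes the full request, headers included, to the Lambda:

```typescript
// Sketch of the initial "Hello World" handler (assumed TypeScript/Node.js):
// with a proxy integration, API Gateway passes the whole request through,
// so the handler can read the headers GitHub will later sign.
export const handler = async (event: any) => {
  // This signature header will be needed later to authenticate GitHub requests.
  console.log("Signature header:", event.headers?.["X-Hub-Signature-256"]);
  return { statusCode: 200, body: "Hello World" };
};
```

With the API in place, let's create the webhook.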

GitHub Webhook setup

Once the API Gateway has been deployed to a stage, it is time to create a GitHub webhook (refer to this intuitive guide). Important: provide a secret, so that our Lambda accepts only requests coming from the webhook (more on this later).

Create a GitHub webhook

The integration is ready! Pushing a file to the repository will result in a "Hello World" message triggered by the Lambda, which can be seen in its CloudWatch log stream. Let's now update the Lambda code to support CI/CD for any micro-frontend in the repo.

Micro-frontend strategy in the Lambda setup

The ultimate goal of the Lambda function is, given a GitHub push event containing repo commits, to trigger a pipeline for each affected micro-frontend. Let's look at its logic in detail:

  • First, the Lambda authenticates requests coming from the GitHub webhook by validating the X-Hub-Signature-256 request header via an HMAC-SHA256 check, using the crypto module. This check is based on the secret defined in the GitHub webhook earlier, which is also stored in AWS Secrets Manager (follow this tutorial). The Lambda accesses the secret securely via the IAM action secretsmanager:GetSecretValue.

  • Once the request is validated, the payload is used to infer which micro-frontends were affected, by extracting their names from the added, modified and removed files in the commits list.

  • If any micro-frontend has changed, we trigger a new execution in AWS CodePipeline, which will build and deploy the micro-frontend. For simplicity, each pipeline is named after the micro-frontend it builds. The AWS SDK call requires an IAM role allowing the codepipeline:StartPipelineExecution action on the pipeline resource. A sketch of the handler follows this list.
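The following is a minimal sketch of such a handler, assuming TypeScript with AWS SDK v3; the secret id github/webhook, the mfe- folder naming convention and the header handling are illustrative assumptions, not the exact original code:

```typescript
import * as crypto from "crypto";
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";
import {
  CodePipelineClient,
  StartPipelineExecutionCommand,
} from "@aws-sdk/client-codepipeline";

const secrets = new SecretsManagerClient({});
const codepipeline = new CodePipelineClient({});

export const handler = async (event: any) => {
  // 1. Authenticate: recompute the HMAC-SHA256 of the raw body with the shared
  // secret and compare it with GitHub's X-Hub-Signature-256 header.
  const { SecretString } = await secrets.send(
    new GetSecretValueCommand({ SecretId: "github/webhook" }) // assumed secret id
  );
  const signature =
    event.headers["X-Hub-Signature-256"] ?? event.headers["x-hub-signature-256"];
  const expected =
    "sha256=" +
    crypto.createHmac("sha256", SecretString!).update(event.body).digest("hex");
  if (
    !signature ||
    signature.length !== expected.length ||
    !crypto.timingSafeEqual(Buffer.from(signature), Buffer.from(expected))
  ) {
    return { statusCode: 401, body: "Invalid signature" };
  }

  // 2. Infer the affected micro-frontends: the top-level folder of each
  // added/modified/removed file (e.g. mfe-accounts) identifies the module.
  const push = JSON.parse(event.body);
  const touched = new Set<string>();
  for (const commit of push.commits ?? []) {
    for (const file of [...commit.added, ...commit.modified, ...commit.removed]) {
      if (file.startsWith("mfe-")) touched.add(file.split("/")[0]);
    }
  }

  // 3. Start one pipeline per affected micro-frontend; by convention each
  // pipeline shares its micro-frontend's name.
  for (const mfe of touched) {
    await codepipeline.send(new StartPipelineExecutionCommand({ name: mfe }));
  }
  return { statusCode: 200, body: `Triggered: ${[...touched].join(", ")}` };
};
```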

The full Lambda code can be seen here. Now it is time to create the pipeline itself, which is the subject of the next section.

CodePipeline setup

Creating a pipeline in AWS CodePipeline involves many steps and concepts, so I will try to keep it simple. CodePipeline helps automate release pipelines for fast and reliable application and infrastructure updates. Each step, here called a stage, performs actions on the involved build artifacts. I will look into how to:

  • Create the pipeline by choosing some settings

  • Add a Source stage, answering the question "where does the code to build come from?"

  • Add a Build stage: “how do I build the source code you just provided?”

  • Add a Deploy stage: “how and where do I deploy the build artifact you just provided?”

The main idea is to isolate each CodePipeline and CodeBuild project, giving each team the flexibility and ownership to manage that process themselves. Optimizations such as reusing pipelines across similar projects and CloudFormation templates are out of scope.

Create pipeline by choosing settings

First, create a pipeline by providing a name matching the micro-frontend it will build and keeping the default settings, as shown below.

First step of creating a pipeline.

Source Stage

This stage links the source code to be processed with CodePipeline. We connect CodePipeline with our GitHub repo by clicking the Connect button and following a wizard.

Connect Pipeline with GitHub

At the end of the wizard, as shown below, you will be able to:

  • access your GitHub repo (aladevlearning/microfrontends-pipeline in my case)

  • select the main branch

  • unselect Start pipeline on source code changes, as we want the Lambda function to handle triggering instead

  • click Next to proceed to the next stage.

The steps above can be seen in the gif below.

Add source stage in CodePipeline

Build Stage

This stage is responsible for building the source received from the previous stage. It creates or reuses an existing CodeBuild project, which instructs the pipeline on:

  • how to run the build, via a buildspec.yml file

  • where CodeBuild will actually run the build

Regarding buildspec.yml, we can decide between a common file for all micro-frontends or one for each of them. This largely depends on whether all micro-frontends adhere to the same framework and the same build/test steps. Keeping them separate gives each team independence over how to build, at the cost of a slightly more complex overview of the build process and its governance. For our micro-frontend, the file looks like the sketch below:
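This is a minimal sketch assuming an Angular project in the mfe-accounts folder; the runtime version and output paths are illustrative, not the original gist:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 14          # assumed runtime for the Angular build
  pre_build:
    commands:
      - cd mfe-accounts && npm ci
  build:
    commands:
      - cd mfe-accounts && npm run build

artifacts:
  # CodePipeline zips this directory into the output artifact; the Deploy
  # stage later extracts it into the target S3 bucket.
  files:
    - '**/*'
  base-directory: mfe-accounts/dist/mfe-accounts   # assumed Angular output path
```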


A buildspec.yml consists of intuitive phases for installing and prepping the environment, building the code, and describing how to structure the artifacts. After getting the micro-frontend name from the pipeline initiator, the file installs dependencies and zips the final artifact, which will be deployed to the S3 bucket specified in the Deploy stage. See this guide for more details.

Finally, the build process requires an environment (i.e. a machine) to run on.

Create a Code Build project

The gif above shows how the CodeBuild configuration is based on a build environment, which represents a combination of operating system, programming language runtime, and tools used to run a build. We also selected a specific buildspec.yml file location, as each micro-frontend could differ in terms of build process and pipeline (e.g. one could be an Angular project and another a React one, or both could use the same framework but be built with different steps).

Once the CodeBuild project is set up, we set the build provider to CodeBuild, select the newly created project (mfe-accounts-build) and continue to the final stage.

Create Build stage

With the build project created, let's move to the final stage of the pipeline.

Deploy Stage

Finally, we want to deploy our artifact to S3. In order to do that, we need to create the S3 buckets where each artifact will be independently deployed. As CodeBuild zips the built artifact, we check the Extract file before deploy setting and set the Canned ACL to public-read, given we want the deployed artifacts to be readable from S3.

Create Deploy stage

That's great! Your code is now deployed to S3, where it can be associated with a CloudFront distribution for content delivery. The CodePipeline steps above should be repeated for each micro-frontend. This level of redundancy allows each team to stay independent and autonomous and tweak their CI/CD to their own needs.

Result

Upon a push to the repository, one or more pipelines will start for the micro-frontends containing the changes.

Pipelines coming alive

Successful pipelines will look like this:

Successful pipeline

The deployed artifact is located under the specified S3 bucket:

S3 bucket for deployed micro-frontend

Associated with a CloudFront distribution, our deployed micro-frontend looks like this:

Full code can be found here.

Summary and ideas

This article went in depth to provide a 10-minute setup for a seamless CI/CD pipeline for a mono-repo-based micro-frontend architecture. This should be seen as a workable, though initial, approach which can be extended in many ways, proving yet again how great and creative building things in AWS can be. Some ideas:

  • Have different pipelines to cover different needs, such as different frameworks, test suites, integration and functional testing, multi-stage environments, etc.

  • Publish artifacts to different AWS accounts, one for test and one for production, to keep isolation and security in place.

  • Extend the approach to feature-branch CI/CD, where you could create branch deployments for early prototyping and feedback, without blocking the main branch.

  • Turn the above into a CloudFormation template. This is a must to elevate this approach and repeat it consistently for any micro-frontend.

  • Add a CloudFront invalidation step. If you add CloudFront to the pipeline, a cache invalidation step via a Lambda function may be necessary to make sure the latest changes are correctly propagated to consumers (or maybe AWS will take this as feedback and expose it natively, as this seems a commonly used pattern). A sketch of such a function follows this list.
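Below is a minimal sketch of such an invalidation function, assuming TypeScript with AWS SDK v3 and a DISTRIBUTION_ID environment variable (both illustrative assumptions):

```typescript
import {
  CloudFrontClient,
  CreateInvalidationCommand,
} from "@aws-sdk/client-cloudfront";

const cloudfront = new CloudFrontClient({});

export const handler = async () => {
  // Invalidate every path so consumers are served the freshly deployed assets.
  await cloudfront.send(
    new CreateInvalidationCommand({
      DistributionId: process.env.DISTRIBUTION_ID!, // assumed env variable
      InvalidationBatch: {
        CallerReference: Date.now().toString(), // must be unique per request
        Paths: { Quantity: 1, Items: ["/*"] },
      },
    })
  );
};
```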

References

  • Complete CI/CD with AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline | Amazon Web…

  • Building Micro-Frontends: the book
