Level 300
In the previous post, the foundations for deploying ephemeral test environments were laid, following the steps and setup around AWS accounts and Azure DevOps. Today the process continues with the construction of a CDK construct for the microservices deployment and its respective pipeline.
The following image depicts the separation of layers and responsibilities for this scenario.
Key points:
Here the platform team creates and deploys the core stack and exposes values for the other layers and services through CloudFormation outputs.
The CI/CD and ALM platforms are supported by Azure DevOps.
The development team creates and deploys the services based on the constructs published in a private repository.
The infrastructure definition and the application are both written in TypeScript. This simplifies the management of pipelines and vulnerabilities and keeps the full setup in a single language.
Hands on
Requirements
• CDK >= 2.94.0
• AWS CLI >= 2.7.0
• checkov >= 2.1.229
• node >= v18.17.1
• Visual Studio Code
• Docker >= 24.0.6
AWS Services
• Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.
• AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). Select any OCI-compliant container image, define memory and compute resources, and run the container with serverless compute.
• Amazon Virtual Private Cloud (Amazon VPC) gives you full control over your virtual networking environment, including resource placement, connectivity, and security.
• Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.
• Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets and virtual appliances in one or more Availability Zones (AZs).
• AWS CloudFormation is a service that helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.
Step by Step
The first step is creating the CDK construct using projen (see Documentation | projen). You can find the construct here:
velez94 / cdkv2_ephemeral_environment_services_construct
Construct for ephemeral environment series.
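For reference, a construct library like this is typically scaffolded with npx projen new awscdk-construct. A minimal .projenrc.ts along the following lines would produce a comparable project; the package name is taken from the import used later in the demo stack, while the author, repository URL, and registry URL are assumptions:
import { awscdk } from 'projen';

const project = new awscdk.AwsCdkConstructLibrary({
  name: '@labvel/ecs-fargate-services', // scope/name imported by the demo stack
  author: 'velez94', // assumption
  authorAddress: 'author@example.com', // assumption
  repositoryUrl: 'https://example.com/cdkv2_ephemeral_environment_services_construct.git', // assumption
  cdkVersion: '2.94.0', // matches the requirements above
  defaultReleaseBranch: 'main',
  projenrcTs: true,
  // Publish to a private feed such as Azure Artifacts; the URL is an assumption,
  // see the .npmrc example later in the post.
  npmRegistryUrl: 'https://pkgs.dev.azure.com/<org>/_packaging/<feed>/npm/registry/',
});

project.synth();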
CDK Construct for a custom ECS Fargate Service
This construct deploys an ECS service, creates a listener, exposes the service, and imports the shared resources based on CloudFormation outputs.
The file src/custom_service.ts:
import { StackProps, Fn, Duration } from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as serviceDiscovery from 'aws-cdk-lib/aws-servicediscovery';
import { Construct } from 'constructs';
import { ImportResources } from './import_services';

export interface EcsCustomServiceProps extends StackProps {
  readonly listenerPort: number;
  readonly containerPort: number;
  readonly public: boolean;
  readonly parentEnv: string;
  readonly instanceInputs: { [index: string]: any };
  readonly environmentOutputs: { [index: string]: any };
}

export class Cdkv2EphemeralEnvironmentServices extends Construct {
  /**
   * @attribute
   */
  public readonly importedResources?: ImportResources;

  constructor(scope: Construct, id: string, props: EcsCustomServiceProps) {
    super(scope, id);

    const taskSize = setTaskSize();

    function setTaskSize() {
      if (props.instanceInputs.inputs.task_size === 'x-small') {
        return {
          cpu: 256,
          memory: 512,
        };
      } else if (props.instanceInputs.inputs.task_size === 'small') {
        return {
          cpu: 512,
          memory: 1024,
        };
      } else if (props.instanceInputs.inputs.task_size === 'medium') {
        return {
          cpu: 1024,
          memory: 2048,
        };
      } else if (props.instanceInputs.inputs.task_size === 'large') {
        return {
          cpu: 2048,
          memory: 4096,
        };
      } else if (props.instanceInputs.inputs.task_size === 'x-large') {
        return {
          cpu: 4096,
          memory: 8192,
        };
      } else {
        return {
          cpu: 256,
          memory: 512,
        };
      }
    }

    if (props.environmentOutputs.outputs.VPCId != 'Null') {
      this.importedResources = new ImportResources(this, 'ImportResources',
        {
          vpcId: props.environmentOutputs.outputs.VPCId,
          parentEnv: props.parentEnv,
          environmentOutputs: props.environmentOutputs,
        },
      );
    }

    // Service Specifications
    const envVarMap: { [index: string]: string } = {};

    const nameSpace = serviceDiscovery.PrivateDnsNamespace.fromPrivateDnsNamespaceAttributes(this, 'NameSpace',
      {
        namespaceArn: Fn.importValue(`CloudMapNamespaceArn-${props.parentEnv}`),
        namespaceId: Fn.importValue(`CloudMapNamespaceId-${props.parentEnv}`),
        namespaceName: Fn.importValue(`CloudMapNamespaceName-${props.parentEnv}`),
      });

    const loadBalancedEcsService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'Service', {
      cluster: this.importedResources?.importedCluster,
      assignPublicIp: true,
      loadBalancer: this.importedResources?.importedAlb,
      memoryLimitMiB: taskSize.memory,
      serviceName: props.instanceInputs.inputs.name,
      listenerPort: props.instanceInputs.inputs.alb_port,
      idleTimeout: Duration.seconds(60),
      cpu: taskSize.cpu,
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry(props.instanceInputs.inputs.image),
        enableLogging: true,
        containerPort: props.instanceInputs.inputs.port,
        //logDriver: ecs.LogDriver.awsLogs(),
        environment: envVarMap,
      },
      desiredCount: props.instanceInputs.inputs.desired_count,
    },
    );

    const health_path = props.instanceInputs.inputs.health_path ?? '/';
    loadBalancedEcsService.targetGroup.configureHealthCheck({ path: health_path });

    loadBalancedEcsService.taskDefinition.executionRole?.addManagedPolicy(
      iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AmazonECSTaskExecutionRolePolicy'),
    );

    const service = loadBalancedEcsService.service;
    service.enableCloudMap({ name: props.instanceInputs.inputs.service_discovery_name, cloudMapNamespace: nameSpace });
  }
}
And the import helper, src/import_services.ts:
import { StackProps, Fn } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ssm from 'aws-cdk-lib/aws-ssm';
import * as alb from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import { Construct } from 'constructs';

export interface ImportServiceProps extends StackProps {
  readonly vpcId: string;
  readonly parentEnv: string;
  readonly environmentOutputs: { [index: string]: any };
}

export class ImportResources extends Construct {
  /**
   * @attribute
   */
  readonly importedCluster: ecs.ICluster;
  /**
   * @attribute
   */
  readonly importedAlb: alb.IApplicationLoadBalancer;

  constructor(scope: Construct, id: string, props: ImportServiceProps) {
    super(scope, id);

    // import VPC
    const vpcId = ssm.StringParameter.valueFromLookup(this, `${props.parentEnv}/VPCID`);
    const importedVpc = ec2.Vpc.fromLookup(this, 'VPCImport', {
      vpcId: vpcId,
    });

    // Import Cluster
    this.importedCluster = ecs.Cluster.fromClusterAttributes(
      this,
      'ClusterImport',
      {
        clusterName: Fn.importValue(`ECSClusterName-${props.parentEnv}`), //environmentOutputs.outputs.ECSClusterName, //
        vpc: importedVpc,
        securityGroups: JSON.parse(
          props.environmentOutputs.outputs.ECSClusterSecGrps,
        ),
      },
    );

    // Import Load Balancer
    this.importedAlb = alb.ApplicationLoadBalancer.fromApplicationLoadBalancerAttributes(this, 'ALB', {
      vpc: importedVpc,
      loadBalancerDnsName: Fn.importValue(`LBDNSName-${props.parentEnv}`),
      loadBalancerArn: Fn.importValue(`ARNALB-${props.parentEnv}`),
      securityGroupId: Fn.importValue(`SGALB-${props.parentEnv}`),
    },
    );
  }
}
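Both files resolve the shared platform resources through one SSM parameter and a set of CloudFormation exports. For reference, here is a hedged sketch of what the core (platform) stack is expected to publish; the export and parameter names come from the Fn.importValue and valueFromLookup calls above, while the stack wiring and variable names are assumptions:
import { CfnOutput, Stack } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import * as servicediscovery from 'aws-cdk-lib/aws-servicediscovery';
import * as ssm from 'aws-cdk-lib/aws-ssm';

// Assumed to live inside the core stack; the actual resources are created there.
declare const stack: Stack;
declare const parentEnv: string;
declare const vpc: ec2.Vpc;
declare const cluster: ecs.Cluster;
declare const alb: elbv2.ApplicationLoadBalancer;
declare const albSecurityGroup: ec2.SecurityGroup;
declare const namespace: servicediscovery.PrivateDnsNamespace;

// SSM parameter read by ImportResources via valueFromLookup
new ssm.StringParameter(stack, 'VpcIdParam', {
  parameterName: `${parentEnv}/VPCID`,
  stringValue: vpc.vpcId,
});

// CloudFormation exports read via Fn.importValue in the construct
new CfnOutput(stack, 'ClusterNameOut', { value: cluster.clusterName, exportName: `ECSClusterName-${parentEnv}` });
new CfnOutput(stack, 'AlbDnsOut', { value: alb.loadBalancerDnsName, exportName: `LBDNSName-${parentEnv}` });
new CfnOutput(stack, 'AlbArnOut', { value: alb.loadBalancerArn, exportName: `ARNALB-${parentEnv}` });
new CfnOutput(stack, 'AlbSgOut', { value: albSecurityGroup.securityGroupId, exportName: `SGALB-${parentEnv}` });
new CfnOutput(stack, 'NamespaceArnOut', { value: namespace.namespaceArn, exportName: `CloudMapNamespaceArn-${parentEnv}` });
new CfnOutput(stack, 'NamespaceIdOut', { value: namespace.namespaceId, exportName: `CloudMapNamespaceId-${parentEnv}` });
new CfnOutput(stack, 'NamespaceNameOut', { value: namespace.namespaceName, exportName: `CloudMapNamespaceName-${parentEnv}` });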
Now the construct can be published to a private registry, for example Azure Artifacts or AWS CodeArtifact.
For example, for Azure Artifacts the .npmrc file:
@labvel:registry=https://pkgs.dev.azure.com/DevSecOpsMyOrg/_packaging/PlarformEngieering/npm/registry/
registry=https://registry.npmjs.com
always-auth=true
Finally, a project is created to reuse the construct. Assuming a simple Node.js app, the project looks like this:
tree -L 2
.
├── app
│ ├── Dockerfile
│ ├── app.js
│ ├── node_modules
│ ├── package-lock.json
│ ├── package.json
│ └── views
├── azure-pipelines.yml
└── infra
├── README.md
├── bin
├── cdk.context.json
├── cdk.json
├── cdk.out
├── diagram.dot
├── diagram.png
├── environment-properties.json
├── jest.config.js
├── lib
├── node_modules
├── package-lock.json
├── package.json
├── test
└── tsconfig.json
9 directories, 15 files
velez94 / demo_app_cdkv2_ephemeral_environment
Demo application repository for ephemeral environments blog series.
The project has two main folders: app for defining the application code, and another for the infrastructure definitions called infra.
The lib/cdkv2_ephemeral_environment_demo-stack.ts file imports the construct and defines the service instances according to the environment properties in environment-properties.json:
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import input from "../environment-properties.json";
import { Cdkv2EphemeralEnvironmentServices } from '@labvel/ecs-fargate-services';

export interface EphemeralEnvironmentProps extends StackProps {
  onDemandTestEnv: boolean;
  version: string;
}

export class Cdkv2EphemeralEnvironmentDemoStack extends Stack {
  constructor(scope: Construct, id: string, props?: EphemeralEnvironmentProps) {
    super(scope, id, props);

    // The code that defines your stack goes here
    for (let s of input.service_instances) {
      s.inputs.image = `${s.inputs.image}:${props?.version}`;
      console.log(s.inputs.image);
      new Cdkv2EphemeralEnvironmentServices(this, `CustomService-${s.name}`, {
        listenerPort: s.inputs.alb_port,
        containerPort: s.inputs.port,
        public: true,
        parentEnv: `${input.environment.env}-${input.environment.outputs.ParentEnv}`, //"parentEnv",
        instanceInputs: { inputs: s.inputs },
        environmentOutputs: { "outputs": input.environment.outputs },
      });
    }

    // Create additional services for testing
    if (props?.onDemandTestEnv) {
      for (let s of input.additonal_test_service_instances) {
        new Cdkv2EphemeralEnvironmentServices(this, `TestService-${s.name}`, {
          listenerPort: s.inputs.alb_port,
          containerPort: s.inputs.port,
          public: true,
          parentEnv: `${input.environment.env}-${input.environment.outputs.ParentEnv}`, //"parentEnv",
          instanceInputs: { inputs: s.inputs },
          environmentOutputs: { "outputs": input.environment.outputs },
        });
      }
    }
  }
}
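The repository snippet above does not show how version and onDemandTestEnv reach the stack. Below is a hedged sketch of a bin/ entrypoint that reads them from variables set by the pipeline; the variable names are assumptions, and the explicit env block is needed because Vpc.fromLookup and valueFromLookup in the construct require a concrete account and region:
#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import { Cdkv2EphemeralEnvironmentDemoStack } from '../lib/cdkv2_ephemeral_environment_demo-stack';

const app = new cdk.App();

new Cdkv2EphemeralEnvironmentDemoStack(app, 'Cdkv2EphemeralEnvironmentDemoStack', {
  // Image tag produced by the application pipeline (ProduceVar task below); the name is an assumption
  version: process.env.VERSION ?? 'latest',
  // Deploy the additional test services only when requested by the pipeline
  onDemandTestEnv: process.env.ON_DEMAND_TEST_ENV === 'true',
  // Required for Vpc.fromLookup and valueFromLookup
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
});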
The environment-properties.json file:
{
  "environment": {
    "name": "cdk-ecs-cluster-env",
    "env": "dev",
    "outputs": {
      "ParentEnv": "cdk-ecs-env-demo",
      "ECSClusterSecGrps": "[]",
      "ECSClusterSDNamespace": "ecs-cdk-demo.dev"
    }
  },
  "service": {
    "name": "cdk-svc-demo"
  },
  "service_instances": [
    {
      "name": "sharkapp",
      "inputs": {
        "alb_port": 8080,
        "port": 80,
        "health_path": "/",
        "desired_count": 1,
        "task_size": "x-small",
        "image": "155794986228.dkr.ecr.us-east-2.amazonaws.com/complimentgeneralapi",
        "load_balanced": true,
        "load_balanced_public": true,
        "service_discovery_name": "sharkapp"
      }
    }
  ],
  "additonal_test_service_instances": [
    {
      "name": "backend",
      "inputs": {
        "alb_port": 80,
        "port": 80,
        "desired_count": 1,
        "task_size": "x-small",
        "image": "public.ecr.aws/nginx/nginx:mainline-alpine",
        "load_balanced": true,
        "load_balanced_public": true,
        "service_discovery_name": "backend"
      }
    },
    {
      "name": "frontend",
      "inputs": {
        "alb_port": 3000,
        "port": 3000,
        "desired_count": 1,
        "task_size": "x-small",
        "image": "public.ecr.aws/aws-containers/ecsdemo-frontend:776fd50",
        "load_balanced": true,
        "load_balanced_public": true,
        "service_discovery_name": "frontend"
      }
    }
  ]
}
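Since the stack imports this file directly (typically via resolveJsonModule in tsconfig.json), a type declaration can add compile-time checking. A hedged sketch inferred from the JSON above and the construct props; it is not part of the demo repository:
// environment-properties.d.ts (sketch)
export interface ServiceInstance {
  name: string;
  inputs: {
    alb_port: number;
    port: number;
    health_path?: string;
    desired_count: number;
    task_size: 'x-small' | 'small' | 'medium' | 'large' | 'x-large';
    image: string;
    load_balanced: boolean;
    load_balanced_public: boolean;
    service_discovery_name: string;
  };
}

export interface EnvironmentProperties {
  environment: {
    name: string;
    env: string;
    outputs: { ParentEnv: string; ECSClusterSecGrps: string; [key: string]: string };
  };
  service: { name: string };
  service_instances: ServiceInstance[];
  additonal_test_service_instances: ServiceInstance[];
}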
The service_instances block contains the service instances for deployment; this service's image is published in the central repository account. The additonal_test_service_instances block uses public images to simulate the additional services needed to test the feature while keeping the release environment isolated. The value for the image does not contain the tag, because the version is passed as an environment variable to the infra pipeline after the application pipeline runs. This works, but it is also an antipattern and is best avoided: a better option is to use Parameter Store as the bridge between the application and infrastructure layers, as sketched below.
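A hedged sketch of that Parameter Store bridge: the application pipeline writes the released tag to a parameter after pushing the image, and the stack reads it instead of receiving it as a pipeline variable. The parameter name and wiring are assumptions, not part of the demo repositories:
import { Stack } from 'aws-cdk-lib';
import * as ssm from 'aws-cdk-lib/aws-ssm';

// Inside the demo stack (sketch). The application pipeline would publish the tag, e.g.:
//   aws ssm put-parameter --name /ephemeral-demo/sharkapp/image-tag --value "$TAG" --overwrite
declare const stack: Stack;

const imageTag = ssm.StringParameter.valueForStringParameter(stack, '/ephemeral-demo/sharkapp/image-tag');
// ...and the service_instances loop would build `${s.inputs.image}:${imageTag}`
// instead of appending props.version.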
Additionally, the azure-pipelines.yml file:
parameters:
  - name: ServiceConnection
    type: string
    default: EphemeralEnvDeployment
  - name: RegionName
    type: string
    default: us-east-2
  - name: OnDemandTestEnv
    type: boolean
    default: false
  - name: Project
    type: string
    default: ephemeral_env_dev

trigger:
  branches:
    include:
      - main
      - releases/*
      - feature/*
    exclude:
      - releases/old*
      - feature/*-working

resources:
  repositories:
    - repository: cdk_pipelines
      type: git
      name: DevSecOps/cdk_pipelines

variables:
  - group: 'ephemeral_envrionments_demo'

stages:
  - stage: Build
    displayName: Build image
    jobs:
      - job: Build
        displayName: Build
        pool:
          name: DevSecOpsEphemeralEnvironments
        steps:
          - task: AWSShellScript@1
            name: AWSECRPublish
            inputs:
              awsCredentials: ${{ parameters.ServiceConnection }}
              regionName: ${{ parameters.RegionName }}
              disableAutoCwd: true
              workingDirectory: 'app'
              scriptType: 'inline'
              inlineScript: |
                echo Logging in to Amazon ECR... $AWS_ACCOUNT_ID $AWS_DEFAULT_REGION
                pwd
                ls -all
                export TAG=$( git tag | tail -1)
                echo $TAG
                docker build -t complimentgeneralapi:$TAG .
                docker images
                echo complimentgeneralapi:$TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/complimentgeneralapi:$TAG
                docker tag complimentgeneralapi:$TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/complimentgeneralapi:$TAG
                docker push $AWS_ACCOUNT_ID.dkr.ecr.us-east-2.amazonaws.com/complimentgeneralapi:$TAG
                # aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com &
            env:
              AWS_ACCOUNT_ID: $(shared_account)
              AWS_DEFAULT_REGION: $(shared_account_region)
          - task: Bash@3
            name: ProduceVar
            inputs:
              targetType: 'inline'
              script: |
                TAG=$( git tag | tail -1)
                echo "##vso[task.setvariable variable=VERSION;isOutput=true;]$TAG"
                echo $TAG > version.txt
                echo "The variable VERSION is: " $TAG
          # Publish Package
          - task: PublishPipelineArtifact@1
            displayName: PublishArtifact
            inputs:
              targetPath: '$(Build.SourcesDirectory)'
              artifactName: '${{ parameters.Project }}-Build'
  - template: templates/ci_cd.yaml@cdk_pipelines # Template reference
    parameters:
      Project: ephemeral_env_dev
      Language: typescript
      Action: deploy
      ServiceConnection: EphemeralEnvDeployment
      PrivateRepository: true
      AzureCustomFeed: a3dfeb18-d0fc-4033-99ae-50b8dxxxxxxb
      CDKPath: infra
      OnDemandTestEnv: true
      Environment: dev
      Checkout: none
      ECSDeployment: yes
You must enable the pipeline trigger based on pull requests in Azure DevOps.
The branches for this demo are main, releases/*, and feature/*.
Here you can see a main block, for demo purposes, that builds and publishes the image based on the git tag for the release. This task runs on a custom agent that already has access to the ECR repository in the central account. The second half is the reference to the CI/CD template with the parameters to build and deploy the ephemeral environment. You can find it here:
velez94 / cdkv2_pipelines_azure_devops
Repository with CDK pipeline definitions in YAML files for Azure DevOps.
CDK Pipelines in Azure DevOps
This repository contains the CDK pipeline definitions for integrating Azure DevOps and AWS, based on simple authentication and a multi-account setup in AWS.
Architecture Diagram
Requirements
- Bootstrap accounts for CDK deployments.
- A Service Connection with the right permissions.
How to
- Create a variable group with values for environments, for example:
az pipelines variable-group create --name cdk_pipelines_delivery --variables dev_account=123456789012 dev_region=us-east-2 --authorize true --description "Group for lab Pipelines Delivery" --project Delivery
- Create an Azure pipeline file in the CDK project to use these templates, for example:
# File: azure-pipelines-c#.yml
variables:
  - group: 'ephemeral_envrionments_demo'

resources:
  repositories:
    - repository: cdk_pipelines
      type: git
      name: DevSecOps/cdk_pipelines

trigger:
  - master

pool:
  vmImage: ubuntu-latest

stages:
  - template: templates/ci_cd.yaml@cdk_pipelines # Template reference
    parameters:
      Project: ephemeral_env_core
      Language: typescript
      Action: deploy
      ServiceConnection: EphemeralEnvDeployment
Learn how in:
Thanks for reading and sharing! 😊
Leave a comment if you want to see the deployment or a deeper demo. ✒️