
Navapon

How to Turn AWS Infrastructure Clicks into Code with Amazon Q

We can't deny that in some situations, such as a PoC phase or while exploring how a service works and what we can configure, clicking around the console is the fastest way to get started. But once we understand it and have things working, we want to manage it as code for readability, trackability, and maintainability.

This blog post will show you which AWS services can help you click in the console and export the result as code.

There are two services I would like to talk about.

Console-to-Code

Console-to-Code, powered by Amazon Q Developer, was announced as generally available for EC2, VPC, and RDS on October 10, 2024.

Let's see how it works.

Getting started

First, go to the AWS Console. In this example, I will use the RDS service.

  • Open the RDS console, click Console-to-Code at the top-right corner, and click Start Recording.

Console to Code

  • I will proceed to create an Aurora PostgreSQL Serverless v2 instance. Let's see the result.


Console-to-Code recorded my actual actions and turned the result into an AWS CLI command. CloudFormation and AWS CDK output, powered by Amazon Q, are also available, but there is no Terraform. :( No problem; we will convert it to Terraform ourselves. Below is the CLI command I got:

aws rds create-db-cluster --engine "aurora-postgresql" --engine-version "15.4" --engine-lifecycle-support "open-source-rds-extended-support-disabled" --engine-mode "provisioned" --db-cluster-identifier "database-1" --vpc-security-group-ids "sg-xxxxxxxxx" --port "5432" --db-cluster-parameter-group-name "default.aurora-postgresql15" --database-name "rds_aurora_console_to_code" --master-username "postgres" --preferred-backup-window 'null' --preferred-maintenance-window 'null' --backup-retention-period "7" --kms-key-id 'null' --db-subnet-group-name "default-vpc-xxxxxxxxxxx" --availability-zones 'null' --enable-cloudwatch-logs-exports "postgresql" --pre-signed-url "" --backtrack-window 'null' --scaling-configuration 'null' --domain 'null' --domain-iam-role-name 'null' --allocated-storage 'null' --iops 'null' --option-group-name 'null' --storage-throughput 'null' --storage-type "aurora" --db-cluster-instance-class 'null' --network-type 'null' --serverless-v2-scaling-configuration '{"MinCapacity":0.5,"MaxCapacity":1}' --performance-insights-kmskey-id 'null' --performance-insights-retention-period "465" --monitoring-interval "0" --database-insights-mode "advanced" 

Sadly, but understandably, the command generated by Amazon Q does not work as-is. Here is what I did:

  • removed all flags with a 'null' value
  • added the missing --enable-performance-insights parameter
  • added the missing --manage-master-user-password parameter
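The first cleanup step can be scripted instead of done by hand. Here is a small sketch — the clean_cli helper is my own, not part of any AWS tooling — that strips every flag whose value is the literal 'null' from the generated command:

```shell
#!/bin/sh
# clean_cli is a hypothetical helper: it deletes every "--some-flag 'null'"
# pair that Console-to-Code emits for options that were never set.
clean_cli() {
  sed -E "s/ --[a-z0-9-]+ 'null'//g"
}

# Shortened example of the generated command:
raw="aws rds create-db-cluster --engine \"aurora-postgresql\" --kms-key-id 'null' --port \"5432\" --iops 'null'"
printf '%s\n' "$raw" | clean_cli
# prints: aws rds create-db-cluster --engine "aurora-postgresql" --port "5432"
```

This only covers the removals; the two missing parameters still have to be appended manually.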

Here is the modified, working version:

aws rds create-db-cluster \
  --engine "aurora-postgresql" \
  --engine-version "15.4" \
  --engine-lifecycle-support "open-source-rds-extended-support-disabled" \
  --engine-mode "provisioned" \
  --db-cluster-identifier "rds-aurora-console-to-code" \
  --vpc-security-group-ids "sg-xxxxxxxx" \
  --port "5432" \
  --db-cluster-parameter-group-name "default.aurora-postgresql15" \
  --database-name "rds_aurora_console_to_code" \
  --master-username "postgres" \
  --manage-master-user-password \
  --backup-retention-period "7" \
  --db-subnet-group-name "default-vpc-xxxxxxxx" \
  --enable-cloudwatch-logs-exports "postgresql" \
  --pre-signed-url "" \
  --storage-type "aurora" \
  --serverless-v2-scaling-configuration '{"MinCapacity":0.5,"MaxCapacity":1}' \
  --enable-performance-insights \
  --performance-insights-retention-period "465" \
  --monitoring-interval "0" \
  --database-insights-mode "advanced"
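Once the command succeeds, the cluster keeps provisioning in the background. A quick way to check on it — a sketch assuming AWS CLI v2 with valid credentials, using the identifier from the command above:

```shell
#!/bin/sh
# Poll the new cluster until RDS reports it as "available".
# Guarded so the script degrades gracefully where the AWS CLI is not installed.
CLUSTER_ID="rds-aurora-console-to-code"

if command -v aws >/dev/null 2>&1; then
  # Built-in waiter: blocks until the cluster reaches "available".
  aws rds wait db-cluster-available --db-cluster-identifier "$CLUSTER_ID"
  aws rds describe-db-clusters \
    --db-cluster-identifier "$CLUSTER_ID" \
    --query 'DBClusters[0].[Status,Endpoint]' \
    --output text
else
  echo "aws CLI not found; skipping the availability check"
fi
```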

Now let's make it work with Terraform. I copied the command above and asked Amazon Q to convert it.

Asking Q to convert to Terraform

Q responded with very long instructions telling me what to do. For a demo, I just needed something quick, so I prompted Amazon Q:

"Please create what you told me via a bash script." Below is what I got:

#!/bin/bash
# setup_aurora.sh

# Create directory for the project
mkdir -p aurora_terraform
cd aurora_terraform

# Create main.tf
cat > main.tf << 'EOF'
provider "aws" {
  region  = var.aws_region
  version = "5.80"
}

resource "aws_rds_cluster" "aurora_cluster" {
  cluster_identifier        = var.cluster_identifier
  engine                   = "aurora-postgresql"
  engine_version           = "15.4"
  engine_mode              = "provisioned"
  database_name            = var.database_name
  master_username          = var.master_username
  manage_master_user_password = true

  vpc_security_group_ids   = [var.security_group_id]
  db_subnet_group_name     = var.db_subnet_group_name
  port                     = var.port

  db_cluster_parameter_group_name = var.db_cluster_parameter_group_name
  backup_retention_period  = var.backup_retention_period

  storage_type = "aurora"

  serverlessv2_scaling_configuration {
    min_capacity = 0.5
    max_capacity = 1.0
  }

  enable_cloudwatch_logs_exports = ["postgresql"]

  enable_performance_insights = true
  performance_insights_retention_period = 465

  monitoring_interval = 0
}

resource "aws_rds_cluster_instance" "aurora_instance" {
  cluster_identifier        = aws_rds_cluster.aurora_cluster.id
  instance_class           = "db.serverless"
  engine                   = aws_rds_cluster.aurora_cluster.engine
  engine_version           = aws_rds_cluster.aurora_cluster.engine_version
  performance_insights_enabled = true
}
EOF

# Create variables.tf
cat > variables.tf << 'EOF'
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-west-2"
}

variable "cluster_identifier" {
  description = "The identifier for the RDS cluster"
  type        = string
  default     = "rds-aurora-console-to-code"
}

variable "database_name" {
  description = "The name of the database to create"
  type        = string
  default     = "rds_aurora_console_to_code"
}

variable "master_username" {
  description = "Username for the master DB user"
  type        = string
  default     = "postgres"
}

variable "security_group_id" {
  description = "VPC Security Group ID"
  type        = string
  default     = "sg-09dfbfb45a551f09a"
}

variable "db_subnet_group_name" {
  description = "Name of DB subnet group"
  type        = string
  default     = "default-vpc-08b0f21ff827d6b88"
}

variable "port" {
  description = "The port on which the DB accepts connections"
  type        = number
  default     = 5432
}

variable "db_cluster_parameter_group_name" {
  description = "Name of the DB cluster parameter group"
  type        = string
  default     = "default.aurora-postgresql15"
}

variable "backup_retention_period" {
  description = "The days to retain backups for"
  type        = number
  default     = 7
}
EOF

# Create outputs.tf
cat > outputs.tf << 'EOF'
output "cluster_endpoint" {
  description = "The cluster endpoint"
  value       = aws_rds_cluster.aurora_cluster.endpoint
}

output "cluster_reader_endpoint" {
  description = "The cluster reader endpoint"
  value       = aws_rds_cluster.aurora_cluster.reader_endpoint
}

output "cluster_identifier" {
  description = "The RDS cluster identifier"
  value       = aws_rds_cluster.aurora_cluster.cluster_identifier
}

output "database_name" {
  description = "The database name"
  value       = aws_rds_cluster.aurora_cluster.database_name
}

output "port" {
  description = "The database port"
  value       = aws_rds_cluster.aurora_cluster.port
}

output "master_username" {
  description = "The master username for the database"
  value       = aws_rds_cluster.aurora_cluster.master_username
}
EOF

# Create terraform.tfvars (optional - for custom values)
cat > terraform.tfvars << 'EOF'
aws_region = "us-west-2"
cluster_identifier = "rds-aurora-console-to-code"
database_name = "rds_aurora_console_to_code"
master_username = "postgres"
security_group_id = "sg-09dfbfb45a551f09a"
db_subnet_group_name = "default-vpc-08b0f21ff827d6b88"
port = 5432
db_cluster_parameter_group_name = "default.aurora-postgresql15"
backup_retention_period = 7
EOF

# Make the script executable
chmod +x setup_aurora.sh

# Initialize and apply Terraform (uncomment these lines if you want automatic execution)
# terraform init
# terraform plan
# terraform apply -auto-approve

echo "Terraform configuration files have been created successfully!"
echo "To deploy the infrastructure:"
echo "1. Review and modify terraform.tfvars if needed"
echo "2. Run: terraform init"
echo "3. Run: terraform plan"
echo "4. Run: terraform apply"

I ran the generated script, and these are the issues I had to fix:

  • wrong region
  • the provider version pinned in the wrong place (modern Terraform pins providers in a required_providers block, not with a version argument inside the provider block)
  • some arguments do not exist in Terraform — check the documentation, or the VS Code Terraform extension will flag the offending lines

After fixing those things, terraform init, terraform plan, and terraform apply all worked!
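For reference, the provider fix can be sketched in the same heredoc style as the generated script. The version pin and region handling below are my own example values, not output from Amazon Q — adjust them to your account:

```shell
#!/bin/sh
# Sketch of the provider fix (version pin is an example value).
# The pin moves into a required_providers block in versions.tf.
cat > versions.tf << 'EOF'
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.80"
    }
  }
}
EOF

# The provider block itself then carries only configuration such as the region.
cat > provider.tf << 'EOF'
provider "aws" {
  region = var.aws_region # make sure var.aws_region matches your real region
}
EOF

echo "wrote versions.tf and provider.tf"
```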

We now have three instances, each created with a different approach: the console, the AWS CLI, and Terraform. I will talk about the other service later; I don't want this blog to run too long.

terraform-apply

All the generated resources are available here.

Things to Know and Consider
Here are a few things you should consider while using AWS Console-to-Code:

  • Anyone can use AWS Console-to-Code to generate AWS CLI commands for their infrastructure workflows. The code generation feature for AWS CDK and CloudFormation formats has a free quota of 25 generations per month, after which you will need an Amazon Q Developer subscription.
  • You should test and verify the generated IaC code before deployment.
  • At GA, AWS Console-to-Code only records actions in Amazon EC2, Amazon VPC, and Amazon RDS consoles. The Recorded actions table in AWS Console-to-Code only displays actions taken during the current session within the specific browser tab. It does not retain actions from previous sessions or other tabs. Note that refreshing the browser tab will result in losing all recorded actions.

Conclusion

As you can see, the embedded Console-to-Code service helps us quickly capture an idea and convert it into code in whatever format we are comfortable with. Even if the output isn't 100% copy-and-paste ready, it still saves a lot of time: we can refine the generated result to make it production-grade and fit our organization's standards. Unfortunately, the feature is still relatively new and only supports a few services. Let's see what AWS implements in the future.

