Collect Aurora audit logs in Firehose

In our last post, we enabled audit logs using parameter groups in Aurora Postgres.

Now we are collecting the required Aurora logs in CloudWatch, but we still need to filter those logs and send them to S3 for analysis and long-term archival.

Why is this useful?
We can set a short retention period on the CloudWatch logs and keep the audit logs in S3, which helps to save cost. In other use cases, we could also send the logs to an external destination for audit or analysis.
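
As a rough sketch, the retention side of this can be set with boto3; the log group name, region, and retention period below are placeholder values for illustration, not settings from this setup:

import boto3

# Placeholder log group and region; point this at your Aurora audit log group,
# e.g. /aws/rds/cluster/<cluster-name>/audit
logs = boto3.client("logs", region_name="<region>")

logs.put_retention_policy(
    logGroupName="/aws/rds/cluster/<cluster-name>/audit",
    retentionInDays=30,  # keep only 30 days in CloudWatch; S3 holds the long-term copy
)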

At this point, I am assuming you already have your application logs in CloudWatch. For our use case, I am collecting Aurora logs in CloudWatch as explained earlier in this series, but the steps below should work for any logs in CloudWatch.

To send logs from CloudWatch to S3, we will create a subscription filter, which streams log data to supported destinations in near real time.

What is a Subscription Filter in CloudWatch?
A CloudWatch subscription filter provides filter patterns and options to deliver log events to other AWS services, and it can deliver events to multiple destinations.

CloudWatch offers several destination options when creating a subscription filter:

  • OpenSearch
  • Kinesis
  • Data Firehose
  • Lambda


We will go with Firehose considering log volume and cost, and because deploying Firehose is comparatively easier for our goal of streaming logs to S3.
Firehose can also transform records or convert their format before delivery to S3.

To begin with, we need to follow these steps.

  1. Create S3 bucket
  2. Create Firehose Stream
  3. Create IAM role for Firehose
  4. Create CloudWatch subscription filter
  5. Validation

We follow this order because the S3 bucket is needed when creating the Firehose stream, and the Firehose stream must exist before we can create the CloudWatch subscription filter.

Step1: Create S3 bucket
Creating the S3 bucket is straightforward: search for the S3 service and create a bucket with default settings.
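
For reference, a minimal boto3 sketch of this step could look like the following; the bucket name is just an example:

import boto3

s3 = boto3.client("s3", region_name="<region>")

# Bucket names must be globally unique; this one is a placeholder.
s3.create_bucket(Bucket="my-aurora-audit-archive")

# Outside us-east-1 the region must be passed explicitly, e.g.:
# s3.create_bucket(
#     Bucket="my-aurora-audit-archive",
#     CreateBucketConfiguration={"LocationConstraint": "<region>"},
# )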

Step2: Create Firehose Stream


You can keep the default option that lets Firehose create the required IAM role by itself.

It can take a few minutes for the Firehose stream to be created and show an Active status.


Note: the destination of a Firehose stream can't be changed after the stream is created.
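
If you prefer to script this step, a minimal boto3 sketch could look like the one below; the stream name, role ARN, bucket ARN, and buffering values are assumptions for illustration, not settings taken from the console walkthrough:

import boto3

firehose = boto3.client("firehose", region_name="<region>")

firehose.create_delivery_stream(
    DeliveryStreamName="aurora-audit-to-s3",
    DeliveryStreamType="DirectPut",  # CloudWatch Logs pushes records directly into the stream
    ExtendedS3DestinationConfiguration={
        # Role that allows Firehose to write into the destination bucket
        "RoleARN": "arn:aws:iam::<account-id>:role/<firehose-delivery-role>",
        "BucketARN": "arn:aws:s3:::my-aurora-audit-archive",
        # Example buffering: flush every 5 MiB or 300 seconds, whichever comes first
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
        "CompressionFormat": "GZIP",
    },
)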

Step3: Create IAM role to allow CloudWatch logs -> Firehose

Create IAM policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutToFirehose",
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:<region>:<account-id>:deliverystream/<your-firehose-name>"
    }
  ]
}


Create an IAM role named LogsToFirehose, attach the policy above, and update its trust policy as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.<region>.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

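
The same role can also be created with boto3; this sketch mirrors the policy and trust relationship above, using the same region, account id, and stream name placeholders as the console steps:

import json
import boto3

iam = boto3.client("iam")

# Trust policy: let CloudWatch Logs in your region assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "logs.<region>.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: allow the role to put records into the Firehose stream.
firehose_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPutToFirehose",
        "Effect": "Allow",
        "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
        "Resource": "arn:aws:firehose:<region>:<account-id>:deliverystream/<your-firehose-name>",
    }],
}

iam.create_role(
    RoleName="LogsToFirehose",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="LogsToFirehose",
    PolicyName="AllowPutToFirehose",
    PolicyDocument=json.dumps(firehose_policy),
)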

Step4: Create CloudWatch Subscription Filter
Now, switch back to our log group in CloudWatch.


Click on Create Amazon Data Firehose subscription filter


After adding a filter name, we select the Firehose stream in the current account that we created in Step2.

We can also add a filter pattern if we want to narrow down the logs further before sending them to Firehose, and optionally add a prefix.


Also, assign the IAM role that grants CloudWatch permission to deliver logs to Firehose; we created this role in Step3. Then click the Create Subscription button.
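
The console steps above can also be expressed as a single boto3 call, roughly like this; the log group, filter name, and ARNs are placeholders matching the earlier steps:

import boto3

logs = boto3.client("logs", region_name="<region>")

logs.put_subscription_filter(
    logGroupName="/aws/rds/cluster/<cluster-name>/audit",  # the log group from this series
    filterName="aurora-audit-to-firehose",
    filterPattern="",  # empty pattern forwards every event; set a pattern to filter further
    destinationArn="arn:aws:firehose:<region>:<account-id>:deliverystream/<your-firehose-name>",
    roleArn="arn:aws:iam::<account-id>:role/LogsToFirehose",  # role from Step3
)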

We should now see the subscription filter listed on our log group.


Step5: Validate logs
After creating the subscription filter, check the Firehose stream's monitoring metrics to see whether data is being collected.


From the metrics, we can confirm that logs are being collected.

Now we need to go to S3, our final destination, to confirm the logs are arriving in the bucket.


We should see the logs organized in the bucket under year, month, and day prefixes.
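
A quick boto3 check can confirm both the stream status and that objects are landing in the bucket; the stream and bucket names are the placeholders used earlier:

import boto3

firehose = boto3.client("firehose", region_name="<region>")
s3 = boto3.client("s3")

# The stream should report an ACTIVE status.
stream = firehose.describe_delivery_stream(DeliveryStreamName="aurora-audit-to-s3")
print("Firehose status:", stream["DeliveryStreamDescription"]["DeliveryStreamStatus"])

# Firehose writes objects under a year/month/day/hour prefix by default.
resp = s3.list_objects_v2(Bucket="my-aurora-audit-archive", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

Keep in mind that CloudWatch Logs delivers subscription data to Firehose in gzip-compressed form, so unless you add a transformation or decompression step on the stream, the objects in S3 contain compressed CloudWatch Logs JSON that you decompress when reading.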

That concludes our goal.
