Neel

Share Objects from Private S3 Buckets using CloudFront

For those who have wandered this path already: no, it is not just pre-signed URLs

You are a budding developer who has just started learning the concepts of AWS. After tinkering with those EC2 instances and racking up an unaffordable amount on your AWS invoice, you need a break. So you resort to the friendliest AWS service of all, the Simple Storage Service (S3). You open the console, create a bucket with a unique name, keep the bucket private, add the bucket policies (if you are savvy enough), then save and start uploading a bunch of screenshots to see if everything has gone as per plan.

If you have a bucket, chances are that the bucket is a private one. Exposing a bucket to the public is a task meant for rugged souls, not us developers. With a private bucket, though, sharing its objects becomes a bit challenging. You can't just copy an object's URL and share it around; opening it will show an XML error response stating that you are not allowed to access the resource. After Googling for a bit, you find out that there is this thing called a "pre-signed URL" which does the trick for you, and you take that route directly. I have been there and done that, and this is the story of how I found out that pre-signed URLs are not a one-stop solution for every use case

Pre-signed URLs

Pre-signed URLs are the simplest secure way to share objects from a private S3 bucket. They are signed and short-lived, which makes them perfect for sharing individual objects with the desired parties. Generating a pre-signed download URL is straightforward, and the screenshot below shows how it can be done from the AWS console, the CLI, and the SDK

[Screenshot: generating a pre-signed URL from the AWS console, the CLI, and the SDK]
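
For the SDK route, a minimal boto3 sketch looks like this (the bucket and key names are placeholders):

```python
import boto3

# Uses whatever credentials are configured (IAM user keys, a profile, or a role)
s3 = boto3.client("s3")

# Generate a pre-signed GET URL that stays valid for one hour
presigned_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "screenshots/demo.png"},
    ExpiresIn=3600,
)
print(presigned_url)
```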

The resultant URL will be in the following format

https://s3.{region}.amazonaws.com/{bucket name}/{object key}?
    X-Amz-Algorithm=AWS4-HMAC-SHA256&
    X-Amz-Credential={credentials generated by AWS}&
    X-Amz-Date={UTC datetime}&
    X-Amz-Expires={expiry in seconds}&
    X-Amz-Signature={unique signature}

There are a few good things about this URL,

  • Its validity is limited. If you set the expiration value to 3600 seconds, the URL will be invalid after an hour and the object can no longer be accessed through it

  • The URL is tamper-proof. Once the pre-signed URL is generated, none of its parameters can be edited. No one can change the object key or the expiration value in the URL to do any funny stuff; modifying any part of the URL will make it invalid

  • The URL is linked to the user generating it and inherits the policies assigned to the user. By controlling the policy, the usage of the URL can be restricted

All the above points make a pre-signed S3 URL a viable option for many use cases. However, the same advantages might turn out to be drawbacks in certain scenarios. For instance,

  • Pre-signed URLs cannot have an expiry of more than 7 days. If your use case requires the object to stay accessible for longer than that, they simply will not work

  • Pre-signed URLs generated by a user are bound to that user's credentials. When the credentials become invalid, the URL becomes invalid too. This will make more sense when you use a Lambda function to generate these URLs

What was my use case?

Here is my story of trying out a pre-signed URL and moving away from it, as it did not suit all my needs.

The AWS Lambda function I was building was supposed to collect a bunch of PDF documents stored in an S3 bucket, generate an HTML email with some details, and include download links for all the documents in the email sent to the recipients. Simple, innit?

Failed Approaches

Using a Lambda function with an execution role

The first pothole I hit was the validity of the URLs generated from the Lambda function. You can assign an execution role to a Lambda function, and it will only be able to access the resources defined in the role's policy. This ensures that the Lambda function cannot perform any unintended activities. A sample IAM policy defined for an execution role will look something like this 👇

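A representative policy for such a role (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::my-private-bucket/*"
    }
  ]
}
```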

The above policy ensures that the function can only get, put, and delete objects from the selected S3 bucket. The execution role is not bound to any real IAM user, so to perform any action, Lambda assumes the role and receives temporary credentials to access the resources. These temporary credentials are bound to a short-lived session, and the default session duration is 1 hour


No matter what the maximum session duration is, the session cannot be controlled once the credentials are used outside the scope of the Lambda function, such as to generate a pre-signed URL. When the temporary credentials used to generate the URL expire, the URL expires with them, which makes generating pre-signed URLs from a Lambda execution role a non-viable option

Using a Lambda function with IAM credentials

After discovering this limitation, I knew there was another way to do it with Lambda functions. This is not new or innovative in any way: when you generate pre-signed URLs using the AWS CLI, you will notice that they stay valid for a week (provided you set the expiry to a week). This makes sense because we "mostly" configure the CLI with the access key and secret of an actual IAM user, so URLs generated with those credentials stay valid for up to 7 days

🚨⚠️ Approach with Caution ⚠️🚨

  • The same can be done from Lambda functions by creating a new IAM user (using CDK/CloudFormation is recommended)

  • Storing the access key & secret as a new secret in AWS Secrets Manager

  • Creating a new S3 client using a StaticCredentialsProvider (or its equivalent) that uses the access key and secret fetched from Secrets Manager (see the sketch after this list)
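
For illustration only, a minimal boto3 sketch of this approach could look like the following (the secret name and its JSON field names are assumptions made for this sketch):

```python
import json

import boto3

# Fetch the IAM user's static keys from Secrets Manager.
# The secret name and its field names are hypothetical.
secrets = boto3.client("secretsmanager")
secret = json.loads(
    secrets.get_secret_value(SecretId="s3-signer-user/keys")["SecretString"]
)

# Build an S3 client with the static credentials (boto3's equivalent of a
# StaticCredentialsProvider) and generate a URL valid for up to 7 days
s3 = boto3.client(
    "s3",
    aws_access_key_id=secret["access_key_id"],
    aws_secret_access_key=secret["secret_access_key"],
)
presigned_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "documents/report.pdf"},
    ExpiresIn=7 * 24 * 3600,
)
```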

This will do the trick, but it is not without its drawbacks. First of all, it is not a good idea to use statically configured credentials to access AWS resources from Lambda functions, and the IAM user could spiral out of control if its access is also shared for local use or other purposes. If you are working alongside a good cybersecurity team, they might go berserk hearing about this approach

So this method is definitely out the window with no second thoughts πŸ€·β€β™‚οΈ

The one that worked!

Using CloudFront signed URL

The final solution that perfectly suited my requirements came in the form of CloudFront signed URLs

In Amazon's own words, "Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds"

I started using CloudFront to deliver my static website hosted in an S3 bucket to users located in different regions of the world. It caches the resources at nearby edge servers, thus providing low-latency access to users no matter where they are located

This means that you can serve any content from your S3 bucket using CloudFront, and custom settings can control the access level of the resources.

The process is not simple but it is approachable. Let's see how it is done

Creating a new CloudFront distribution

The first step is to create a new CF distribution. As it is a CDN that works across different regions, the service is not bound to any single region (such as us-east-1 or ap-south-1). It is available globally. Search for the "CloudFront" service and click on "Create a CloudFront distribution" to set up a new distribution

[Screenshot: the CloudFront distribution creation form]

The above is the creation form and the purpose of each field is as follows,

  • Origin domain - The source of your content. This field lists all the available S3 buckets from your account; select the bucket from which you wish to serve the content

  • Origin access - The access boundary for the S3 bucket

    Public - Provided the bucket has public access, this option will allow anyone to access the objects from the bucket using the generated CloudFront URL (e.g. https://custom.cloudfront.com/image.png)

    Origin access control settings - This is the recommended access level, and we will use it to create a new control setting that allows only signed requests to access the content

  • Restrict viewer access - Controls the viewing access of the content from the bucket

    No - Anyone can download/view the content using the custom CloudFront URL (e.g. https://custom.cloudfront.com/image.png)

    Yes - The objects will be accessible only using a signed URL (or a signed cookie). This is the recommended setting
The above three are the notable fields. You can follow the input selection from the above screenshot to create a bare distribution. Remember that we will come back to the "Origin access" and "Restrict viewer access" options to make the entire setup secure

Once you are done with the creation, you will land on the distribution page, where you will see the custom domain set up for the new distribution. It can be used to access the content from the S3 bucket, but not yet

Create a Key Pair

Now is the time to create a new key pair using openssl. We will generate a private and public key from the terminal and configure the public key in CloudFront. If you are using a Mac, openssl can be installed using brew. It comes pre-installed on most Linux distros, I believe; if not, it can be installed using the official package manager. On Windows, if you have Git Bash installed, openssl comes bundled with it

Run the following commands to generate a private and public key pair

# Generates the private key for the server
~ openssl genrsa -out private_key.pem 2048

# To view the private key
~ cat private_key.pem

-----BEGIN PRIVATE KEY-----
RANDOMRANDOMRANDOMRANDOM
...
-----END PRIVATE KEY-----

# To extract the public key from the private key
~ openssl rsa -in private_key.pem -pubout -out public_key.pem

# To view the public key
~ cat public_key.pem

-----BEGIN PUBLIC KEY-----
RANDOMRANDOMRANDOMRANDOM
...
-----END PUBLIC KEY-----

Storing the Private Key as a Secret

Open the generated private key file using a text editor or in the terminal and copy the entire content of the file. Open Secrets Manager from the AWS console and click on "Store a new secret" to create a new secret with the private key value.

  • Choose "Other type of secret" as the secret type and paste the copied private key into the "Key/value pairs" field as plaintext

  • In the next section, give a valid secret name (e.g. clodufront/privatekey), leave the rest of the fields at their defaults, and save the secret

[Screenshot: storing the private key as a new secret in Secrets Manager]

# CLI command to create a new secret from the private key file
~ aws secretsmanager create-secret \
    --name "clodufront/privatekey" \
    --secret-string file://private_key.pem

Create a new CloudFront Public Key and Key Group

From the CloudFront console, open the "Public Keys" section from the sidebar. Click on "Create public key", paste the entire content copied from the public_key.pem file, and save it

# Create a CloudFront public key using the CLI
aws cloudfront create-public-key \
    --public-key-config \
    "{\"CallerReference\":\"cli-1\",\"Name\":\"testkey\",\"EncodedKey\":\"-----BEGIN PUBLIC KEY-----\nMIIBI<....>QAB\n-----END PUBLIC KEY-----\n\"}"

After creating the new public key, open the "Key Groups" section from the sidebar. Create a new key group by selecting the newly added public key from the dropdown

[Screenshot: creating a key group with the new public key]

aws cloudfront create-key-group \
    --key-group-config \
    '{"Name":"NewKeyGroup","Items":["K271M92XK8HX1O"]}'

Origin Access Control and Viewer Access Restriction

Remember the two fields from the creation form that we said we would revisit later? Well, it is time to do that

Let us start by creating a new Origin Access Control setting. Open the new distribution you have created and navigate to the "Origins" tab in the console. Select the only origin from the list and click "Edit".

  • On the edit page, change the "Origin access" selection from "Public" to "Origin access control settings"

  • On selecting this option, you will see a new action button to "Create control setting". Click this to open the popup that will let you create a new setting

[Screenshot: the "Create control setting" popup]

  • Give the setting a name and set the signing behaviour to "Sign requests". This makes CloudFront sign every HTTP request that comes in, and the signature is forwarded to S3 to fetch the actual content. If the signature validation fails, the content will not be served

  • Set the "Origin type" to "S3" and save the setting

  • After saving the control setting, you will see an option to update the S3 bucket policy. This allows the CloudFront distribution to fetch content from the bucket, for which the "s3:GetObject" action needs to be granted. Click the "Copy policy" button to copy the exact policy that needs to be configured, then click "Go to S3 bucket permissions" to update the bucket policy with the copied content (a representative policy is shown after this list)

    [Screenshot: the option to copy the generated bucket policy]

  • After saving the bucket policy, come back to the "Edit origin" page, leave the rest of the fields as they are, and save the changes
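
The policy the console generates is specific to your distribution; a representative version (the bucket name, account ID, and distribution ID below are placeholders) looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudfront.amazonaws.com"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-private-bucket/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }
      }
    }
  ]
}
```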

We are not done yet. There is one final bit that completes the entire circle. From the distributions page, navigate to the "Behaviors" tab. Select the default behaviour from the list and click "Edit"

  • On the edit behaviour page, scroll down to find the "Restrict viewer access" input and change the option to "Yes"

  • This will show you an option to select the "Trusted authorization type" and from the inputs, select "Trusted key groups"

  • From the key group dropdown, select the new group that you set up a while ago

    [Screenshot: selecting the trusted key group]

  • Once the group is selected, leave the other fields as they are and save the changes

Moment of truth

If you have followed along so far and applied all the changes, you are close to the ultimate goal. Let's see how to generate a signed URL using Python and Java Lambda functions

Set up the following as environment variables for the Lambda functions,

  • CLOUDFRONT_DOMAIN - The domain of the created distribution

  • CLOUDFRONT_PUBLIC_KEY_ID - The unique ID of the public key. This can be found under the "Public Keys" section in the CloudFront sidebar (e.g. K1X0XXXXXXXXXX)

  • SECRET_ID - The name of the secret that holds the private key. In this article, I have set it as "clodufront/privatekey"

Python Lambda Function
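
Here is a minimal sketch of what such a handler could look like, assuming the environment variables above and that the cryptography package is bundled with the function (or provided through a layer). The event shape used here is hypothetical.

```python
import datetime
import os

import boto3
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

CLOUDFRONT_DOMAIN = os.environ["CLOUDFRONT_DOMAIN"]
CLOUDFRONT_PUBLIC_KEY_ID = os.environ["CLOUDFRONT_PUBLIC_KEY_ID"]
SECRET_ID = os.environ["SECRET_ID"]


def rsa_signer(message: bytes) -> bytes:
    # Fetch the PEM private key from Secrets Manager and sign the CloudFront
    # policy with RSA-SHA1, which is the scheme CloudFront expects
    secret = boto3.client("secretsmanager").get_secret_value(SecretId=SECRET_ID)
    private_key = serialization.load_pem_private_key(
        secret["SecretString"].encode("utf-8"), password=None
    )
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


def lambda_handler(event, context):
    # "object_key" is a hypothetical input field for this sketch
    object_key = event["object_key"]
    url = f"https://{CLOUDFRONT_DOMAIN}/{object_key}"
    expires_at = datetime.datetime.utcnow() + datetime.timedelta(days=7)

    signer = CloudFrontSigner(CLOUDFRONT_PUBLIC_KEY_ID, rsa_signer)
    signed_url = signer.generate_presigned_url(url, date_less_than=expires_at)
    return {"signed_url": signed_url}
```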

Java Lambda Function

The Ultimate "Signed URL"

Both these functions will return the signed URL as a string that can be used to access a resource from the S3 bucket. The following are a few highlights about this URL,

  • It is not bound to the Lambda's execution role, so it is not dependent on the temporary session

  • In the above examples, the expiry is set to 7 days, but unlike an S3 pre-signed URL, it can be extended beyond that

  • The signed URL depends on the key pair, which can be rotated easily whenever needed

Conclusion

It took me a couple of days to land on this solution, so I wanted to share it here. Hope this article helps a few of you.

As always, Happy Hacking!β˜•
