This month we landed a new client for whom we must keep track of distributors' and doctors' orders and sales. The client required that the S3 bucket stay private, with file access allowed only through pre-signed URLs. So in this blog, I will show you how to use pre-signed URLs to upload and download files from an AWS S3 bucket while keeping the bucket private.
Table of Contents
- Prerequisites
- Let's break down the task into smaller steps.
- Use Cases
- Bonus Code Snippet
- Additional Resource
- Conclusion
Prerequisites
- Basic knowledge of JavaScript
- Basic knowledge of AWS S3 bucket
- Basic knowledge of HTTP requests
- Basic knowledge of Node.js and Express.js
Let's break down the task into smaller steps.
- Setting up the backend
- Develop a function to generate an AWS S3 pre-signed URL
- Configuring AWS S3 bucket
- Connecting function to an API endpoint
- Setting up the frontend
- Connecting frontend to the API
Step 1: Setting up the backend
mkdir backend
cd backend
npm init -y
npm install express aws-sdk
touch index.js
Windows users can run `type nul > index.js` to create the file.
// index.js
const express = require('express')
const app = express()
const AWS = require('aws-sdk')

app.listen(3000, () => {
  console.log('Server is running on port 3000')
})
Step 2: Develop a function to generate an AWS S3 pre-signed URL
// index.js
const express = require('express')
const app = express()
const AWS = require('aws-sdk')

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID, // Your AWS Access Key ID
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY, // Your AWS Secret Access Key
  region: process.env.AWS_REGION, // Your AWS region
  signatureVersion: 'v4', // Use Signature Version 4
})

const bucketName = process.env.AWS_BUCKET_NAME // Your bucket name

const awsS3GeneratePresignedUrl = async (
  path,
  operation = 'putObject', // Default is putObject; use getObject for downloads
  expires = 60
) => {
  const params = {
    Bucket: bucketName, // Bucket name
    Key: path, // File name you want to save as in S3
    Expires: expires, // URL validity in seconds; 60 is the default here
  }
  const uploadURL = await s3.getSignedUrlPromise(operation, params)
  return uploadURL
}

app.listen(3000, () => {
  console.log('Server is running on port 3000')
})
Step 3: Configuring AWS S3 bucket
If we try to connect the API to our function, we will get a CORS (Cross-Origin Resource Sharing) error. To fix this, we need to configure our S3 bucket to allow requests from our front end, specifically PUT and GET requests. To do this, we add a CORS configuration to our S3 bucket in the following way:
- Open the Amazon S3 console at https://console.aws.amazon.com/s3/
- Search for the bucket you want to configure and click on it
- Click on the `Permissions` tab
- Scroll down to the `Cross-origin resource sharing (CORS)` section
- Click on `Edit` and add the following policy
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT", "GET", "HEAD"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
  }
]
- Click on `Save changes`
- Also make sure that Block public access (bucket settings) is turned on to keep your bucket private (optional)

Change `AllowedOrigins` to your front end's origin to make it more secure, or use the wildcard `*` to allow access from any origin.
Step 4: Connecting function to an API endpoint
We will have 2 endpoints, one to generate a pre-signed URL for uploading a file and the other to generate a pre-signed URL for downloading a file.
// index.js
const express = require('express')
const app = express()
const AWS = require('aws-sdk')

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID, // Your AWS Access Key ID
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY, // Your AWS Secret Access Key
  region: process.env.AWS_REGION, // Your AWS region
  signatureVersion: 'v4', // Use Signature Version 4
})

const bucketName = process.env.AWS_BUCKET_NAME // Your bucket name

const awsS3GeneratePresignedUrl = async (
  path,
  operation = 'putObject', // Default is putObject; use getObject for downloads
  expires = 60
) => {
  const params = {
    Bucket: bucketName, // Bucket name
    Key: path, // File name you want to save as in S3
    Expires: expires, // URL validity in seconds; 60 is the default here
  }
  const uploadURL = await s3.getSignedUrlPromise(operation, params)
  return uploadURL
}

app.get('/generate-presigned-url', async (req, res) => {
  const { path } = req.query
  const uploadURL = await awsS3GeneratePresignedUrl(path, 'putObject', 60)
  res.send({ path, uploadURL })
})

app.get('/download-presigned-url', async (req, res) => {
  const { path } = req.query
  const downloadURL = await awsS3GeneratePresignedUrl(path, 'getObject', 60)
  res.send({ downloadURL })
})

app.listen(3000, () => {
  console.log('Server is running on port 3000')
})
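One detail worth handling before wiring up a client: object keys must be URL-encoded when passed as a query parameter, or keys containing spaces or slashes will break the request. Here is a small sketch of a helper that builds the request URL for either endpoint; the `API_BASE` constant and the helper name are assumptions, not part of the article's code.

```javascript
// Hypothetical helper: build the query URL for either presign endpoint.
// API_BASE is an assumption; point it at wherever your Express server runs.
const API_BASE = 'http://localhost:3000'

function buildPresignedUrlRequest(path, kind = 'upload') {
  const endpoint =
    kind === 'upload' ? 'generate-presigned-url' : 'download-presigned-url'
  // encodeURIComponent keeps keys with spaces or slashes from breaking the query string
  return `${API_BASE}/${endpoint}?path=${encodeURIComponent(path)}`
}
```

A key like `reports/Q1 2024.pdf` would otherwise be truncated or mangled by some HTTP clients at the space.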
Step 5: Setting up the frontend
For the front end, I am using React.js, but you can use any front-end framework of your choice. We will also install `axios` to make HTTP requests.
npx create-react-app frontend
cd frontend
npm install axios
Step 6: Connecting frontend to the API
// App.js
import React, { useState } from 'react'
import axios from 'axios'

export default function App() {
  const [file, setFile] = useState(null)
  const [downloadURL, setDownloadURL] = useState('')

  // Ask the backend for a pre-signed upload URL, then PUT the file to S3
  const uploadFile = async (selectedFile) => {
    if (!selectedFile) return
    const { data } = await axios.get(
      `http://localhost:3000/generate-presigned-url?path=${encodeURIComponent(selectedFile.name)}`,
    )
    await axios.put(data.uploadURL, selectedFile, {
      headers: { 'Content-Type': selectedFile.type },
    })
  }

  // Ask the backend for a pre-signed download URL for the given key
  const generateDownloadURL = async (path) => {
    const { data } = await axios.get(
      `http://localhost:3000/download-presigned-url?path=${encodeURIComponent(path)}`,
    )
    setDownloadURL(data.downloadURL)
  }

  return (
    <div>
      <input type="file" onChange={(e) => setFile(e.target.files[0])} />
      <button onClick={() => uploadFile(file)}>Upload file</button>
      <button onClick={() => generateDownloadURL('file1.txt')}>
        Generate Download URL
      </button>
      {downloadURL && (
        <a href={downloadURL} download>
          Download file
        </a>
      )}
    </div>
  )
}
Use Cases
- Upload files to the S3 bucket from your front end without exposing your AWS credentials.
- Download files from the S3 bucket to your front end without exposing your AWS credentials.
- Directly upload files to the S3 bucket from your front end without creating API to handle file uploads.
Bonus Code Snippet
// Code to get uploadURL and PUT the file to the S3 bucket using the fetch API.
const putFileToS3Api = async ({ uploadURL, file }) => {
  try {
    if (!file) throw new Error('No file provided')
    const res = await fetch(uploadURL, {
      method: 'PUT',
      headers: {
        'Content-Type': file.type || 'application/octet-stream',
      },
      body: file,
    })
    return res
  } catch (error) {
    console.error(error)
  }
}
const getUploadUrlApi = async ({ filename }) => {
  try {
    // Update the URL to match your API endpoint.
    const res = await axios.get('http://localhost:3000/generate-presigned-url', {
      params: { path: filename },
    })
    return res
  } catch (error) {
    console.error(error)
  }
}
export const uploadFileToS3Api = async ({ file }) => {
  try {
    if (!file) throw new Error('No file provided')
    const generateUploadRes = await getUploadUrlApi({ filename: file.name })
    if (!generateUploadRes?.data?.uploadURL)
      throw new Error('Error generating pre-signed URL')
    const uploadRes = await putFileToS3Api({
      uploadURL: generateUploadRes.data.uploadURL,
      file,
    })
    if (!uploadRes.ok) throw new Error('Error uploading file to S3')
    return {
      message: 'File uploaded successfully',
      uploadURL: generateUploadRes.data.uploadURL,
      path: generateUploadRes.data.path,
    }
  } catch (error) {
    console.error(error)
  }
}
Call the `uploadFileToS3Api` function with the file you want to upload to the S3 bucket, and it handles the whole flow in one go. Additionally, you can use `await Promise.all` to upload multiple files at once.
const uploadFiles = async (files) => {
  const uploadPromises = files.map((file) => uploadFileToS3Api({ file }))
  const uploadResults = await Promise.all(uploadPromises)
  console.log(uploadResults)
}
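One caveat with `Promise.all`: a single failed upload rejects the whole batch. If partial success is acceptable, `Promise.allSettled` collects every result instead. Here is a sketch of that variant; the `uploadFn` parameter stands in for `uploadFileToS3Api` so the batching logic can be shown on its own.

```javascript
// Sketch: batch uploads where one failure doesn't abort the rest.
// uploadFn stands in for uploadFileToS3Api from the snippet above.
async function uploadFilesSettled(files, uploadFn) {
  const results = await Promise.allSettled(files.map((file) => uploadFn({ file })))
  // Map each settled promise to a simple { name, ok, detail } summary
  return results.map((result, i) => ({
    name: files[i].name,
    ok: result.status === 'fulfilled',
    detail: result.status === 'fulfilled' ? result.value : String(result.reason),
  }))
}
```

This way a flaky network on one file still lets the other uploads finish, and the caller can retry just the failed entries.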
Additional Resource
More information about AWS S3 pre-signed URLs can be found here
Conclusion
That's it. You have successfully learned how to generate pre-signed URLs for uploading and downloading files from the S3 bucket while keeping the bucket private.
I hope you find this blog helpful. If you have any questions, feel free to ask in the comments below or contact me on Twitter @thesohailjafri
Top comments (15)
There are a number of security problems here.
Never use plaintext AWS credentials
Don't have public buckets
AWS is usually pretty clear about the risks of not having buckets configured to block all public access. It is far more secure to have your users upload the file to a server which then performs an `s3:PutObject` call. If someone is able to get your service to give them signed URLs for uploading contents, you may very quickly find harmful files uploaded to your bucket.
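If you do stick with pre-signed uploads, one mitigation for the commenter's point is to validate the request server-side before handing out a URL at all. A minimal sketch follows; the allowlist, key-length cap, and function name are all illustrative assumptions, not anything from the article.

```javascript
// Sketch: validate an upload request before generating a pre-signed URL.
// The allowlist and the key-length cap are assumptions; tune them to your app.
const ALLOWED_CONTENT_TYPES = new Set(['image/png', 'image/jpeg', 'application/pdf'])
const MAX_KEY_LENGTH = 255

function isSafeUploadRequest({ path, contentType }) {
  // Reject missing, empty, or oversized keys
  if (typeof path !== 'string' || path.length === 0 || path.length > MAX_KEY_LENGTH) {
    return false
  }
  // Reject path traversal attempts in the object key
  if (path.includes('..')) return false
  // Only hand out URLs for content types the application actually accepts
  return ALLOWED_CONTENT_TYPES.has(contentType)
}
```

The `/generate-presigned-url` route could call a guard like this and return `400` on failure, so an attacker who can reach the endpoint still cannot stage arbitrary file types in the bucket.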
Your CORS settings invite SSRF and CSRF
Your `axios` error handling is insecure. The `axios` module includes all authorization headers in the error object it returns, so your `console.error()` will log sensitive information.
Finally, you're using the old version of the AWS SDK.
AWS-SDK v2 is being deprecated soon: aws.amazon.com/blogs/developer/ann...
The V3 of the SDK is pretty easy to use, and the nice thing about the change is that it's going to be similarly functional across all of the various libraries (e.g. Rust, Java, JavaScript), using similar patterns. Gone will be the days of language-specific AWS SDK patterns.
Damn, thank you, I really mean it. That's a lot to take in, but I will study the points you mentioned one by one and try to improve my practice in existing and upcoming projects 🤝🙌😬
In a production setup the API routes mostly sit behind auth middleware, but I guess I can move the entire logic to the server, which would take single/multiple files and return the uploaded paths to keep it modular. That way I don't expose my bucket in any way.
I very much appreciate you receiving that well. Security is hard, and security in the cloud is harder. There are a lot of tools but it's really hard to keep up with all of them.
If it helps, I never use IAM users. For humans, we should use federated authentication using something like Okta, or Auth0, and for infrastructure running code in AWS we should use execution roles or instance profiles.
Nobody can steal credentials which do not exist, or (in the case of AWS STS) are ephemeral and expire quickly.
Okay understood, I will try to practice using okta or Auth0 for future projects
Wholesome exchange right here.... Love this community 🙌
@manchicken
1) So how should I store them? How do you store them?
Fetch them at runtime and discard them when you no longer need them. Also, use instance profiles and STS when possible, avoiding having long-lived secrets in the first place.
In 2024, there is no good reason to rely on access tokens and user passwords in AWS.
Thanks. It is not my decision to use S3.
Nice share! 🙌
Thanks lee
I read multiple articles related to AWS signed URLs and I would say that this one pieces them together.
Good read!
Thanks brother, I will try to update the article with more details on security
Great article brother..... Helped me a lot
Best article I ever went through 💯