After coming across the February Linode hackathon, I started experimenting with Linode. However, I had a hard time figuring out how to upload images from my Next.js app to a Linode storage bucket. This was probably because I had never worked with storage buckets before. But, after some heavy googling and experimentation, I figured it out and decided to create this article to share how I did it.
Below, I will guide you through the process of using pre-signed URLs to upload images to a Linode bucket with the AWS SDK for JavaScript. To select images, I’ll create a simple form with an input element that accepts images.
Prerequisites
Before you begin, you should have:
A good grasp of React concepts.
A basic Next.js application: I’m assuming you have a Next.js application already. If you don’t, follow the instructions from the official getting started guide to create one.
Node.js and npm installed: If you don't have them installed, you can download them by following these tutorials for Windows and Linux.
A code editor. I'm using VS Code.
Setting Up Linode
First things first, sign up for a Linode account following these instructions:
1. Go to the Linode website and click the "Sign Up" button in the top right corner of the homepage.
2. Follow the prompts to verify your account.
3. Choose your preferred payment method, either credit card or PayPal, and enter the relevant details. If you are a first-time user, you will be given a $100 credit (it’s been enough for me to experiment so far).
4. Agree to Linode's terms of service and privacy policy by clicking on the checkboxes.
5. Click on the "Create Account" button to complete the process.
6. Verify your email, then log in to Linode.
Creating a Linode Object Storage Bucket
The object storage bucket is where you’ll store the images. You could store them in a database, but your database performance would end up being slow since media files take up a lot of space. Storing the images in a storage bucket and then saving the file metadata like its filenames and file paths in the database is better for performance.
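To make that concrete, here is a sketch of the kind of metadata record you might save in the database per image. The field names are my own illustration, not a required schema:

```javascript
// A hypothetical metadata record saved per uploaded image.
// The image bytes live in the bucket; only these small fields go in the database.
const imageRecord = {
  filename: "avatar.png",               // original name chosen by the user
  filepath: "1234-avatar.png",          // unique key of the object in the bucket
  uploadedAt: new Date().toISOString(), // when the image was stored
};

console.log(Object.keys(imageRecord).length); // 3
```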
Create a Linode storage bucket by following these steps:
1. Log in to the Linode Cloud Manager at cloud.linode.com.
2. In the left-hand sidebar, click on "Object Storage" under the "Storage" section.
3. Click the "Create a Bucket" button in the top right corner.
4. Enter a name for your bucket and select the region where you want it to be stored.
5. Click the "Create Bucket" button to create your new storage bucket.
6. Navigate to your new bucket and, on the Access tab, select “Public-read” from the access list dropdown to allow everyone, including your application, to read the objects in the bucket. This will be useful when you start serving the images on your site.
Once you’ve created the bucket, create an access key that you’ll use to interact with the bucket by clicking the “Create access key” button on your object storage dashboard.
On the modal that opens, copy the access key id and the secret access key and save them in a .env file in your application.
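For reference, the .env file might look like the following. The variable names are my own choice, not something Linode requires:

```shell
# .env (keep this file out of version control)
LINODE_ACCESS_KEY_ID=your_access_key_id
LINODE_SECRET_ACCESS_KEY=your_secret_access_key
```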
In the next section, we’ll create a form that accepts images from users.
Accepting Images From Users
In the pages directory, create a new file called upload.js.
Then, in this file, create a form containing an input element of type file that allows users to select an image from their file system.
```jsx
import { useState } from "react";

export default function Upload() {
  const [file, setFile] = useState(null);

  const handleUpload = (e) => {
    e.preventDefault();
    // get signed url
    // use signed url to upload image
  };

  const handleChange = (e) => {
    setFile(e.target.files[0]);
  };

  return (
    <form onSubmit={handleUpload}>
      <input type="file" accept="image/*" onChange={handleChange} />
      <button type="submit">Upload</button>
    </form>
  );
}
```
The form calls a function named handleUpload when a user submits it. This function should retrieve a signed URL from an API route we will create in the next step and use it to upload the image to the storage bucket.
Generate a Pre-Signed URL
A pre-signed URL is a URL that provides temporary access to a specific object in a storage bucket. It allows the users of an application to access the object without credentials which helps maintain the security of your storage bucket.
Follow the steps below to retrieve a pre-signed URL to upload images to Linode.
1. Install the `@aws-sdk/s3-request-presigner` and `@aws-sdk/client-s3` libraries using npm by running the following command:

```bash
npm install @aws-sdk/s3-request-presigner @aws-sdk/client-s3
```

You will also need the `uuid` package to generate unique file names for the uploaded images, so install it too:

```bash
npm install uuid
```
2. Create an S3 client object using your Linode credentials:

```javascript
const { S3Client } = require("@aws-sdk/client-s3");

const s3Client = new S3Client({
  endpoint: "your_bucket_endpoint",
  region: "your_cluster_region",
  credentials: {
    accessKeyId: "your_access_key_id",
    secretAccessKey: "your_secret_access_key",
  },
});
```

Replace the placeholders above with your storage bucket details.
3. In the handler function, create a `PutObjectCommand`. This command represents a request to store an object in the bucket:

```javascript
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { v4: uuidv4 } = require("uuid");

export default async function handler(req, res) {
  const filepath = `${uuidv4()}-${req.body.filename}`;
  try {
    const putObjectCommand = new PutObjectCommand({
      Bucket: req.body.bucketname,
      Key: filepath,
      ACL: "public-read", // This allows anyone to access the uploaded image
    });
  } catch (error) {
    return res.json({ error: error.message });
  }
}
```

Note that the Key property is assigned a unique image path, which we create by prepending a unique ID from the `uuid` package to the file name. This is crucial to avoid accidental data loss: if two or more images had the same name, they would overwrite each other.
4. Once you have created the S3 client object and the `PutObjectCommand`, you can use the `getSignedUrl` function from the `@aws-sdk/s3-request-presigner` library to get a pre-signed URL for the command. This function generates a URL that you can use to upload your object to your Linode bucket. Here's how you can use it:

```javascript
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

export default async function handler(req, res) {
  try {
    const filepath = `${uuidv4()}-${req.body.filename}`;
    const putObjectCommand = new PutObjectCommand({
      Bucket: req.body.bucketname,
      Key: filepath,
      ACL: "public-read", // This allows anyone to access the uploaded image
    });
    const signedUrl = await getSignedUrl(s3Client, putObjectCommand, {
      expiresIn: 60, // Expires in 1 minute
    });
    return res.json({
      signedUrl,
      filepath,
    });
  } catch (error) {
    return res.json({ error: error.message });
  }
}
```
The `getSignedUrl` function takes these arguments:

* The S3 client object.
* The `PutObjectCommand`.
* An options object that sets the expiration time for the URL; here, 1 minute (60 seconds).
Once you have the pre-signed URL, you can return it to the client side together with the path to the file.
Using the Signed URL to Upload Images
In the upload page, modify the `handleUpload()` function to retrieve the signed URL by making a fetch request to the `/api/getSignedUrl` endpoint.
```javascript
const handleUpload = async (e) => {
  e.preventDefault();
  if (file) {
    // Get signed url. The request body must be a JSON string,
    // and client-side env variables need the NEXT_PUBLIC_ prefix.
    const response = await fetch("/api/getSignedUrl", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        bucketname: process.env.NEXT_PUBLIC_BUCKET_NAME,
        filename: file.name,
      }),
    });
    const data = await response.json();
    const signedUrl = data.signedUrl;
    const filepath = data.filepath; // store filepath in database
  }
  // use signed url to upload image
};
```
The response object from `/api/getSignedUrl` also contains the file path to the image. You can save this file path to the database and later retrieve it when you serve the images on your site.
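When serving the images later, you can generally reconstruct an object's public URL from the bucket name, the cluster region, and the stored file path; Linode object URLs typically follow the pattern https://&lt;bucket&gt;.&lt;cluster&gt;.linodeobjects.com/&lt;key&gt;. The bucket name and cluster below are placeholders:

```javascript
// Build the public URL for an object from its stored file path.
// "my-images" and "us-east-1" are made-up bucket/cluster values.
function publicUrl(bucket, cluster, filepath) {
  // encodeURIComponent handles spaces and special characters in file names
  return `https://${bucket}.${cluster}.linodeobjects.com/${encodeURIComponent(filepath)}`;
}

console.log(publicUrl("my-images", "us-east-1", "1234-avatar.png"));
// https://my-images.us-east-1.linodeobjects.com/1234-avatar.png
```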
After getting the signed URL, we can use it to upload the image to Linode via a PUT request.

```javascript
const handleUpload = async (e) => {
  e.preventDefault();
  if (file) {
    // Get signed url
    const response = await fetch("/api/getSignedUrl", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        bucketname: process.env.NEXT_PUBLIC_BUCKET_NAME,
        filename: file.name,
      }),
    });
    const data = await response.json();
    const signedUrl = data.signedUrl;
    const filepath = data.filepath; // store filepath in database

    // Use signed url to upload image. The request body is the raw file
    // itself, not a JSON object wrapping it.
    const uploaded = await fetch(signedUrl, {
      method: "PUT",
      body: file,
      headers: {
        "Content-Type": file.type,
        "x-amz-acl": "public-read",
      },
    });
    if (uploaded.ok) {
      // Successfully uploaded image
    } else {
      // Could not upload
    }
  }
};
```
Here, the headers option specifies the content type and allows public access to the image. This way, when you use the image file path on your site, you won’t get an access denied error.
Next Steps
This article showed you how to use pre-signed URLs to upload images to an object storage bucket using an AWS S3 client library. The next step would be to store the image file path returned each time you create a signed URL in a table in your database. For example, if the images you are storing are profile pictures, you can store the file paths in the accounts table, associated with the currently authenticated user.
Hope this was helpful! Thanks for reading.