AWS S3 is an object-based storage system where every file is saved as an object. It exposes a simple web service interface that makes it easy to store and retrieve any amount of data.
Before we proceed you need to sign up and create a bucket in Amazon S3, which you can easily do from the AWS Management Console. In this article I am assuming that you have already completed those prerequisites. If not, you can quickly jump over to Setting up Amazon S3, wrap that up, and continue.
Okay, so let’s begin. Take an example where you have a user profile and you want the user to upload an image as their profile picture or avatar. You want to store the image in AWS S3. Also, every time the user uploads an image, the previous image should be deleted.
1. Let’s create a straightforward route, /image/upload, that serves POST requests.
const express = require('express');
const router = express.Router();
const usersHelperObj = require('../helpers/usersHelper')
const { v4: uuidv4 } = require('uuid');
/* Upload image to S3. */
router.post('/image/upload', async (req, res, next) => {
const payload = req.body;
// throw error on blank payload
if (Object.keys(payload).length === 0) {
return res.status(400).send({
error: {
message: 'Blank payload supplied.'
}
});
}
// throw error on wrong payload
if (!payload.hasOwnProperty('image') || payload.image === '') {
return res.status(400).send({
error: {
message: 'Image missing.'
}
});
}
// payload is guaranteed non-empty at this point
const user_id = uuidv4(); // generating a random user_id
const uploadedAvatarResponse = await usersHelperObj.uploadUserAvatar(user_id, payload);
// check if the response is correct
if (uploadedAvatarResponse.hasOwnProperty('id') && uploadedAvatarResponse.hasOwnProperty('location')) {
res.status(200).send(uploadedAvatarResponse);
}
else {
res.status(400).send({ error: uploadedAvatarResponse });
}
});
module.exports = router;
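Once the route is mounted, a client POSTs a JSON body whose image field is a base64 data URI. A minimal sketch of building such a payload (the base64 string here is just the start of a PNG signature, used as a stand-in, not a real image):

```javascript
// Build the JSON body the POST /image/upload route expects.
// 'iVBORw0KGgo=' is a short valid base64 fragment, used only as a placeholder.
const image = 'data:image/png;base64,iVBORw0KGgo=';
const payload = JSON.stringify({ image });

console.log(payload); // {"image":"data:image/png;base64,iVBORw0KGgo="}
```

With the server running, the same body could be sent to POST /image/upload with any HTTP client, such as curl or fetch.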
2. Now let’s create a helper function in a separate file, helpers/usersHelper.js, that validates the payload content and calls the actual service, imageUploadService.
- Create a file usersHelper.js at the path helpers/usersHelper.js.
- Once you’re done, try to create something like below:
const { v4: uuidv4 } = require('uuid');
const imageUploadServiceObj = require('../utils/imageUploadService')
exports.uploadUserAvatar = async (userId, payload) => {
try {
if (payload.hasOwnProperty("image")) {
const base64Image = payload.image;
const imageCategory = 'avatar';
const prevImage = uuidv4().replace(/[ -]/g, ''); // placeholder for this demo; in a real app this would be the stored filename of the user's current avatar
const params = {
userId,
base64Image,
prevImage,
imageCategory
}
// creating an object for custom imageUploadService
let imageServiceObj = new imageUploadServiceObj.ImageService(params);
// checking if the string in the payload is in valid base64 format.
if (!imageServiceObj.isValidBase64()) {
return ({
message: 'Supplied image is not in base64 format.'
})
}
// checking if file size is more than a specified size.
else if (imageServiceObj.isGreaterThan(5)) { //5 MB
return ({
message: 'Supplied image is greater than 5 MB.'
})
}
// checking if the file is of valid type
else if (!imageServiceObj.isValidImageType()) {
return ({
message: 'Supplied image type is invalid.'
})
}
else {
const amazonResponse = await imageServiceObj.uploadToS3Bucket();
// if the response from aws is correct return the data
if (amazonResponse.hasOwnProperty('eTag') && amazonResponse.hasOwnProperty('location')) {
const fileLocation = `${amazonResponse.location}`
return ({
id: userId,
location: fileLocation
});
}
else {
// else return error with message
return ({
ref: 'UPLOAD_FAILED',
message: amazonResponse.message
})
}
}
}
else {
return (false);
}
}
catch (err) {
return {
ref: 'GENERAL_ERROR',
message: err.message
}
}
}
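For reference, the helper resolves to one of two shapes that the route then branches on. The values below are illustrative only; the keys match the returns above:

```javascript
// Success shape: the route responds 200 with this body.
const success = {
  id: 'some-uuid-v4', // illustrative value
  location: 'https://your-bucket.s3.amazonaws.com/assets/images/avatar/some-uuid-v4/filename.png'
};

// Failure shape: the route responds 400 with { error: ... }.
const failure = {
  ref: 'UPLOAD_FAILED',
  message: 'Access Denied' // illustrative AWS error message
};

console.log('id' in success && 'location' in success); // true
console.log('ref' in failure && 'message' in failure); // true
```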
3. The final step is to create a service file that does the actual work of uploading and deleting the image.
- Create a file imageUploadService.js at the path utils/imageUploadService.js.
- Once you’re done, you can follow the code below to create your custom service:
const AWS = require('aws-sdk');
const config = require('config');
const { v4: uuidv4 } = require('uuid');
exports.ImageService = class ImageService {
constructor(params) {
this.base64Image = params && params.base64Image ? params.base64Image : '';
this.userId = params && params.userId ? params.userId : '';
this.prevImage = params && params.prevImage ? params.prevImage : '';
this.imageCategory = params && params.imageCategory ? params.imageCategory : '';
}
allowedFileTypes = ['jpg', 'jpeg', 'png', 'tiff'] // ARRAY OF ALLOWED IMAGE EXTENSIONS
/**
* FUNCTION TO CHECK IF THE STRING IS IN BASE64 FORMAT
* INFO: ADDITIONAL OPTION PARAMETERS TO PASS
{
allowMime: boolean value,
mimeRequired: boolean value,
paddingRequired: boolean value,
allowEmpty: boolean value,
}
* @param {String} base64String
* @param {Object} options
*/
isValidBase64(base64String = this.base64Image, options = { mimeRequired: true, allowEmpty: false }) {
if (base64String instanceof Boolean || typeof base64String === 'boolean') {
return false
}
if (!(options instanceof Object)) {
options = {}
}
if (options.allowEmpty === false && base64String === '') {
return false
}
var regex = '(?:[A-Za-z0-9+\\/]{4})*(?:[A-Za-z0-9+\\/]{2}==|[A-Za-z0-9+\\/]{3}=)?'
var mimeRegex = '(data:\\w+\\/[a-zA-Z\\+\\-\\.]+;base64,)'
if (options.mimeRequired === true) {
regex = mimeRegex + regex
} else if (options.allowMime === true) {
regex = mimeRegex + '?' + regex
}
if (options.paddingRequired === false) {
regex = '(?:[A-Za-z0-9+\\/]{4})*(?:[A-Za-z0-9+\\/]{2}(==)?|[A-Za-z0-9+\\/]{3}=?)?'
}
return (new RegExp('^' + regex + '$', 'gi')).test(base64String)
}
/**
* FUNCTION TO CHECK THE TYPE OF THE IMAGE (FILE EXTENSION)
* @param {String} base64String
*/
isValidImageType(base64String = this.base64Image) {
const fileType = base64String.split(';')[0].split('/')[1];
return this.allowedFileTypes.includes(fileType.toLowerCase());
}
/**
* FUNCTION TO CHECK THE SIZE OF THE IMAGE FILE
* @param {Number} allowedSize
* @param {String} base64String
*/
isGreaterThan(allowedSize = 3, base64String = this.base64Image) { //Default size is set to 3 MB
let [stringLength, sizeInKB, sizeInMB] = [base64String.length, '', ''];
let imageSize = (stringLength * (3 / 4));
// checking if padding is present and applying the adjustment as required
// Ref: https://en.wikipedia.org/wiki/Base64#Padding
if (base64String.slice(-2) === '==') {
imageSize = imageSize - 2;
sizeInKB = imageSize / Math.pow(1024, 1);
sizeInMB = imageSize / Math.pow(1024, 2);
}
else if (base64String.slice(-1) === '=') {
imageSize = imageSize - 1; // a single '=' means one padding byte, not two
sizeInKB = imageSize / Math.pow(1024, 1);
sizeInMB = imageSize / Math.pow(1024, 2);
}
else {
sizeInKB = imageSize / Math.pow(1024, 1);
sizeInMB = imageSize / Math.pow(1024, 2);
}
if (sizeInMB > allowedSize) {
return true;
}
return false;
}
/**
* FUNCTION TO UPLOAD THE AVATAR IMAGE FILE TO THE AMAZON S3 BUCKET
* @param {String} base64Image
* @param {String} userId
*/
async uploadToS3Bucket(base64Image = this.base64Image, userId = this.userId, prevImage = this.prevImage, imageCategory = this.imageCategory) {
const { AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, FILE_UPLOAD_BUCKET, region } = config.get('aws');
//turning on the logger to print log entries in the console,
AWS.config.logger = console;
let s3;
// Configuring AWS with access and secret key.
if (AWS_ACCESS_KEY_ID && AWS_SECRET_ACCESS_KEY) {
AWS.config.update({ accessKeyId: AWS_ACCESS_KEY_ID, secretAccessKey: AWS_SECRET_ACCESS_KEY, region: region });
// Creating a s3 instance with credentials
s3 = new AWS.S3({
params: {
Bucket: FILE_UPLOAD_BUCKET
},
region: region,
accessKeyId: AWS_ACCESS_KEY_ID,
secretAccessKey: AWS_SECRET_ACCESS_KEY
});
}
else {
AWS.config.update({ region: region });
// Creating a s3 instance with credentials
s3 = new AWS.S3({
params: {
Bucket: FILE_UPLOAD_BUCKET
},
region: region,
});
}
const type = base64Image.split(';')[0].split('/')[1];
const imageBuffer = Buffer.from(base64Image.replace(/^data:image\/\w+;base64,/, ""), 'base64');
const filename = uuidv4().replace(/[ -]/g, '');
const params = {
Bucket: FILE_UPLOAD_BUCKET,
Key: `assets/images/${imageCategory}/${userId}/${filename}.${type}`, // the path, filename and type. (type is not required)
Body: imageBuffer,
// ACL: 'public-read', // granting public access to the sub resource object
ContentEncoding: 'base64', // required
ContentType: `image/${type}` // required (Notice the back ticks)
}
let amazonResponse = {};
try {
// delete previous image if prevImage exists
if (prevImage) {
try {
await s3.deleteObject({
Bucket: FILE_UPLOAD_BUCKET,
Key: `assets/images/${imageCategory}/${userId}/${prevImage}`, // same prefix the upload below writes to
}).promise();
console.log("Success: Object delete successful.");
}
catch (err) {
// log and continue: a failed delete should not block the new upload
console.log("Error: Object delete failed.");
}
}
//uploading the object to the bucket
const { ETag, Location, Key, Bucket } = await s3.upload(params).promise();
amazonResponse = {
eTag: ETag,
location: Location,
key: Key,
bucket: Bucket
}
}
catch (error) {
console.log(error)
const { message, code, time, statusCode } = error
amazonResponse = {
message,
code,
time,
statusCode
}
}
return amazonResponse;
}
};
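Before plugging the service into a real bucket, its three validators can be sanity-checked in isolation. The sketch below duplicates their core logic as standalone functions (same regexes, same data-URI split, and same 3/4 arithmetic as the class above); only the sample strings are my own:

```javascript
// Standalone replicas of the service's three checks.

// 1. Data-URI base64 check (the mimeRequired branch of isValidBase64).
const mimeRegex = '(data:\\w+\\/[a-zA-Z\\+\\-\\.]+;base64,)';
const bodyRegex = '(?:[A-Za-z0-9+\\/]{4})*(?:[A-Za-z0-9+\\/]{2}==|[A-Za-z0-9+\\/]{3}=)?';
const isDataUriBase64 = (s) => new RegExp('^' + mimeRegex + bodyRegex + '$').test(s);

// 2. Extension check (same data-URI split as isValidImageType).
const allowedFileTypes = ['jpg', 'jpeg', 'png', 'tiff'];
const isValidImageType = (s) =>
  allowedFileTypes.includes(s.split(';')[0].split('/')[1].toLowerCase());

// 3. Size estimate: every 4 base64 characters encode 3 bytes (as in isGreaterThan).
const approxBytes = (base64Body) => base64Body.length * (3 / 4);

const sample = 'data:image/png;base64,QUJDREVG'; // 'QUJDREVG' is base64 for 'ABCDEF'
console.log(isDataUriBase64(sample));           // true
console.log(isValidImageType(sample));          // true
console.log(approxBytes(sample.split(',')[1])); // 6
```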
Our custom service does the following for us:
- Checks whether the file type is valid, based on the base64 data URI.
- Checks the file size. The default is 3 MB if no value is passed to the isGreaterThan function of the custom service.
- Deletes the previous image at the location provided.
- Uploads the new image to the location provided.
The parameters required to create an AWS S3 object are stored in the config file, which you can update with your own values. I have added comments throughout wherever I felt they were required for better understanding. You can design the service your own way, and if you feel something can be tweaked to make this better, you’re always welcome to create a pull request.
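The service reads its settings via config.get('aws'); a config/default.json matching the keys destructured in uploadToS3Bucket might look like this (all values are placeholders for your own):

```json
{
  "aws": {
    "AWS_ACCESS_KEY_ID": "",
    "AWS_SECRET_ACCESS_KEY": "",
    "FILE_UPLOAD_BUCKET": "your-bucket-name",
    "region": "us-east-1"
  }
}
```

If the access keys are left blank, the service falls back to the default AWS credential chain, as the if/else in uploadToS3Bucket shows.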
I have created a sample Express application that does the work of uploading and deleting an S3 object here:
s3imageUpload
s3imageUpload is an example of creating a custom service to upload and delete images in an Amazon S3 bucket using Node.js.
Installation
Use the package manager npm or yarn to install dependencies.
npm install
OR
yarn install
Usage
node ./bin/www
If you have nodemon installed:
nodemon ./bin/www
Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.