As a new developer trying to climb your way through practice projects, you can only build so many to-do list apps. Things are going to get harder as you build more apps, and you need to be ready for it. Nowadays, apps deal with a lot of media, whether it be user profile images, videos, audio, or GIFs. As a developer, you need a place to store these files.
But where?
If you said database, you might be a little wrong. Storing images directly in your database is not only a complex process (I say complex because I looked into the option, and it definitely didn't look like ABC to me), but it also comes with performance and security issues.
A better solution for this is the cloud. Enter Cloudinary.
In simple terms, Cloudinary is a SaaS (Software as a Service) that enables users to upload, manage, and deliver media for their apps and websites. You upload your media to Cloudinary, and then you get to use that media in your app through a CDN (Content Delivery Network) that Cloudinary provides.
What are we building exactly?
In this article, I'll show you how to build a small app that accepts images through a form, sends the image to Cloudinary, stores the image's CDN link in our MongoDB database, and then retrieves the image back in the front end.
I promise that it's not as complicated as I might have made it sound.
This is what our final app is going to look like;
If you look closely at the console, you'll see an image object with two properties, a publicId and a url. This is the response we get back after submitting our form with an image to the backend.
Prerequisites
To be able to follow the steps with ease, you'll need a basic understanding of the core technologies in the MERN stack;
MongoDB
Express.js
React.js
Node.js
Don't worry if you're not an expert (God knows I'm not), because I'll be showing you how most things work at each step.
To spice our styling up a little, I'll be using TailwindCSS. It's okay if you're not a fan of it. We still accept you.
The Backend
Our starting point is the backend. To initialize this project let's create a directory called my-fave-project.
If you're coding among friends and must absolutely look cool, then open up your terminal/command prompt and run the following commands to easily do this;
cd <the directory in your computer where you want to create the folder>
mkdir my-fave-project
cd my-fave-project
mkdir my-profile
cd my-profile
When typing out the first line, make sure that you specify the folder on your computer where you want the project to be. I created mine in the Desktop directory. The fourth line creates the my-profile folder, and the fifth line lets us navigate into it. Another way is to create the my-profile folder directly from your IDE when creating a new project.
Inside my-profile, you want to create another folder called backend. Later on, we'll create a frontend folder so that the two are under the same project, but live in distinct folders for clarity. So go ahead and hit mkdir backend and then cd backend. You can now open this folder up in your IDE of choice.
Now that we're in the backend folder, we'll need to install dependencies: tiny little libraries and components that our app will need in order to work. Before that, go ahead and run npm init -y to create a package.json file.
As for dependencies, we're going to need;
cloudinary: For us to be able to use functions from Cloudinary, we'll need this package.
cors: Cross-Origin Resource Sharing errors will arise because we are serving the frontend and backend from different origins. We need the frontend to access backend resources served outside of its domain, and this package enables us to create middleware that takes care of CORS for us.
dotenv: We're going to be dealing with environment variables, and this package helps us use these variables with process.env.
mongoose: This library makes working with MongoDB and Nodejs easier.
nodemon: Whenever we make changes in our code, it's just convenient that our server automatically restarts.
express: This small backend framework is what will help us easily build the APIs that we need.
It is worth noting that most of the aforementioned packages can also be installed with Yarn, if that is your package manager of choice.
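To grab everything in one go, you can run a single install command from inside the backend folder (if you prefer, nodemon could also be installed separately with the -D flag as a dev dependency);

npm install cloudinary cors dotenv express mongoose nodemon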
The Server
Now that we have our dependencies out of the way, we can go ahead and spin up our express server.
In the root of your backend folder, create a server.js and a .env file. Inside the .env file, add PORT = 5000 as the only line in it for now. Our server will be hosted on this port.
Now, in the server.js file, the first three lines import the necessary frameworks and packages we need to start the server;
require("dotenv").config()
const express = require('express')
const cors = require('cors')
We can now initialize the app, and bring in middleware;
//initialize the app
const app = express()
//middleware
app.use(cors())
app.use(express.json())
app.use((req, res, next) => {
  console.log(req.path, req.method)
  next()
})
As I explained before, we need cors to handle any Cross-Origin Resource Sharing errors we may come across. The second middleware is used to parse JSON requests, and the third middleware will let us see any request that is sent to the server in our console. These logs usually take the form /user POST, for example: the request path followed by the request method.
After all these initializations, we can listen for requests;
//listen for requests
app.listen(process.env.PORT, () => {
  console.log(`Server is running on port ${process.env.PORT}`)
})
Notice how we used process.env.PORT to reference the port number we specified in our .env file.
To see if the server is running, type nodemon server in your console and hit enter. You should see a "Server is running on port 5000" in your console if everything is working fine.
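If nodemon isn't installed globally on your machine, a common alternative is to add a script to the package.json that npm init generated, and run npm run dev instead; something like:

"scripts": {
  "dev": "nodemon server.js"
}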
The Database
We're going to be using MongoDB for this article, and since I already assume basic knowledge, we're not going to go into the depths of what it is. Collections, documents, models, schemas, etc are some of the terminologies we'll encounter in this section. If you have trouble understanding what they are, I suggest you do a little bit of reading before you continue.
Let's create a new project and database in MongoDB Atlas (MongoDB's cloud service) and name it My-profile. Take note of your username and password, because we'll need them for the connection string.
To now connect our Node.js backend to the database, create a new folder in the root of the project and name it config. Inside config, create a new file, db.js.
Inside it, paste the following code;
const mongoose = require('mongoose')

const connectDB = async () => {
  try {
    const conn = await mongoose.connect(process.env.MONGO_URI, {
      useNewUrlParser: true,
      useUnifiedTopology: true
    })
    console.log(`MongoDB connected: ${conn.connection.host}`)
  } catch (error) {
    console.error(`Error: ${error.message}`)
    process.exit(1)
  }
}

module.exports = connectDB
The connectDB function above uses the connection string, which we have provided as an environment variable, to connect to our MongoDB database. It logs a success message if the connection is successful, and an error message otherwise.
In your .env file, let's create a new variable, MONGO_URI;
PORT = 5000
MONGO_URI = mongodb+srv://<username>:<password>@cluster0.vaf008n.mongodb.net/?retryWrites=true&w=majority
Make sure to replace username and password with your own values.
Go back to your server.js file, and import the connectDB function. Right before you create the express app, call the function. With the server running in the console and the database connected, our final server.js file will look like this;
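In text form, server.js at this stage contains roughly the following (the routes will be added later);

require("dotenv").config()
const express = require('express')
const cors = require('cors')
const connectDB = require('./config/db')

//initialize the app
connectDB()
const app = express()

//middleware
app.use(cors())
app.use(express.json())
app.use((req, res, next) => {
  console.log(req.path, req.method)
  next()
})

//listen for requests
app.listen(process.env.PORT, () => {
  console.log(`Server is running on port ${process.env.PORT}`)
})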
The Schema
Now that our connections are done, let's start building up the structure of our application. Schemas tell us about the structure of the data that is stored in our database. We apply the schema to a model so that we can perform certain functions on it.
For this app, our schema will be nothing complicated. We only need three fields to store the user's name, username, and the data for the image.
In the root of our project directory, let's create a folder called models, and inside of it, a file, userModel.js;
const mongoose = require('mongoose')
const Schema = mongoose.Schema

const userSchema = new Schema(
  {
    name: {
      type: String,
      required: true,
    },
    username: {
      type: String,
      required: true,
    },
    image: {
      publicId: {
        type: String,
        required: true,
      },
      url: {
        type: String,
        required: true,
      }
    }
  },
  { timestamps: true }
)

module.exports = mongoose.model('User', userSchema)
The very first line imports mongoose because we'll need it to create the schema. The schema variable defined on the second line creates a reference to the Schema class which exists in mongoose. To create a schema, we make use of the schema constructor, which takes an object as its argument. It is inside this object that we will define the structure of the data we want to store in our database.
We have the user's name which is a string, their username, and the image object. The image object takes in two properties; the publicId and the url, which are provided to us by Cloudinary. Our main focus is the url, because it is what we can eventually use in our frontend to get the image.
Before we jump into creating the controller, let's handle our Cloudinary functions first.
Cloudinary
Head over to cloudinary.com and create an account, or log in if you've visited before. You'll be taken to a Getting Started page, where you are provided with configurations for different SDKs. Under the Node.js SDK, copy the cloud name, API key, and API secret.
Back in our IDE, let's create a new folder called services, and then a cloudinary.js file in it. This is where we will create the function that uploads the image sent through the form to Cloudinary.
We begin by importing the cloudinary package we installed, and then providing our configuration details;
const cloudinary = require("cloudinary").v2
cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.API_KEY,
  api_secret: process.env.API_SECRET
})
Don't forget to actually create these three environment variables (CLOUD_NAME, API_KEY, API_SECRET) in your .env file, alongside PORT and MONGO_URI. Beneath the config, we can now create our upload function;
const uploadToCloudinary = async (path, folder = "my-profile") => {
  try {
    const data = await cloudinary.uploader.upload(path, { folder: folder });
    return { url: data.secure_url, publicId: data.public_id };
  } catch (err) {
    console.log(err);
    throw err;
  }
};

module.exports = { uploadToCloudinary }
All the function above does is upload the image it receives from the frontend to Cloudinary (specifically to the my-profile folder, so that the images stay in one place) through the cloudinary.uploader.upload() method, and then return a secure_url (a CDN link to that image) and a public_id. We export the function at the end so that we can use it in another file.
And that's it for the Cloudinary part.
The Controller
So far, we've created and run our server, handled the schema, and created a function that uploads an image to Cloudinary. We now need to create functions that will run whenever we make requests. In other words, we need to create methods for our model.
Since we are already in the spirit of splitting our app into folders for clarity, create another folder called controllers, and then a userController.js in it.
Our controller will only have one function, which handles the request that creates and stores the user in the database.
In our userController, we start by importing necessary packages and functions. We're going to need our model and the Cloudinary function we created before.
const User = require('../models/userModel')
const { uploadToCloudinary } = require('../services/cloudinary')
As for our function,
const createUser = async (req, res) => {
  const { name, username, image } = req.body

  try {
    let imageData = {}
    if (image) {
      const results = await uploadToCloudinary(image, "my-profile")
      imageData = results
    }

    const user = await User.create({
      name,
      username,
      image: imageData
    })

    res.status(200).json(user)
  } catch (e) {
    res.status(500).json({ error: "A server error occurred with this request" })
  }
}

module.exports = { createUser }
The function receives the user's name, username, and image through the request body. We start by creating an empty object in which we will store the data we get back from Cloudinary. If the image was truly sent from the frontend, we call the upload function that we imported and pass the image to it. We know that we'll get back the url and publicId from Cloudinary after the image is uploaded, so we store that returned result in the imageData object that we created.
After that function is run, we can now create the user document and send the name, username, and image (since imageData has the data for our image, we send it as the value for our image field). Our server sends a 200 status code response if the request is successful, and a server error if something obstructed the request.
Let's export that function because we'll need it for our routes. So our controller file looks like this overall;
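That is, with the imports at the top;

const User = require('../models/userModel')
const { uploadToCloudinary } = require('../services/cloudinary')

const createUser = async (req, res) => {
  const { name, username, image } = req.body

  try {
    let imageData = {}
    if (image) {
      const results = await uploadToCloudinary(image, "my-profile")
      imageData = results
    }

    const user = await User.create({
      name,
      username,
      image: imageData
    })

    res.status(200).json(user)
  } catch (e) {
    res.status(500).json({ error: "A server error occurred with this request" })
  }
}

module.exports = { createUser }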
The Route
This is the last step of our backend setup. With routes, we'll be able to send requests from our frontend to our backend.
You already know the drill. Create a routes folder, and then a user.js file inside of it.
const express = require("express")
//controller functions
const { createUser } = require('../controllers/userController')
const router = express.Router()
//create user route
router.post('/', createUser)
module.exports = router
Our router isn't much. express.Router() creates a new router object which we'll use to define our routes. The only route we need is to create the user, and this is obviously a POST request, hence the post method.
After exporting the router, let's head back into our server.js and import our route.
require("dotenv").config()
const express = require('express')
const cors = require('cors')
const connectDB = require('./config/db')
const userRoute = require('./routes/user')
//initialize the app
connectDB()
const app = express()
//middleware
app.use(cors())
app.use(express.json())
app.use((req, res, next) => {
  console.log(req.path, req.method)
  next()
})
//routes
app.use('/api/user', userRoute)
//listen for requests
app.listen(process.env.PORT, () => {
  console.log(`Server is running on port ${process.env.PORT}`)
})
You can see that we imported the route at the top of the file. When a request is made to the /api/user route, the app.use method hands the request off to the userRoute router object, and the controller linked to this router handles the request and sends a response.
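If you want to sanity-check the route before the frontend exists, you can hit it from the terminal with curl (or a tool like Postman). The schema expects image data, so in this quick test I'm passing the classic 1x1 transparent GIF as a base64 data URL; if everything is wired up, the response is the newly created user document as JSON.

curl -X POST http://localhost:5000/api/user \
  -H "Content-Type: application/json" \
  -d '{"name": "Jane", "username": "jane", "image": "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"}'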
That's about it for our backend! Let's jump into the client side of things now.
The Client
Since all of our server side is concealed in the backend folder we created at the start, let's create a frontend folder to house our client side.
We'll only need to install React and TailwindCSS. Don't forget to move out of the backend folder and into the frontend folder first.
To install React with Vite, we use the command npm create vite@latest . -- --template react to install the library in the directory we are in. The console then prompts us to run npm install and npm run dev to create our node_modules folder and start the dev server, respectively.
To install Tailwind, the docs provide us with the necessary steps.
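At the time of writing, those steps for a Vite + React project boil down to something like the following (double-check the docs in case they've changed);

npm install -D tailwindcss postcss autoprefixer
npx tailwindcss init -p

Then point the content array in the generated tailwind.config.js at your files, and add the Tailwind directives to your main CSS file;

// tailwind.config.js
export default {
  content: ["./index.html", "./src/**/*.{js,jsx}"],
  theme: { extend: {} },
  plugins: [],
}

/* index.css */
@tailwind base;
@tailwind components;
@tailwind utilities;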
Feel free to clear the boilerplate of anything you don't want. This is what my file tree looks like after removing unnecessary files.
You can quickly test if Tailwind works by creating a p tag in your App.jsx and giving it some text and color.
We can now dive into our code. Let's create a pages folder inside src, and create two files in it, Form.jsx and Profile.jsx. Make sure to import the Form.jsx file in your App.jsx because it's the page we want to see when the app first loads.
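A minimal App.jsx that does that could look something like this (assuming the default Vite structure, with App.jsx sitting next to the pages folder inside src);

import Form from "./pages/Form"

function App() {
  return <Form />
}

export default App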
To make things easier and faster for me, I decided to use a TailwindCSS component library called HyperUI. It provides free open-source components that you can copy into your project and edit according to your needs. If you don't want to use a component library, feel free to build your components from scratch. This is just a step to make things go faster.
Let's copy a form component from HyperUI, and edit it to make it work for us. If you use one of their form components, make sure to install the TailwindCSS forms plugin.
import { useState } from "react"
const Form = () => {
const [name, setName] = useState("")
const [username, setUsername] = useState("")
return (
<div className="mx-auto max-w-screen-xl px-4 py-16 sm:px-6 lg:px-8">
<div className="mx-auto max-w-lg text-center">
<h1 className="text-2xl font-bold sm:text-3xl">Hello there!</h1>
<p className="mt-4 text-gray-500">
Please provide your name, username, and profile image
</p>
</div>
<form action="" className="mx-auto mb-0 mt-8 max-w-md space-y-4">
<div>
<label for="name" >Name</label>
<div className="relative">
<input
type="text"
className="w-full rounded-lg border-gray-200 p-4 pe-12 text-sm shadow-sm"
placeholder="Enter your name"
value={name}
onChange={(e) => setName(e.target.value)}
/>
</div>
</div>
<div>
<label for="username" >Username</label>
<div className="relative">
<input
type="text"
className="w-full rounded-lg border-gray-200 p-4 pe-12 text-sm shadow-sm"
placeholder="Enter username"
value={username}
onChange={(e) => setUsername(e.target.value)}
/>
</div>
</div>
<div>
<label
htmlFor="profile_image"
className="block font-medium text-deepgray"
>
Upload image
</label>
<input
name="image"
className="w-full rounded-lg border-gray-200 p-3 text-sm"
placeholder="Image"
type="file"
accept="image/*"
id="image"
onChange={handleImage}
/>
</div>
<div className="flex items-center justify-between">
<button
type="submit"
className="inline-block rounded-lg bg-blue-500 px-5 py-3 text-sm font-medium text-white"
>
Submit
</button>
</div>
</form>
</div>
)
}
export default Form
In the component above, we have three input fields for the user's name, username, and their image. We use the useState hook to collect the input values for the name and username. For the image, we'll do something slightly different.
This is what our form looks like in the browser, by the way;
Now, let's handle the image input and create the handleImage function.
Create two new states in addition to the ones we have, and then add the following functions;
const [image, setImage] = useState(null);
const [imageBase64, setImageBase64] = useState("");
// convert image file to base64
const setFileToBase64 = (file) => {
  const reader = new FileReader();
  reader.readAsDataURL(file);
  reader.onloadend = () => {
    setImageBase64(reader.result);
  };
};

// receive file from form
const handleImage = (e) => {
  const file = e.target.files[0];
  setImage(file);
  setFileToBase64(file);
};
The handleImage function is what receives or collects the image that is entered in the form. It then runs the setFileToBase64 function to convert the image to base64. You can read up on the pros and cons of converting images to base64, but note that there are obviously other ways to handle images in forms.
The POST Request
Now that we are done collecting the necessary inputs from our form, let's send our first request to the backend. We need a function that sends a POST request when we click our submit button.
Let's create two more states, data and loading;
const [loading, setLoading] = useState(false);
const [data, setData] = useState(null);
The first state will handle our loading states so that we don't leave users hanging when they send a request. The second state will store whatever data we send to the database.
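One simple way to put the loading state to use later is to let the submit button reflect it, for example by tweaking the button we already have (purely optional);

<button
  type="submit"
  disabled={loading}
  className="inline-block rounded-lg bg-blue-500 px-5 py-3 text-sm font-medium text-white"
>
  {loading ? "Submitting..." : "Submit"}
</button>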
As for the handleSubmit function,
const handleSubmit = async (e) => {
  e.preventDefault()
  setLoading(true)

  const response = await fetch("http://localhost:5000/api/user", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ name, username, image: imageBase64 })
  })

  const json = await response.json()

  if (response.ok) {
    setLoading(false)
    setData(json)
    console.log(json)
  }
}
Don't forget to call the function in the opening form tag with the onSubmit event handler.
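Concretely, the opening tag from the component above becomes something like;

<form onSubmit={handleSubmit} className="mx-auto mb-0 mt-8 max-w-md space-y-4">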
If you try to fill out the form and send it, you might get a response in your frontend, and you can see the image in your Cloudinary dashboard if you navigate to your folders;
If you take a look at your MongoDB, you'll also notice a new document in the database you created! You can see the image URL stored in our database too. If you paste the link in an empty tab, it opens the image directly.
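For reference, the document logged in the console (and stored in MongoDB) is shaped roughly like this, with your own values and IDs of course;

{
  "_id": "650c1f2e9b1e8a0012345678",
  "name": "Jane",
  "username": "jane",
  "image": {
    "publicId": "my-profile/abc123xyz",
    "url": "https://res.cloudinary.com/<cloud_name>/image/upload/v1695212345/my-profile/abc123xyz.png"
  },
  "createdAt": "2023-09-21T10:19:05.000Z",
  "updatedAt": "2023-09-21T10:19:05.000Z",
  "__v": 0
}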
The End
And that's it for all the magic we've been performing in this article. There is still so much you can do to enhance the solution we've implemented. But after this step, I hope it won't be so hard to grab the images you've stored in your backend and use them to your liking.
I had so many difficulties dealing with a similar problem before, so I thought writing an article could help anyone else figure their own bugs out. As a final note, I hope this breakdown is clear and easy enough to follow, and I can't wait to hear any other opinions you may have in the comments.
Peace out.
Top comments (2)
Hi Njong Emy, thank you for this post, it is very detailed and comprehensive.
But I just want to list a few more steps that we need to follow if we want to build this project successfully.
Backend: you need to apply this code in server/index.js.
This helps increase the limit on the request size allowed on the BE side so that the FE request succeeds (if not, you will meet error 413: payload too large, because of the base64 string).
app.use(express.json({ limit: "50mb" }));
app.use(express.urlencoded({ limit: "50mb", extended: true }));
Cloudinary setup:
If you take the cloud_name, api_key, and api_secret from your Cloudinary console, put all of them in your .env file, and then pass those environment variables to cloudinary.config() like this:
import { v2 as cloudinary } from 'cloudinary';

cloudinary.config({
  cloud_name: process.env.CLOUD_NAME,
  api_key: process.env.KEY,
  api_secret: process.env.SECRET
});
=> it may lead to the error: "Must supply api_key".
If you meet this error, you can try this way:
just copy all the code from your Cloudinary console and use it directly, without passing it through .env anymore.
For example:
import { v2 as cloudinary } from 'cloudinary';

cloudinary.config({
  cloud_name: 'someone',
  api_key: '990809898',
  api_secret: '***************************'
});
Thank you very much for the troubleshooting tips! And yes, I did come across the heavy payload error at some point, but it didn't bug me out when I was writing the code for this project, so I thought omitting it would be okay. Thanks again for bringing it up, as someone else may well face the same issue.