Introduction:
Creating a web application that is accessible to users from different locations while keeping their data secure is essential in today's digital world. In this blog post, we will walk through setting up an Nginx reverse proxy in Docker, with a frontend built in React and a backend built in Node.js. We will provide a step-by-step guide to creating an Nginx reverse proxy that only allows access to the client and server through Nginx, without exposing any ports from Docker, making the application more secure. By the end of this post, you will have a comprehensive understanding of how to build a secure and robust web application that can handle traffic from all over the world while keeping your users' data safe. So let's dive in and get started!
Prerequisites
Before getting started with the setup, there are a few prerequisites that need to be in place. Firstly, you will need to have Node.js and npm installed on your system to install and run the React and Node.js applications. Additionally, you will need to install Docker and Docker Compose, which will be used to set up the Nginx reverse proxy.
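To confirm these prerequisites are in place, you can check the installed versions from a terminal (exact version numbers will vary):

```bash
node -v                  # Node.js
npm -v                   # npm
docker -v                # Docker
docker compose version   # Docker Compose (v2 plugin)
```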
If you are unfamiliar with Nginx, it is a popular open-source web server that is used for serving web content, reverse proxying, and more. In this setup, we will use Nginx as a reverse proxy to manage incoming requests to the web application.
To begin, you need to create a directory that will serve as the root directory for the web application.
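For example (the directory name nginx-react-node is only an illustrative choice, nothing in the rest of the guide depends on it):

```bash
mkdir nginx-react-node   # project root for the client, server and nginx directories
cd nginx-react-node
```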
Setting up the Frontend with React:
To create a new React application, you can use the create-react-app CLI tool. Here's how to create a new React app named client (the same commands are collected in a snippet after this list):
- Open a terminal window in the root directory of your project.
- Run the command npx create-react-app client. This will create a new React app in a directory named client.
- Wait for the command to finish running. This may take a few minutes, depending on your internet speed and computer performance.
- Once the command has finished running, navigate into the client directory by running the command cd client.
- From here, you can start the development server by running the command npm start. This will open the app in your default web browser at http://localhost:3000.
- Note that the name client is used in this example for the React app, but you can choose any other name that does not include the word react, since react is a reserved package name and may cause conflicts with other packages.
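Here is the same sequence as plain terminal commands, in case that is easier to copy:

```bash
npx create-react-app client   # scaffolds the app into ./client
cd client
npm start                     # dev server at http://localhost:3000
```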
Change directory to client/src, open your preferred code editor, and load the file App.js.
Final Client Code
```jsx
import React, { useState } from 'react';
import './App.css';

function App() {
  const [users, setUsers] = useState([]);

  const fetchData = async () => {
    const response = await fetch('/api/users/');
    const data = await response.json();
    setUsers(data['users']);
  };

  return (
    <div className="App">
      <button onClick={fetchData}>Fetch User Data</button>
      {users.length > 0 && (
        <ul>
          {users.map((user) => (
            <li key={user._id}>{user.name}</li>
          ))}
        </ul>
      )}
    </div>
  );
}

export default App;
```
The provided code is an example of a React functional component that displays a button to fetch user data from a Node.js backend API. When the button is clicked, the component triggers the fetchData function using async/await syntax. The function makes a request to the /api/users endpoint of the Node.js backend, extracts the "users" property from the JSON response, and updates the "users" state variable using the setUsers function.
Setting up the Backend with Node.js:
To set up the backend of the web application, you need to create a folder named server and navigate to it using the command line. Next, you can run the command npm init -y to create a package.json file in the folder, which will be used to manage the project dependencies.
Afterwards, you will need to create two files in the server folder: server.Dockerfile and server.js. The server.js file contains the backend code that will be executed when you run the command npm start.
To add the necessary functionality to the backend of the web application, you will need to install two Node.js modules, express and cors. Express is a popular web framework that simplifies the creation of server-side applications, while cors is a middleware that enables cross-origin resource sharing.
Also run npm install --save-dev nodemon to install the nodemon package as a development dependency. Once nodemon is installed, you can use it to automatically restart your Node.js application whenever changes are made to the code. To start your application with nodemon, simply replace the node command with nodemon in the start script of package.json.
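For reference, here is the backend setup described above as a single terminal sequence (a sketch; run from the project root):

```bash
mkdir server && cd server
npm init -y                         # create package.json
npm install express cors            # web framework and CORS middleware
npm install --save-dev nodemon      # restart on file changes during development
touch server.js server.Dockerfile   # the two files described above
```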
package.json

```json
{
  "name": "server",
  "version": "1.0.0",
  "description": "",
  "main": "server.js",
  "type": "module",
  "scripts": {
    "start": "nodemon server.js",
    "prod": "node server.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "cors": "^2.8.5",
    "express": "^4.18.2"
  },
  "devDependencies": {
    "nodemon": "^2.0.22"
  }
}
```
The package.json has been adjusted so the server does not fail due to configuration issues: "type": "module" enables the ES module import syntax used in server.js, "main" points to server.js, and the start script runs nodemon for development while prod runs plain node.
Final Server Code
```js
import express, { Router } from "express";
import cors from 'cors';

const app = express();
const port = 8000;

app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(cors());

const users = [
  {
    '_id': 1,
    'name': 'Ritwik Math'
  }, {
    '_id': 2,
    'name': 'John Doe'
  }, {
    '_id': 3,
    'name': 'Jane Doe'
  }
]

const promise = new Promise((resolve, reject) => {
  setTimeout(() => {
    resolve(users)
  }, 3000)
})

const route = Router();

route.get('/users', async (req, res, next) => {
  try {
    const data = await promise;
    return res.status(200).json({
      'message': 'Fetched successfully',
      'users': data
    })
  } catch (error) {
    console.log(error.message)
  }
});

app.use(route);

app.listen(port, () => {
  console.log(`App is running on http://localhost:${port}`)
})
```
This code creates a Promise that resolves with an array of users after a 3-second delay. An HTTP route is then defined using the Express Router: it listens for GET requests to the /users endpoint, waits for the Promise to resolve using await, and returns the fetched data as a JSON response. If the Promise is rejected for any reason, the catch block logs the error message to the console. The code demonstrates how Promises handle asynchronous operations in JavaScript, and how async/await syntax makes asynchronous code look and behave more like synchronous code.
Test result
- Open the App.js file of your React app in a code editor.
- Find the part of the code where the URL for the API endpoint /api/users is defined.
- Change the URL to http://localhost:8000/users and save the file.
- Start your React app by running the command npm start in your terminal.
- Open your web browser and navigate to http://localhost:3000 (or the URL your app is running on).
- If everything is working properly, clicking the button should display a list of users fetched from the http://localhost:8000/users endpoint. If you see the list, congratulations! You've successfully set up the backend and frontend. You can also check the endpoint from the terminal, as shown below.
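For that terminal check, a simple curl against the endpoint defined in server.js is enough (example output truncated):

```bash
curl http://localhost:8000/users
# {"message":"Fetched successfully","users":[...]}   (after the 3-second delay)
```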
Change the URL back to /api/users.
Setting up the Nginx Reverse Proxy on Localhost:
This example configures Nginx to listen on port 80. Ensure that port 80 is available on your system, or choose a different port. If you use a different port, remember to include it in the URL, for instance, http://localhost:81/api/users.
To implement the reverse proxy, create a directory named nginx and, within it, a file named nginx.conf. This is the same file that will later be used inside the Nginx container.
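A quick way to create the directory and the empty file from the project root (assuming a Unix-like shell):

```bash
mkdir -p nginx
touch nginx/nginx.conf   # paste the configuration shown below into this file
```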
```nginx
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:3000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
To load the Nginx configuration from the appropriate directory, copy the nginx.conf file to /etc/nginx/conf.d. To do this, run the following command with sudo privileges: sudo cp nginx/nginx.conf /etc/nginx/conf.d/. Nginx's default configuration file loads additional configurations from this directory.
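After copying the file, it's a good idea to validate the configuration and reload Nginx. A minimal sketch, assuming Nginx runs as a systemd service (otherwise sudo nginx -s reload has the same effect):

```bash
sudo nginx -t                 # check the configuration syntax
sudo systemctl reload nginx   # reload Nginx so the new configuration takes effect
```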
This setup acts as a reverse proxy on the local host, enabling access to the client side via http://localhost and the server side via http://localhost/api. The client and server upstreams are http://127.0.0.1:3000/ and http://127.0.0.1:8000/ respectively.
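Assuming the React dev server (port 3000), the Node.js server (port 8000), and the local Nginx are all running, you can check the proxy from a terminal:

```bash
curl -i http://localhost/            # HTML served by the React dev server
curl -i http://localhost/api/users   # JSON proxied to the Node.js backend
```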
Docker Environment Setup
Create client.Dockerfile and server.Dockerfile in the client and server directories respectively. To establish the necessary Docker environment, we configure a Dockerfile for each of the client and server components, and we write a docker-compose file that orchestrates the deployment and operation of the entire application stack. These foundational pieces of Docker's tooling give us a streamlined, repeatable and scalable deployment process.
client.Dockerfile
```dockerfile
FROM node:18
WORKDIR /app
COPY ./ ./
RUN npm install
EXPOSE 3000
CMD ["npm", "run", "start"]
```
server.Dockerfile
```dockerfile
FROM node:18
WORKDIR /app
COPY ./ ./
RUN npm install
EXPOSE 8000
CMD ["npm", "run", "start"]
```
The two Dockerfiles above are used to build the Docker images for the React and Node.js applications respectively.
- FROM node:18 specifies the base image to use for this application. In this case, it uses the official Node.js Docker image with version 18.
- WORKDIR /app sets the working directory of the container to /app. This is where the application code and files will be stored.
- COPY ./ ./ copies the contents of the current directory (where the Dockerfile is located) to the container's working directory. This includes the application code and files.
- RUN npm install runs the npm install command in the container. This installs all the dependencies required by the application.
- EXPOSE 3000 / EXPOSE 8000 exposes the respective port of the container to the Docker network. This means the application running in the container can be reached through that port inside the Docker network.
- CMD ["npm", "run", "start"] specifies the command to run when the container is started. In this case, it runs the npm run start command, which starts the application. This command will be executed automatically when the container is launched.

Create docker-compose.yml in the root directory. The purpose of docker-compose.yml is to define and orchestrate the deployment and operation of multiple Docker containers that work together to make up a complete application stack. It enables developers to define and configure the various services required by the application, including their dependencies and networking, in a single file. This makes it easier to manage and deploy complex applications in a consistent and repeatable manner.

```yaml
networks:
  dev:
    driver: bridge
```
The markup above creates a Docker network named dev. Now add all the applications (services) under the services block.
```yaml
services:
  server:
    build:
      context: ./server
      dockerfile: server.Dockerfile
    image: dev-server
    container_name: dev_server
    tty: true
    restart: unless-stopped
    working_dir: /app
    volumes:
      - ./server:/app
      - /app/node_modules
    networks:
      - dev
```
The configuration above shows a "server" service defined within a docker-compose.yml file. Let's take a closer look at each configuration option:
- build: Specifies how the container image will be built. In this case, it references a Dockerfile located in the ./server directory.
- image: Specifies the name of the resulting image that will be created from the Dockerfile.
- container_name: Sets the name of the running container. In this case, it's set to dev_server.
- tty: Allocates a pseudo-TTY for the container, which is necessary for certain interactive processes.
- restart: Defines the restart policy for the container. In this example, the container will be automatically restarted unless it is explicitly stopped.
- working_dir: Sets the working directory of the container. In this case, it's set to /app.
- volumes: Defines the volumes to be mounted in the container. In this example, the server directory is mapped to the /app directory in the container, and /app/node_modules is mounted as an anonymous volume so the dependencies installed in the image are not hidden by the host directory.
- networks: Specifies the network to which the container will be connected. In this example, it's connected to the dev network.
docker-compose.yml

```yaml
networks:
  dev:
    driver: bridge

services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    restart: always
    ports:
      - "80:80"
    volumes:
      - ./nginx:/etc/nginx/conf.d/
    networks:
      - dev
  server:
    build:
      context: ./server
      dockerfile: server.Dockerfile
    image: dev-server
    container_name: dev_server
    tty: true
    restart: unless-stopped
    working_dir: /app
    volumes:
      - ./server:/app
      - /app/node_modules
    networks:
      - dev
  client:
    build:
      context: ./client
      dockerfile: client.Dockerfile
    image: dev-client
    container_name: dev_client
    tty: true
    restart: unless-stopped
    working_dir: /app
    volumes:
      - ./client:/app
      - /app/node_modules
    networks:
      - dev
```
Change the nginx.conf within the nginx directory to the following.
nginx.conf

```nginx
upstream client {
    server client:3000;
}

upstream server {
    server server:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://client/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /api/ {
        proxy_pass http://server/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
In a Docker network, containers can reach each other using their service or container names as if they were hostnames. When you define services in a Docker Compose file, Compose creates a network for the project (here, the explicitly defined dev network) and connects the containers to it, so the nginx container can resolve client and server by name.
In the context of Nginx, an upstream block defines a group of servers that can be used as a proxy for client requests. It is typically used when you want to distribute incoming traffic across multiple servers or when you want to implement a load balancer.
Run sudo docker compose up -d from the directory where docker-compose.yml is located. The docker compose up -d command starts the containers defined in the Compose file in the background and detaches the terminal from the containers' console output. Access the client side through http://localhost/.
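To confirm everything is wired up correctly, a few optional checks (a sketch; the container and service names come from the docker-compose.yml above, and wget is available in the nginx:stable-alpine image via BusyBox):

```bash
sudo docker compose ps           # all three containers should be listed as running
sudo docker compose logs nginx   # confirm Nginx started without configuration errors
curl -i http://localhost/api/users                          # JSON proxied to the backend
sudo docker exec nginx wget -qO- http://server:8000/users   # resolve the backend by service name from inside the nginx container
```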
[Image: Docker Stats]