paschal

How to Automate Dumping for SQL Databases

Introduction 😸

Because things can and do go haywire in production, the need for redundancy cannot be overemphasized. That leads us to create a simple automated flow for backing up our DB. Hosting and deployment services like AWS already provide this, but we will do a little hack of our own for smaller projects.

Pipeline 👽

We will be using a remote PostgreSQL database and a Linux terminal for this example. Users on Windows can run the Windows Subsystem for Linux (WSL).

[Workflow diagram]

So we will create a dump of our live DB as a dump.sql file. We obviously don't want that dump sitting on our local machine for security purposes, so we will transfer it to a secure server.

We will then assign this script to a cron job, so our dumps are created on the schedule we choose.

Automate.sh

The bash script that will handle this will be named 'Automate.sh'. Let's write our script.

#!/bin/bash
#Filename: Automate.sh
#Description: Create a dump of my PSQL DB

pg_dump -U 'YOUR USERNAME' -h 'YOUR HOSTNAME' -d 'DATABASE NAME' > dump.sql

With the pg_dump tool, we can easily create a dump of our database. When this command runs, a password prompt comes up and we can type in the database password. Manually typing the password defeats the purpose of the automation, so we can add the --no-password flag to stop the prompt from appearing and instead supply the password through the PGPASSWORD environment variable when running the script.

Our Automate.sh file should look like this now

pg_dump -U 'YOUR USERNAME' -h 'YOUR HOSTNAME' -d 'DATABASE NAME' --no-password > dump.sql

Then we can make the script executable and run it like this:

root@L: chmod +x Automate.sh
root@L: PGPASSWORD='YOUR_PASSWORD' ./Automate.sh
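
As an alternative to putting the password on the command line, libpq (which pg_dump uses) can read it from a ~/.pgpass file. A minimal sketch, assuming the default port 5432 and the same placeholder names used above:

# ~/.pgpass -- format is hostname:port:database:username:password
YOUR_HOSTNAME:5432:DATABASE_NAME:YOUR_USERNAME:YOUR_PASSWORD

root@L: chmod 600 ~/.pgpass

libpq ignores this file unless its permissions are 600, and with it in place the --no-password flag works without setting PGPASSWORD at all.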

We can also set the password as an environment variable when scheduling our cron job. After the dump file is created, we want to send it to a secure server (an AWS instance, for example) and then delete the file from our local machine.

We can transfer files to remote servers with the scp (secure copy) tool, and we can quickly install it if we don't have it:

sudo apt-get install openssh-client
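
Note that scp takes the source first and the destination second, and a remote destination is written as user@host:path. A quick sketch, where the user and directory are placeholders for whatever your server expects:

# 'ubuntu' and /home/ubuntu/backups/ are placeholders for your server's user and directory
root@L: scp -i "PRIVATE_KEY.pem" dump.sql ubuntu@'SERVER IP':/home/ubuntu/backups/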

Our Automate.sh file should look like this now:

#!/bin/bash
#Filename: Automate.sh
#Description: Create a dump of my PSQL DB

pg_dump -U 'YOUR USERNAME' -h 'YOUR HOSTNAME' -d 'DATABASE NAME' --no-password > dump.sql

#Transfer file to server (destination is user@host:path)
scp -i "PRIVATE_KEY.pem" dump.sql 'USER'@'SERVER IP':'/PATH/ON/SERVER/'

#Delete file from localhost if the transfer succeeded
if [ $? -eq 0 ];
then
    echo "Backup successful!" && rm dump.sql
else
    echo "Oops!"
fi

To ensure your private key is not overwritten accidentally, we can set its permissions to 400 (read-only for the owner), which also keeps ssh happy because it refuses keys that other users can read. We can also set the permissions of our dump.sql file so that only the owner can read and write it.

root@L: chmod 400 'PRIVATE_KEY.pem'
root@L: chmod 600 dump.sql

Scheduling With Cron ✈️

Finally, we can set the job to run at 12:59 AM every day. Note that piping a new crontab in like this replaces your existing crontab, and cron starts jobs from your home directory, so use the full path to Automate.sh if it lives somewhere else.

root@L: crontab <<EOF
> PGPASSWORD='YOUR_DB_PASSWORD'
> 59 0 * * * ./Automate.sh
> EOF

Other ways to accomplish this are to save the schedule in a .cron file and install it with crontab, or to use the crontab -e command.

root@L: crontab -e

This opens a file in your editor where you can schedule your cron jobs.
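
For example, an entry like the one below schedules the same daily 12:59 AM run. The path to Automate.sh is a placeholder, and the redirect to a log file is optional but handy for debugging:

# m h dom mon dow command
PGPASSWORD='YOUR_DB_PASSWORD'
59 0 * * * /home/YOUR_USER/Automate.sh >> /home/YOUR_USER/backup.log 2>&1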

Top comments (7)

tsia

have you thought about what happens when pg_dump fails for some reason?

you would then have an empty dump.sql which is copied to the remote server and overwrites your backup.

paschal

That's true, but doesn't it depend on how exactly it fails? The dump file might not necessarily be replaced if it was a network connectivity issue. But I see your point; I think there should be an if block that handles errors and sends the output to a mailbox or something.

tsia

as far as i know at least bash always creates the output file (the shell sets up the > redirection before pg_dump even runs)

[Screenshot]

paschal

it actually does, just tried it. How would you have handled it?

tsia

i would just use the same if [ $? -eq 0 ]; thing.

maybe also add the date to the file name to prevent it from being overwritten on the remote server
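
A date-stamped file name is easy to build with the date command. A minimal sketch of that suggestion, reusing the placeholders from the article:

# produces e.g. dump_2023-01-31.sql
DUMP_FILE="dump_$(date +%F).sql"
pg_dump -U 'YOUR USERNAME' -h 'YOUR HOSTNAME' -d 'DATABASE NAME' --no-password > "$DUMP_FILE"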

paschal

That should work, or we just move the scp command:

pg_dump bla bla bla > dump.sql

if [ $? -eq 0 ];
then
   scp -i "" ...
   echo "backup successful" && rm dump.sql
else
   cat dump.sql | mailx -s "backup failure" 'youremailaddress'
   echo "backup failure" && rm dump.sql
fi
tsia

I would do it like this:

pg_dump... > dump.sql

if [ $? -ne 0 ];
then
    exit 1
fi

scp... && rm dump.sql

usually all the output from a cron job is sent to you via email, so no need for mailx etc.
When pg_dump encounters an error it writes it to stderr, so it won't end up in your dump file. Instead it will be printed to your screen when you run the script manually, or sent to you via mail when it is started as a cron job.

Even shorter solution:

#!/bin/bash

set -o errexit

pg_dump... > dump.sql
scp...

set -o errexit will cause the script to end as soon as any command fails.
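
Putting the thread's suggestions together, a fuller Automate.sh might look something like this. It is only a sketch: the user, host, and remote path are placeholders, and the date-stamped file name is optional.

#!/bin/bash
#Filename: Automate.sh
#Description: Dump the DB, ship it to a secure server, clean up locally

set -o errexit   # stop at the first failing command

DUMP_FILE="dump_$(date +%F).sql"   # e.g. dump_2023-01-31.sql

# PGPASSWORD (or ~/.pgpass) supplies the password because of --no-password
pg_dump -U 'YOUR USERNAME' -h 'YOUR HOSTNAME' -d 'DATABASE NAME' --no-password > "$DUMP_FILE"

# Copy the dump to the backup server, then remove the local copy
scp -i "PRIVATE_KEY.pem" "$DUMP_FILE" 'USER'@'SERVER IP':'/PATH/ON/SERVER/'
rm "$DUMP_FILE"
echo "Backup successful!"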