Properly Backing up PostgreSQL

Published on April 26th, 2018


Disaster befalls databases set up by even the most competent of developers, and I am far from the most competent of developers. With that in mind, it makes all the sense in the world that I would be a practiced hand at setting up databases and backups so as to save myself from my own inevitable mistakes. Unfortunately, I also get distracted by shiny things and run out of time to follow best practices, so I'm awful when it comes to making sure that my sites have good database backups.

The CMS I'm currently typing this into (Wagtail powered by PostgreSQL) is a perfect example of this: the database has no backup scripts currently running. If something bad happened to this server, I'd have to go hat-in-hand to the Wayback Machine and pray that they've archived my pages.

But this is no way to live! I want to rest assured that my site would be fine if my database ever decided to stop being a database and instead move to Jamaica to teach guitar to drunk tourists on the beach. I want a script that I can throw into a cron job, and forget about it. Specifically, the script should:

  1. Create compressed backups in a sensible directory
  2. Cover every PostgreSQL database on the server
  3. Clean up after itself, so the script doesn't fill up the hard disk and bring down the very server it was supposed to protect
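The three requirements above boil down to surprisingly little shell. As a minimal sketch (the directory, the postgres user, and the seven-day retention window are my own assumptions, not values taken from the real script):

```shell
#!/bin/sh
# Sketch of the three steps above: one compressed dump of every
# database, plus rotation so old dumps don't fill the disk.

backup_all() {            # steps 1 and 2: dump all databases, compressed
    dir="$1"
    mkdir -p "$dir"
    pg_dumpall -U postgres | gzip > "$dir/all-$(date +%F).sql.gz"
}

rotate_backups() {        # step 3: delete dumps older than N days
    dir="$1"; keep_days="$2"
    find "$dir" -name '*.sql.gz' -mtime +"$keep_days" -delete
}

# A cron-friendly entry point might look like:
#   backup_all /var/lib/pgsql/backups && rotate_backups /var/lib/pgsql/backups 7
```

The real script does quite a bit more (per-database dumps, weekly and monthly archives), but this is the skeleton.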

You Either Die A Hero

The Script

My preferred script is taken from a version we've made at Atlantic Media, which is in turn taken from a script produced by Aaron Axelsen in 2005, itself a port of an older MySQL backup utility by Friedrich Lobenstock. If I have prevented myself from losing a site to dumb migration issues, it is by standing on the shoulders of these giants.

Download the script from my GitHub repo and place it in /var/lib/pgsql/. Then, open the root user's crontab with crontab -e and add the following:

0 4 * * * /var/lib/pgsql/ >> /var/log/postgres_backup.log 2>&1

And that... is it! You may now do the database equivalent of running with scissors on this server, and restore from compressed, auto-rotating backups as needed. The script will run at 4:00am every day, creating fresh backups and deleting old ones. You can look through monthly or weekly archives, or restore from the previous day's backup in the event of a disaster. The world is your oyster!
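Backups are only useful if you can actually restore from them. Assuming a gzip-compressed pg_dumpall archive (the path and filename here are made-up examples, not the script's actual naming scheme), a restore is a single pipeline:

```shell
# Restore every database on the server from a compressed
# pg_dumpall archive. The path below is a hypothetical example.
restore_all() {
    gunzip -c "$1" | psql -U postgres -d postgres
}

# Usage: restore_all /var/lib/pgsql/backups/all-2018-04-25.sql.gz
```

Since pg_dumpall emits plain SQL (including CREATE DATABASE statements), piping it into psql against any existing database is all it takes.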

Next Steps

Technically, if the server went down or got lost, we'd still be in bad shape. The backups can and should live elsewhere as a means of avoiding total data loss. This can be accomplished either by mounting the backup directory on an NFS share, or by periodically copying the backups to a second server and/or your local machine.
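As one hedged example of the second approach, a second machine could pull the dumps nightly over SSH with rsync; the hostname, user, and paths below are placeholders, and the 5:00am schedule simply leaves time for the 4:00am backup to finish:

```shell
# Crontab entry on a second machine: mirror the backup directory
# over SSH every night. Host, user, and paths are placeholders.
0 5 * * * rsync -az --delete backup@db.example.com:/var/lib/pgsql/backups/ /srv/pg-backups/
```

Note that --delete mirrors the rotation on the source, so the copy won't grow without bound either.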

But it's like my great-grandpa used to say: "Don't let the perfect devops platform be the enemy of incrementally-better database recovery mechanisms" (the man was ahead of his time). With this script running, we can all sleep much better at night.