Backups on Linux Servers – Docker Volumes and Databases

In this tutorial, I’ll show you how to easily create scheduled backups for your entire Linux server. You can also use this strategy to back up Docker volumes and databases.

Redundancy is not a Backup!

Some people still hold the misconception that a redundancy option is a valid backup. If you’re running a hardware RAID or a ZFS filesystem, you can mirror hard drives to prevent data loss caused by a disk failure. But this is not a reliable backup solution, because mirroring copies all files and all changes in real time.

This also means that if you accidentally delete files, or something goes wrong during an update, you can’t restore those files from a redundancy option.

The same problem exists for a simple synchronization job that overwrites the data. You may be able to restore files or undo changes you notice immediately, but if you overwrite the copy every single day, you can’t restore older versions.

Backup Strategy for Linux

A reliable and solid backup strategy should create “incremental backups”. An incremental backup starts with a full copy of your files, but every subsequent backup only copies the changes since the last run. It also retains the previous states of the backup, which means we can go back in time and restore files from any specific timestamp at which a backup was taken.
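To illustrate the concept, here is a minimal sketch of incremental snapshots built with rsync hard links. Duplicati uses its own block-based storage format, so this only shows the idea, and the paths are placeholders:

#!/bin/bash
# Concept sketch: each snapshot stores only changed files;
# unchanged files are hard-linked to the previous snapshot.
SRC=/data        # placeholder: the data to back up
DEST=/mnt/backup # placeholder: where the snapshots live
NEW="$DEST/$(date +%Y%m%d%H%M)"
mkdir -p "$DEST"
# find the most recent snapshot, if one exists
LAST=$(ls -1d "$DEST"/2* 2>/dev/null | tail -n 1)
# unchanged files become hard links, so each snapshot only uses space for changes
rsync -a ${LAST:+--link-dest="$LAST"} "$SRC/" "$NEW/"

Each timestamped directory looks like a full copy, but only the changed files consume additional disk space.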

We should also store our backup files encrypted and in a separate location, because if anything goes wrong with our server and we need to do disaster recovery, we should still be able to access our backup files.

Deploy Duplicati to Back Up our Linux Server

Duplicati is free and open-source software that can create such incremental backups and store them with strong encryption. It supports standard protocols like FTP, SSH, and WebDAV, as well as many proprietary cloud storage providers.

It can be deployed as a simple Docker container, using the container image from linuxserver.io.

You can also use Portainer to deploy it in a nice graphical way.

---
version: "2.1"
services:
  duplicati:
    image: ghcr.io/linuxserver/duplicati
    container_name: duplicati
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - CLI_ARGS= #optional
    volumes:
      - </path/to/appdata/config>:/config # Duplicati's own configuration
      - </path/to/backups>:/backups # destination for the backup files
      - </path/to/source>:/source # the data you want to back up
    ports:
      - 8200:8200
    restart: unless-stopped
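Assuming you’ve saved this compose file as docker-compose.yml and replaced the placeholder paths, you can start the container and then reach the web UI on port 8200:

docker-compose up -d
# the web UI is now available at http://<your-server-ip>:8200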

Backup Strategy for Database Containers

In Duplicati, we can create scheduled backups and set up data retention. I like to use the “smart backup retention” in combination with a backup job that runs every day.

(Screenshot: smart backup retention settings in the Duplicati web UI)
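Smart backup retention keeps one backup per day for the last week, one per week for the last four weeks, and one per month for the last year. In Duplicati 2, the same policy can also be set manually through the retention-policy advanced option of a backup job (shown here as a sketch; check the option name in your version):

--retention-policy="1W:1D,4W:1W,12M:1M"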

Script to Dump Linux Databases

If you need to back up databases inside your containers, you could technically just copy the database files from the filesystem. But this is not a reliable solution for a database, because databases work differently than normal files.

The problem occurs when those database files are large and updates happen frequently. If you start a backup and the database server applies changes to the files before the backup job has finished, you can end up with inconsistent database entries. In the worst case, it may even corrupt the entire database, which makes the backup nearly useless.
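That’s why database servers ship their own dump tools that produce a consistent snapshot while the server keeps running. As a quick illustration (the container name mariadb and the password are placeholders), a one-off dump of a single MariaDB/MySQL container could look like this; the --single-transaction flag gives a consistent snapshot for InnoDB tables:

# one-off consistent dump of a single container (names are examples)
docker exec -e MYSQL_PWD='yourRootPassword' mariadb \
    mysqldump -u root --single-transaction mydatabase > mydatabase.sql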

To cover all our database containers at once, we create a bash script that generates these dumps automatically. Later, we can use Duplicati to copy the dumps to a separate location.

Here is an example bash script I’ve created to do an automatic dump of all my MySQL/MariaDB containers and also my bitwardenrs instance, which uses an SQLite database.

You will always find the latest version in my GitHub repository.

#!/bin/bash

# DB Container Backup Script Template
# ---
# This backup script can be used to automatically backup databases in docker containers.
# It currently supports mariadb, mysql and bitwardenrs containers.
# 

DAYS=2
BACKUPDIR=/home/xcad/backup
TIMESTAMP=$(date +"%Y-%m-%d %H:%M:%S") # used in the completion message below


# backup all mysql/mariadb containers

CONTAINER=$(docker ps --format '{{.Names}}:{{.Image}}' | grep 'mysql\|mariadb' | cut -d":" -f1)

echo "$CONTAINER"

if [ ! -d "$BACKUPDIR" ]; then
    mkdir -p "$BACKUPDIR"
fi

for i in $CONTAINER; do
    # read the database name and root password from the container's environment
    MYSQL_DATABASE=$(docker exec "$i" env | grep MYSQL_DATABASE | cut -d"=" -f2)
    MYSQL_PWD=$(docker exec "$i" env | grep MYSQL_ROOT_PASSWORD | cut -d"=" -f2)

    # dump the database inside the container and compress it on the host
    docker exec -e MYSQL_DATABASE="$MYSQL_DATABASE" -e MYSQL_PWD="$MYSQL_PWD" \
        "$i" /usr/bin/mysqldump -u root "$MYSQL_DATABASE" \
        | gzip > "$BACKUPDIR/$i-$MYSQL_DATABASE-$(date +%Y%m%d%H%M).sql.gz"

    # remove dumps older than $DAYS days, but only if newer ones exist
    OLD_BACKUPS=$(ls -1 "$BACKUPDIR/$i"*.gz 2>/dev/null | wc -l)
    if [ "$OLD_BACKUPS" -gt "$DAYS" ]; then
        find "$BACKUPDIR" -name "$i*.gz" -daystart -mtime +"$DAYS" -delete
    fi
done


# bitwarden backup

BITWARDEN_CONTAINERS=$(docker ps --format '{{.Names}}:{{.Image}}' | grep 'bitwardenrs' | cut -d":" -f1)

for i in $BITWARDEN_CONTAINERS; do
    # dump the SQLite database with the sqlite3 CLI inside the container
    docker exec "$i" /usr/bin/sqlite3 data/db.sqlite3 .dump \
        | gzip > "$BACKUPDIR/$i-$(date +%Y%m%d%H%M).sql.gz"

    OLD_BITWARDEN_BACKUPS=$(ls -1 "$BACKUPDIR/$i"*.gz 2>/dev/null | wc -l)
    if [ "$OLD_BITWARDEN_BACKUPS" -gt "$DAYS" ]; then
        find "$BACKUPDIR" -name "$i*.gz" -daystart -mtime +"$DAYS" -delete
    fi
done

echo "$TIMESTAMP Backup for Databases completed"

Run the Backup Script with systemd

You can also automatically run this script every day at midnight with the following systemd files. Just copy the two files db-backup.service and db-backup.timer into the /etc/systemd/system directory and enable the timer.

db-backup.service

[Unit]
Description=DB Container Backup Script
Wants=db-backup.timer
[Service]
Type=simple
# systemd requires an absolute path; adjust it to where you saved the script
ExecStart=/bin/bash /path/to/db-container-backup.sh
User=xcad
[Install]
WantedBy=default.target

db-backup.timer

[Unit]
Description=DB Container Backup Daily Job
[Timer]
OnCalendar=daily
Persistent=true
Unit=db-backup.service
[Install]
WantedBy=timers.target

Reload systemd and enable the timer with the following commands.

sudo systemctl daemon-reload
sudo systemctl enable db-backup.timer --now
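You can verify that the schedule is active and check the script’s output with the usual systemd tooling:

systemctl list-timers db-backup.timer # shows the next scheduled run
journalctl -u db-backup.service --since today # output of the last run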
