Complete DevOps Disaster Recovery Guide: Back Up Nginx, Databases, and Critical Linux Files

Introduction

Every DevOps engineer eventually faces the same nightmare: a production server suddenly crashes, files disappear after an update, or a database becomes corrupted without warning. In many cases, the real problem is not the crash itself — it is the lack of a proper backup strategy.

In real production environments, backups are not optional. They are part of the infrastructure lifecycle. A server can always be rebuilt, but lost production data can cost businesses money, reputation, and customer trust.

This guide is designed for developers, system administrators, and DevOps engineers who want to build a practical disaster recovery plan on Linux. Instead of only showing commands, we will explain what to back up, why it matters, and how to automate everything using shell scripts and cronjobs.

By the end of this guide, you will know how to back up Nginx configurations, systemd service files, databases, SSL certificates, and SSH configuration, and how to automate everything with a reusable .sh script.


Prerequisites

Before creating a disaster recovery strategy, ensure your Linux server environment is ready.

  • Operating System: Ubuntu 20.04, 22.04, or 24.04 LTS
  • User Access: Sudo privileges
  • Storage: Enough disk space for backup archives
  • Permissions: Root-level access to important directories

In production environments, it is recommended to store backups outside the application server, such as in cloud storage, on another VM, or on a NAS.
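
A simple way to ship archives off the box is rsync over SSH. A minimal sketch, assuming a reachable host named backup-host and a /srv/backups directory on it (both placeholders):

rsync -avz /backup/disaster-recovery/ user@backup-host:/srv/backups/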


Why Backup Matters in DevOps

A proper DevOps backup strategy covers more than databases. Many engineers back up only MySQL and forget application configurations, SSL certificates, or service files.

Since modern applications use CI/CD pipelines (like Azure DevOps, GitHub Actions, or GitLab CI), source code does not need to be backed up from the server. You can simply redeploy the code. However, you cannot easily redeploy lost databases, uploaded media, or server-level configurations.

A complete disaster recovery plan should include:

  • Nginx configuration files
  • Systemd service files
  • Database backups
  • SSL certificates
  • SSH configuration
  • Application environment files (.env)

Backing Up Nginx Files

Nginx acts as the front door of many production systems. Losing these files means websites may stop working completely.

Important Nginx locations:

/etc/nginx
/etc/letsencrypt

Create a backup:

sudo tar -czvf nginx-backup.tar.gz /etc/nginx /etc/letsencrypt

Why this matters: This command backs up website routing configurations, load balancer setups, and SSL certificates. We exclude the web root (source code) here because it should be version-controlled remotely.
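
Before trusting any archive, run a quick sanity check that it was written correctly and contains what you expect:

tar -tzf nginx-backup.tar.gz | head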


Backing Up Systemd Service Files

Many DevOps teams run custom applications as Linux services using systemd. To keep the backup clean, we only want to back up the actual .service files, ignoring unnecessary folders, symlinks, and OS defaults.

Custom systemd files are stored here:

/etc/systemd/system

Backup command using find to strictly target .service files:

sudo find /etc/systemd/system -name "*.service" -type f | sudo tar -czvf systemd-backup.tar.gz -T -

Without these custom service configurations, your applications may not restart automatically after a server reboot.
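
To restore them later, extract the archive back to / (tar stored the paths relative to root) and reload systemd so it picks up the restored unit files:

sudo tar -xzvf systemd-backup.tar.gz -C /
sudo systemctl daemon-reload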


Important Linux Files to Backup

Linux contains critical files that are often forgotten until disaster strikes.

1. Backup SSH Configuration

sudo tar -czvf ssh-backup.tar.gz /etc/ssh

Losing SSH configuration can lock you out of your own server.

2. Backup Environment Directories

sudo find /home/$USER/webapps -type d -name ".env" | sudo tar -czvf env-backup.tar.gz -T -

Environment folders usually contain sensitive data excluded from Git/Azure DevOps:

  • Database credentials
  • API keys
  • Secret tokens

3. Backup Entire Configuration Directory

sudo tar -czvf etc-backup.tar.gz /etc

This provides a safety net for restoring system-level configurations.
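
Since tar stores the paths relative to root, you can also recover a single file into a scratch directory instead of overwriting the live /etc (etc/fstab here is just an example member):

mkdir -p /tmp/etc-restore
sudo tar -xzvf etc-backup.tar.gz -C /tmp/etc-restore etc/fstab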


MySQL Backup

MySQL is one of the most common databases in production.

Backup all databases:

mysqldump -u root -p --all-databases > mysql-backup.sql

Backup a specific database:

mysqldump -u root -p production_db > production_db.sql

Compress backup:

gzip mysql-backup.sql
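
You can also combine the dump and compression in a single pipe, which avoids writing the large uncompressed file to disk first:

mysqldump -u root -p production_db | gzip > production_db.sql.gz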

PostgreSQL Backup

PostgreSQL ships with its own dump tools: pg_dumpall for the entire cluster and pg_dump for a single database.

Backup all databases:

sudo -u postgres pg_dumpall > postgres-backup.sql

Single database backup:

sudo -u postgres pg_dump my_database > my_database.sql

MongoDB Backup

MongoDB's mongodump tool exports backups in BSON format.

mongodump --out /backup/mongodb

Compress backup:

tar -czvf mongodb-backup.tar.gz /backup/mongodb

Redis Backup

Redis automatically stores snapshots in RDB format.

Backup Redis data:

sudo cp /var/lib/redis/dump.rdb redis-backup.rdb
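
Keep in mind that dump.rdb only reflects the last snapshot. To capture the current in-memory state, trigger a synchronous save before copying (assuming redis-cli can reach the local instance):

redis-cli SAVE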

Create Automated Backup Script

In real production environments, taking manual backups every day is not practical. DevOps engineers automate the entire disaster recovery process using shell scripts to avoid human mistakes and reduce downtime.

The following production-ready disaster recovery script supports:

  • Multiple Web Applications
  • MySQL Database Backups
  • PostgreSQL Database Backups
  • MongoDB Backup
  • Nginx Configuration
  • Custom Systemd Services (*.service files only)
  • SSL Certificates
  • Environment Directories (.env folders)
  • Media & Upload Files
  • Cron Jobs
  • System Logs
  • Installed Package List

One of the biggest advantages of this script is flexibility. You can enable or disable features using simple true or false values based on your infrastructure needs. Note that IS_SOURCE is set to false because we assume source code is securely maintained in a repository like Azure DevOps.

Step 1: Create the Backup Script

sudo nano disaster-recovery-backup.sh

Paste the following script:

#!/bin/bash

# ======================================================
# DISASTER RECOVERY BACKUP SCRIPT
# ======================================================

set -e

# Source code is in Azure DevOps, no need to backup
IS_SOURCE=false 

IS_DB=true
IS_ENV=true
IS_MEDIA=true
IS_NGINX=true
IS_SYSTEMD=true
IS_SSL=true
IS_CRON=true
IS_LOGS=true
IS_PACKAGE_LIST=true

DATE=$(date +%F-%H-%M-%S)

BACKUP_BASE="/backup/disaster-recovery"
BACKUP_ROOT="$BACKUP_BASE/$DATE"

PROJECT_PATH="/home/$USER/webapps"

MYSQL_USER="root"

POSTGRES_USER="postgres"

mkdir -p "$BACKUP_ROOT"

echo "DISASTER RECOVERY BACKUP STARTED"

# SOURCE BACKUP (Disabled by default)
if [ "$IS_SOURCE" = true ]; then
mkdir -p "$BACKUP_ROOT/source"

for project in "$PROJECT_PATH"/*
do
if [ -d "$project" ]; then
PROJECT_NAME=$(basename "$project")

tar -czf \
"$BACKUP_ROOT/source/${PROJECT_NAME}.tar.gz" \
"$project"
fi
done
fi

# MYSQL BACKUP
if [ "$IS_DB" = true ]; then
if command -v mysql > /dev/null; then

mkdir -p "$BACKUP_ROOT/mysql"

DATABASES=$(mysql \
-u"$MYSQL_USER" \
-p"$MYSQL_PASSWORD" \
-e "SHOW DATABASES;" \
| grep -Ev "(Database|information_schema|performance_schema|mysql|sys)")

for DB in $DATABASES
do
mysqldump \
-u"$MYSQL_USER" \
-p"$MYSQL_PASSWORD" \
--single-transaction \
--routines \
--triggers \
"$DB" \
| gzip > "$BACKUP_ROOT/mysql/${DB}.sql.gz"
done
fi
fi

# POSTGRES BACKUP
if [ "$IS_DB" = true ]; then
if command -v psql > /dev/null; then

mkdir -p "$BACKUP_ROOT/postgres"

DATABASES=$(sudo -u "$POSTGRES_USER" \
psql -t -c \
"SELECT datname FROM pg_database WHERE datistemplate = false;")

for DB in $DATABASES
do
DB=$(echo "$DB" | xargs)

if [ -n "$DB" ]; then
sudo -u "$POSTGRES_USER" \
pg_dump "$DB" \
| gzip > "$BACKUP_ROOT/postgres/${DB}.sql.gz"
fi
done
fi
fi

# MONGODB BACKUP (gated by the same IS_DB flag as the other databases)
if [ "$IS_DB" = true ] && command -v mongodump > /dev/null; then
mongodump --out "$BACKUP_ROOT/mongodb"
fi

# ==========================================
# SECRETS / .ENV FOLDER BACKUP
# ==========================================
if [ "$IS_ENV" = true ]; then
# Find directories (-type d) named ".env" inside the webapps folder
find "$PROJECT_PATH" \
-type d \
-name ".env" \
| while read -r folder
do
  # Get the parent folder name (the project name)
  PROJECT_NAME=$(basename "$(dirname "$folder")")
  
  # Compress the entire .env folder into a secure archive
  tar -czf \
  "$BACKUP_ROOT/secrets-${PROJECT_NAME}.tar.gz" \
  "$folder"
done
fi

# MEDIA
if [ "$IS_MEDIA" = true ]; then
find "$PROJECT_PATH" \
-type d \
\( -name media -o -name uploads \) \
| while read -r folder
do
PROJECT_NAME=$(basename "$(dirname "$folder")")

tar -czf \
"$BACKUP_ROOT/media-${PROJECT_NAME}.tar.gz" \
"$folder"
done
fi

# NGINX
if [ "$IS_NGINX" = true ]; then
tar -czf \
"$BACKUP_ROOT/nginx.tar.gz" \
/etc/nginx
fi

# SYSTEMD
# Backs up strictly *.service files (ignores symlinks/folders)
if [ "$IS_SYSTEMD" = true ]; then
find /etc/systemd/system -name "*.service" -type f \
| tar -czf "$BACKUP_ROOT/systemd.tar.gz" -T -
fi

# SSL
if [ "$IS_SSL" = true ]; then
[ -d "/etc/letsencrypt" ] && \
tar -czf \
"$BACKUP_ROOT/ssl.tar.gz" \
/etc/letsencrypt
fi

# CRONJOBS
if [ "$IS_CRON" = true ]; then
crontab -l \
> "$BACKUP_ROOT/cronjobs.txt" \
2>/dev/null || true
fi

# LOGS
if [ "$IS_LOGS" = true ]; then
mkdir -p "$BACKUP_ROOT/logs"

[ -d "/var/log/nginx" ] && \
cp -r /var/log/nginx "$BACKUP_ROOT/logs/"
fi

# PACKAGE LIST
if [ "$IS_PACKAGE_LIST" = true ]; then
dpkg --get-selections \
> "$BACKUP_ROOT/installed-packages.txt"
fi

FINAL_ARCHIVE="$BACKUP_BASE/backup-$DATE.tar.gz"

tar -czf \
"$FINAL_ARCHIVE" \
-C "$BACKUP_BASE" \
"$DATE"

echo "BACKUP COMPLETED SUCCESSFULLY"

Security Note: Avoid hardcoding database passwords directly inside shell scripts. The script above expects MYSQL_PASSWORD as an environment variable; export it before running.

Example:

export MYSQL_PASSWORD="yourpassword"
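
Alternatively, MySQL clients can read credentials from an option file such as ~/.my.cnf, which keeps the password out of scripts and process listings. A minimal sketch:

[client]
user=root
password=yourpassword

Restrict the file with chmod 600 ~/.my.cnf; mysql and mysqldump will then authenticate without the -p flag.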

Step 2: Give Execute Permission

chmod +x disaster-recovery-backup.sh

Step 3: Run Backup Manually

./disaster-recovery-backup.sh

If everything works correctly, the script creates a timestamped backup directory and a final compressed archive under /backup/disaster-recovery.

Step 4: Verify Backup Files

cd /backup/disaster-recovery
ls -lh

You should see generated archives like:

backup-2026-05-10-02-00-00.tar.gz

Timestamp-based naming helps DevOps engineers quickly identify the latest stable backup during disaster recovery situations.
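
You can also confirm an archive is readable end to end without extracting it; a corrupt archive makes tar exit with an error:

tar -tzf backup-2026-05-10-02-00-00.tar.gz > /dev/null && echo "Archive OK"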


Automate Backup Using Cronjob

Cronjobs allow Linux to run scheduled tasks automatically.

Open the cron editor. Several backed-up paths (such as /etc/letsencrypt) require root access, so schedule the job in root's crontab:

sudo crontab -e

Run backup every day at 2 AM:

0 2 * * * /home/ubuntu/disaster-recovery-backup.sh

Run every Sunday:

0 1 * * 0 /home/ubuntu/disaster-recovery-backup.sh

View cronjobs:

crontab -l
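
For troubleshooting failed runs, redirect the script's output to a log file. The path /var/log/disaster-recovery.log is only an example; use whatever location fits your setup:

0 2 * * * /home/ubuntu/disaster-recovery-backup.sh >> /var/log/disaster-recovery.log 2>&1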

Cronjob Timing Basics

Cronjobs use five fields:

* * * * *
│ │ │ │ │
│ │ │ │ └── Day of week
│ │ │ └──── Month
│ │ └────── Day of month
│ └──────── Hour
└────────── Minute

Examples

0 2 * * * → Daily at 2 AM
*/10 * * * * → Every 10 minutes
0 0 * * 0 → Every Sunday midnight
0 */6 * * * → Every 6 hours

We will explain cronjobs in full detail in a separate guide.


Restore Process

Backups are useless if you do not know how to restore them.

Restore Nginx

sudo tar -xzvf nginx.tar.gz -C /
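
After extracting, validate the configuration and reload Nginx so the restored files take effect:

sudo nginx -t && sudo systemctl reload nginx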

Restore MySQL

mysql -u root -p < mysql-backup.sql
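
The automated script produces per-database gzipped dumps instead. To restore one of them (production_db as an example), create the database if needed and stream the dump through zcat:

mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS production_db;"
zcat production_db.sql.gz | mysql -u root -p production_db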

Restore PostgreSQL

sudo -u postgres psql < postgres-backup.sql
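
For the per-database gzipped dumps the automated script produces, create the database first and pipe the dump in (my_database is an example name):

sudo -u postgres createdb my_database
zcat my_database.sql.gz | sudo -u postgres psql my_database

Restore MongoDB

mongodump output is restored with its counterpart, pointed at the dump directory:

mongorestore /backup/mongodb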

Restore Redis

sudo cp redis-backup.rdb /var/lib/redis/dump.rdb
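
Redis loads dump.rdb only at startup, and the file must be owned by the redis user. A safer full sequence (redis-server is the Ubuntu service name; it may differ on other distributions):

sudo systemctl stop redis-server
sudo cp redis-backup.rdb /var/lib/redis/dump.rdb
sudo chown redis:redis /var/lib/redis/dump.rdb
sudo systemctl start redis-server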

Frequently Asked Questions

1. How often should I take backups?

Daily backups are recommended for production systems. Critical systems may require hourly backups.

2. Should backups stay on the same server?

No. Always store backups on remote storage or another server.

3. Why backup SSL certificates?

Without SSL certificates, HTTPS websites stop working and users may see browser warnings.


Conclusion

Disaster recovery is not just about databases. A production-ready backup strategy includes Nginx, systemd files, SSH, SSL certificates, environment variables, and databases.

In DevOps, automation saves time and reduces human mistakes. Using shell scripts with cronjobs ensures backups happen consistently without manual intervention.

Remember this simple rule: if rebuilding your server would take hours without a file, that file should probably be backed up.
