How to Automate Backups on a Dedicated Server
Using Rsync and Cron

Losing data on a dedicated server can be catastrophic — whether it's from accidental deletion, hardware failure, or a misconfigured deployment. A solid automated backup strategy is one of the most important things you can set up after provisioning a new server. In this tutorial, you'll learn how to build a complete, production-ready automated backup system.

We'll use two standard Linux tools: rsync (for efficient, incremental file transfers) and cron (for scheduled task automation). We'll also cover SSH key authentication, backup rotation, logging, and email alerts.

ℹ️ Prerequisites: You'll need root or sudo access to a Linux dedicated server, a backup destination server or storage location, and basic familiarity with the Linux command line.

Step 1: Install & Verify rsync

Most modern Linux distributions come with rsync pre-installed. Let's verify and install if needed.

Check if rsync is installed:

Terminal
rsync --version

Ubuntu / Debian:

bash
sudo apt update
sudo apt install rsync -y

CentOS / AlmaLinux / Rocky Linux:

bash
sudo dnf install rsync -y

Also make sure rsync is installed on your backup destination server, using the appropriate command above for its distribution.

Step 2: Set Up SSH Key Authentication

Automated backups need to connect to the backup server without prompting for a password. We'll create a dedicated SSH key pair for the backup user.

Generate a dedicated SSH key pair (on the source server):

Source Server
# Generate a new ed25519 key pair (no passphrase for automation)
ssh-keygen -t ed25519 -C "backup@sourceserver" -f ~/.ssh/backup_key -N ""

# View the public key to copy it
cat ~/.ssh/backup_key.pub

Copy the public key to the backup/destination server:

Source Server
ssh-copy-id -i ~/.ssh/backup_key.pub backupuser@BACKUP_SERVER_IP

Replace backupuser with the user on your destination server and BACKUP_SERVER_IP with its IP address.

Add the key manually (if ssh-copy-id is unavailable):

Destination Server
# On the destination server, as backupuser:
mkdir -p ~/.ssh
chmod 700 ~/.ssh
echo "YOUR_PUBLIC_KEY_HERE" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Test the passwordless connection:

Source Server
ssh -i ~/.ssh/backup_key backupuser@BACKUP_SERVER_IP "echo 'SSH OK'"

You should see SSH OK with no password prompt.

⚠️ Security Note: Since this key has no passphrase, protect the private key file. Set permissions with: chmod 600 ~/.ssh/backup_key — and never share or expose this file.

Step 3: Test Your First rsync Transfer

Before writing scripts, let's understand the key rsync flags and test a manual transfer.

Flag          Meaning
-a            Archive mode: preserves permissions, timestamps, symlinks, owner, group
-v            Verbose: shows file transfer details
-z            Compress data during transfer (reduces bandwidth)
--delete      Delete files on the destination that no longer exist on the source
--exclude     Skip specific files or directories
-e            Specify the remote shell (used to pass the SSH key)
--progress    Show real-time transfer progress (for manual testing)
--dry-run     Simulate the transfer without making changes

Run a dry-run test:

Source Server
rsync -avz --dry-run \
    -e "ssh -i ~/.ssh/backup_key -o StrictHostKeyChecking=no" \
    /var/www/html/ \
    backupuser@BACKUP_SERVER_IP:/backups/www-html/

💡 Trailing Slashes Matter! /var/www/html/ (with trailing slash) syncs the contents of the folder. /var/www/html (no trailing slash) syncs the folder itself including its directory name. Be consistent in your scripts.
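The rule is easy to verify locally with a throwaway directory (no remote server needed):

```shell
# Quick local demo of the trailing-slash rule (uses temporary paths)
demo=$(mktemp -d)
mkdir -p "${demo}/src" "${demo}/a" "${demo}/b"
touch "${demo}/src/file.txt"

rsync -a "${demo}/src/" "${demo}/a/"   # with slash: contents of src/ land in a/
rsync -a "${demo}/src"  "${demo}/b/"   # without: the src directory itself lands in b/

ls "${demo}/a"   # file.txt
ls "${demo}/b"   # src
rm -rf "${demo}"
```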

Step 4: Write the Backup Shell Script

Now let's create a reusable, configurable backup script at /usr/local/bin/server-backup.sh

/usr/local/bin/server-backup.sh
#!/bin/bash
# ============================================================
# FitServers Automated Backup Script v1.0
# ============================================================

# ── CONFIGURATION ───────────────────────────────────────────
BACKUP_USER="backupuser"
BACKUP_HOST="BACKUP_SERVER_IP"
SSH_KEY="/root/.ssh/backup_key"
REMOTE_BASE="/backups/$(hostname)"
LOG_FILE="/var/log/backup.log"
RETENTION_DAYS=14   # Keep backups for 14 days

# Directories to back up
BACKUP_SOURCES=(
    "/var/www"
    "/etc"
    "/home"
    "/root"
)

# Paths to exclude
EXCLUDES=(
    "*.log"
    "*.tmp"
    ".cache"
    "node_modules"
    "__pycache__"
)

# ── SETUP ───────────────────────────────────────────────────
DATE=$(date +%Y-%m-%d)
TIMESTAMP=$(date +%Y-%m-%d_%H-%M-%S)
REMOTE_DIR="${REMOTE_BASE}/${DATE}"

# Build exclude flags as an array so wildcard patterns survive word splitting
EXCLUDE_FLAGS=()
for pattern in "${EXCLUDES[@]}"; do
    EXCLUDE_FLAGS+=(--exclude="${pattern}")
done

# ── LOGGING HELPER ──────────────────────────────────────────
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "${LOG_FILE}"
}

# ── BEGIN BACKUP ────────────────────────────────────────────
log "Starting backup to ${BACKUP_HOST}:${REMOTE_DIR}"
TOTAL_ERRORS=0

# Make sure today's remote directory exists before syncing into it
ssh -i "${SSH_KEY}" -o StrictHostKeyChecking=no \
    "${BACKUP_USER}@${BACKUP_HOST}" "mkdir -p '${REMOTE_DIR}'"

for SOURCE in "${BACKUP_SOURCES[@]}"; do
    if [ ! -d "${SOURCE}" ]; then
        log "SKIP: ${SOURCE} does not exist"
        continue
    fi

    # Turn a path like /var/www into the directory name var_www
    DEST_NAME=$(echo "${SOURCE}" | sed 's|/|_|g' | sed 's|^_||')
    REMOTE_PATH="${REMOTE_DIR}/${DEST_NAME}"
    log "Backing up: ${SOURCE} → ${REMOTE_PATH}"

    rsync -az --delete \
        -e "ssh -i ${SSH_KEY} -o StrictHostKeyChecking=no -o ConnectTimeout=30" \
        "${EXCLUDE_FLAGS[@]}" \
        "${SOURCE}/" \
        "${BACKUP_USER}@${BACKUP_HOST}:${REMOTE_PATH}/" \
        >> "${LOG_FILE}" 2>&1

    EXIT_CODE=$?
    if [ ${EXIT_CODE} -eq 0 ]; then
        log "SUCCESS: ${SOURCE}"
    else
        log "ERROR: ${SOURCE} failed with exit code ${EXIT_CODE}"
        TOTAL_ERRORS=$(( TOTAL_ERRORS + 1 ))
    fi
done

log "Backup complete. Errors: ${TOTAL_ERRORS}"
exit ${TOTAL_ERRORS}

Make the script executable:

Terminal
chmod +x /usr/local/bin/server-backup.sh

Step 5: Add Backup Rotation Logic

Without rotation, backups will fill your storage within days. Add this block to your server-backup.sh script, just before the final exit line:

Append to server-backup.sh
# ── BACKUP ROTATION ─────────────────────────────────────────
log "Running backup rotation (keeping last ${RETENTION_DAYS} days)..."

ssh -i "${SSH_KEY}" \
    -o StrictHostKeyChecking=no \
    "${BACKUP_USER}@${BACKUP_HOST}" \
    "find ${REMOTE_BASE} -maxdepth 1 -type d -name '????-??-??' \
        -mtime +${RETENTION_DAYS} -exec rm -rf {} \; 2>/dev/null"

if [ $? -eq 0 ]; then
    log "Rotation complete: old backups removed."
else
    log "WARNING: Rotation may have encountered issues."
fi

💡 Rotation Strategy Tip: RETENTION_DAYS=14 keeps daily backups for 2 weeks. For a more advanced strategy, consider keeping 7 daily + 4 weekly + 12 monthly backups — known as the Grandfather-Father-Son (GFS) rotation scheme.
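The GFS decision can be sketched as a small shell helper that decides whether a dated directory should be kept. The tier lengths below (dailies for 7 days, Sunday weeklies for 28 days, 1st-of-month monthlies for 365 days) are illustrative, not part of this guide's script, and the sketch assumes GNU date:

```shell
# Hypothetical GFS keep/prune decision for YYYY-MM-DD backup directories
keep_backup() {
    local dir_date="$1" today="$2"
    local age dow dom
    age=$(( ( $(date -ud "${today}" +%s) - $(date -ud "${dir_date}" +%s) ) / 86400 ))
    dow=$(date -ud "${dir_date}" +%u)   # 1=Monday ... 7=Sunday
    dom=$(date -ud "${dir_date}" +%d)
    [ "${age}" -le 7 ] && return 0                           # daily tier
    [ "${dow}" -eq 7 ] && [ "${age}" -le 28 ] && return 0    # weekly tier (Sundays)
    [ "${dom}" = "01" ] && [ "${age}" -le 365 ] && return 0  # monthly tier (1st of month)
    return 1                                                 # prune it
}

# Example usage: prune a local mirror of the backup tree
# for d in /backups/myhost/????-??-??; do
#     keep_backup "$(basename "$d")" "$(date +%F)" || rm -rf "$d"
# done
```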

Step 6: Schedule Backups with Cron

Next, schedule the backup script to run automatically with cron. We'll set it to run daily at 2:00 AM, a typical low-traffic window.

Open the root crontab:

Terminal
crontab -e

Add your backup schedule:

crontab
# Format: minute hour day month weekday command
# Run daily at 2:00 AM
0 2 * * * /usr/local/bin/server-backup.sh >> /var/log/backup.log 2>&1

Common Cron Schedule Reference

Schedule                          Cron Expression
Every day at 2:00 AM              0 2 * * *
Every 6 hours                     0 */6 * * *
Every Sunday at 3:00 AM           0 3 * * 0
1st of every month at midnight    0 0 1 * *
Weekdays only at 1:30 AM          30 1 * * 1-5

Verify your crontab entry:

Terminal
crontab -l

Step 7: Enable Logging

Our script already appends to /var/log/backup.log. Let's create the log file and set up log rotation.

Create the log file:

Terminal
touch /var/log/backup.log
chmod 640 /var/log/backup.log

Set up logrotate:

/etc/logrotate.d/server-backup
/var/log/backup.log {
    weekly
    rotate 8
    compress
    delaycompress
    missingok
    notifempty
    create 640 root root
}

View recent backup logs:

Terminal
# View last 50 lines
tail -n 50 /var/log/backup.log

# Live-stream the log
tail -f /var/log/backup.log

# Search for errors only
grep "ERROR" /var/log/backup.log

Step 8: Set Up Email Alerts

You need to know when a backup fails — not find out days later. We'll configure the script to send email alerts on failure.

Install mailutils:

Ubuntu / Debian
sudo apt install mailutils -y

Add to the configuration section of server-backup.sh:

server-backup.sh — config section
ALERT_EMAIL="admin@yourdomain.com"
HOSTNAME=$(hostname)

Replace the final exit line with this block:

server-backup.sh — end of script
# ── SEND ALERT IF ERRORS ────────────────────────────────────
if [ ${TOTAL_ERRORS} -gt 0 ]; then
    SUBJECT="[BACKUP FAILED] ${HOSTNAME} — ${TIMESTAMP}"
    BODY="Backup on ${HOSTNAME} completed with ${TOTAL_ERRORS} error(s).\n\nLast 20 lines:\n$(tail -n 20 "${LOG_FILE}")"
    echo -e "${BODY}" | mail -s "${SUBJECT}" "${ALERT_EMAIL}"
    log "Alert email sent to ${ALERT_EMAIL}"
    exit 1
fi

log "All backups completed successfully."
exit 0

⚠️ Mail Delivery: For reliable email delivery, configure your server's MTA (Postfix or msmtp) with an authenticated SMTP relay such as SendGrid, Mailgun, or Amazon SES. Plain sendmail is often blocked by ISPs or marked as spam.
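If you go the msmtp route, a minimal relay configuration is sketched below. The relay host, addresses, and password file are placeholders to replace with your provider's details:

/etc/msmtprc
```
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        /var/log/msmtp.log

account        relay
host           smtp.example.com
port           587
from           alerts@yourdomain.com
user           alerts@yourdomain.com
passwordeval   "cat /etc/msmtp-password"

account default : relay
```

With the msmtp-mta package installed (which provides the sendmail interface), the mail command in the alert block will route through this relay.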

Step 9: Test & Verify Your Backup

Never assume a backup works — verify it. Run the script manually and confirm the files appear on the destination.

Run the script manually:

Terminal
bash /usr/local/bin/server-backup.sh

Check the log output:

Terminal
cat /var/log/backup.log

Verify files on the destination server:

Destination Server
# List the backup directories created today; replace SOURCE_HOSTNAME with the
# source server's hostname (running $(hostname) here would resolve to the
# destination's name instead)
ls -la /backups/SOURCE_HOSTNAME/$(date +%Y-%m-%d)/

# Check total size
du -sh /backups/SOURCE_HOSTNAME/$(date +%Y-%m-%d)/

Perform a test restore:

Source Server (restore test)
# Restore /etc from backup to a temporary location
rsync -avz \
-e "ssh -i ~/.ssh/backup_key" \
backupuser@BACKUP_SERVER_IP:/backups/HOSTNAME/YYYY-MM-DD/etc/ \
/tmp/restore-test/

Step 10: Hardening & Best Practices

1. Use a restricted backup user on the destination

Destination Server
useradd -m -s /bin/bash backupuser
mkdir -p /backups
chown backupuser:backupuser /backups

2. Restrict the SSH key to rsync commands only

rsync ships with a helper script, rrsync, built for exactly this: it forces the key to run only rsync, confined to a given directory. In the destination server's ~/.ssh/authorized_keys, prefix the key (the rrsync path varies by distribution; on Debian/Ubuntu it is typically /usr/bin/rrsync or under /usr/share/doc/rsync/scripts/):

~/.ssh/authorized_keys on destination
command="/usr/bin/rrsync /backups",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA...

Note that a forced command also blocks the plain ssh commands used for remote directory creation and rotation in earlier steps, so with this restriction in place, run rotation as a local cron job on the destination server instead.

3. Verify backup integrity with checksums

Terminal
# Generate checksums before backup (-exec ... + handles filenames with spaces)
find /var/www -type f -exec md5sum {} + > /root/checksums-before.txt

# After restore, compare
md5sum -c /root/checksums-before.txt

4. Follow the 3-2-1 backup rule

  • 3 copies of your data
  • 2 different storage types (e.g., local disk + remote server)
  • 1 offsite copy (e.g., a FitServers server in a different datacenter)

5. Monitor disk usage on the backup server

Destination Server crontab
# Add to crontab — alert if disk usage reaches 80% or more
0 6 * * * df -h /backups | awk 'NR==2{print $5}' | grep -qE "^(8[0-9]|9[0-9]|100)%" && echo "Backup disk over 80% full" | mail -s "Disk Warning" admin@yourdomain.com

🚨 Don't Store Backups on the Same Server: A backup stored on the same physical server as your data provides zero protection against hardware failure, ransomware, or datacenter incidents. Always back up to a separate server or storage destination.

Quick Reference Summary

Component         Location / Command                  Purpose
Backup script     /usr/local/bin/server-backup.sh     Main rsync backup logic
Log file          /var/log/backup.log                 Timestamped backup output
Cron schedule     crontab -e                          Daily 2 AM automated trigger
Log rotation      /etc/logrotate.d/server-backup      Weekly log compression & rotation
SSH key           /root/.ssh/backup_key               Passwordless authentication
Retention         RETENTION_DAYS=14 in script         Auto-delete backups older than N days

Conclusion

You now have a fully automated, incremental backup system running on your dedicated server. Your setup includes:

  • A robust rsync script with configurable sources and exclusions
  • Passwordless SSH authentication for secure automated transfers
  • Automatic backup rotation to manage storage
  • A cron schedule for hands-off daily execution
  • Structured logging with automatic rotation
  • Email alerts so failures never go unnoticed

The key next step is to test a full restore from your backup at least once to confirm the process works end-to-end before you ever need it in a real emergency.

With this foundation in place, you can extend the system further — adding database dumps (MySQL/PostgreSQL), backing up Docker volumes, or integrating with monitoring tools like Grafana or Prometheus.
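For example, database dumps can be bolted on with a single extra cron entry that runs before the 2:00 AM rsync job, writing into a directory you then add to BACKUP_SOURCES. The path below is illustrative, credentials are assumed to live in root's ~/.my.cnf, and note that percent signs must be escaped as \% inside crontab entries:

crontab
```
# Dump all MySQL databases at 1:30 AM, before the 2:00 AM backup run
30 1 * * * mysqldump --all-databases --single-transaction | gzip > /var/backups/mysql/all-databases_$(date +\%F).sql.gz
```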

Discover FitServers Dedicated Server Locations

FitServers servers are available around the world, providing diverse options for hosting websites. Each region offers unique advantages, making it easier to choose a location that best suits your specific hosting needs.