Automated backups are the baseline for running any production workload on a Linux VPS. Even a stable server can suffer from accidental deletion, filesystem corruption, failed updates, or a compromised account.
This guide shows a practical backup setup: what to include, how to create daily archives, how to keep offsite copies, and how to verify restores. If you’re hosting real projects, start with a reliable Linux VPS with enough storage and consistent disk performance to run backups without impacting users.
Before writing any scripts, define a simple strategy. This improves reliability and also makes troubleshooting much easier later.
For most websites and applications, the “must-have” backup set looks like this:
- /var/www/
- /etc/
- /home/
- /etc/letsencrypt/ or your web server config paths

Tip: If your VPS is used for containers (Docker), you usually want to back up persistent volumes and app configs, not container layers.
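Before scripting anything, it helps to estimate how much disk space this set actually needs, so your /backup volume (and any offsite target) can hold several days of archives:

```shell
# Rough on-disk size of each backup target; run as root on a real
# server so du can read everything (errors are silenced here).
du -sh /var/www /etc /home 2>/dev/null || true
```

Multiply the total by your retention window to get a rough storage budget; gzip compression usually shrinks it further.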
Create a dedicated backup folder with strict permissions. This prevents other users/processes from reading sensitive backups.
sudo mkdir -p /backup
sudo chown root:root /backup
sudo chmod 700 /backup
Important: Make sure your backup path is not inside your website directory (for example, not under /var/www), otherwise you may accidentally expose archives via HTTP.
A daily compressed archive is a good baseline. Use exclusions to avoid backing up virtual filesystems and to prevent recursive self-backups.
sudo tar -czf /backup/backup-$(date +%F).tar.gz \
--one-file-system \
--exclude=/backup \
--exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
/var/www /etc /home
If your VPS is busy, you can lower backup impact using CPU and I/O priority:
sudo nice -n 19 ionice -c3 tar -czf /backup/backup-$(date +%F).tar.gz \
--one-file-system \
--exclude=/backup \
--exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
/var/www /etc /home
- -c — create archive
- -z — gzip compression
- -f — output file
- --one-file-system — don’t cross mount points (helps avoid surprises)

Files alone are not enough for most production servers. Add database dumps to your backup routine (preferably stored alongside your archives).
sudo mysqldump --all-databases --single-transaction --routines --events \
| gzip > /backup/mysql-all-$(date +%F).sql.gz
Tip: For automation, configure credentials safely (for example in /root/.my.cnf) instead of putting passwords in cron.
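For example, a minimal /root/.my.cnf could look like this (hypothetical credentials shown; restrict it with chmod 600 so only root can read it):

```
[client]
user=root
password=your-db-password
```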
sudo -u postgres pg_dumpall | gzip > /backup/postgres-all-$(date +%F).sql.gz
Always verify that files exist and are readable. A zero-byte archive or a broken dump can happen silently if you never check.
ls -lh /backup
tar -tzf /backup/backup-$(date +%F).tar.gz | head
For extra confidence, generate checksums (useful for offsite transfers):
cd /backup
sha256sum backup-$(date +%F).tar.gz > backup-$(date +%F).tar.gz.sha256
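This sanity check can be automated too. A small sketch that flags a missing, empty, or suspiciously small archive (the 1 MB threshold is an arbitrary example; the path matches the daily archive name used above):

```shell
# Warn if today's archive is missing, empty, or under ~1 MB.
ARCHIVE="/backup/backup-$(date +%F).tar.gz"
if [ ! -s "${ARCHIVE}" ] || [ "$(stat -c%s "${ARCHIVE}")" -lt 1048576 ]; then
  echo "WARNING: backup archive looks wrong: ${ARCHIVE}" >&2
fi
```

Wiring such a warning into email or chat alerts is what catches silently broken backups early.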
Instead of complex one-liners in cron, put your logic into a script. This makes logging, retention, and offsite sync much cleaner.
sudo nano /usr/local/sbin/backup-vps.sh
Example script (edit paths to match your server):
#!/usr/bin/env bash
set -euo pipefail
BACKUP_DIR="/backup"
DATE="$(date +%F)"
HOST="$(hostname -s)"
RETENTION_DAYS="7"
ARCHIVE="${BACKUP_DIR}/${HOST}-files-${DATE}.tar.gz"
MYSQL_DUMP="${BACKUP_DIR}/${HOST}-mysql-${DATE}.sql.gz"
PG_DUMP="${BACKUP_DIR}/${HOST}-postgres-${DATE}.sql.gz"
mkdir -p "${BACKUP_DIR}"
chmod 700 "${BACKUP_DIR}"
# 1) Files archive (adjust folders if needed)
nice -n 19 ionice -c3 tar -czf "${ARCHIVE}" \
  --one-file-system \
  --exclude=/backup \
  --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
  /var/www /etc /home

# 2) MySQL/MariaDB dump (optional: remove if you don't use MySQL)
if command -v mysqldump >/dev/null 2>&1; then
  mysqldump --all-databases --single-transaction --routines --events \
    | gzip > "${MYSQL_DUMP}" || true
fi

# 3) PostgreSQL dump (optional: remove if you don't use PostgreSQL)
if command -v pg_dumpall >/dev/null 2>&1 && id postgres >/dev/null 2>&1; then
  sudo -u postgres pg_dumpall | gzip > "${PG_DUMP}" || true
fi
# 4) Checksums (helpful for integrity checks after transfers)
cd "${BACKUP_DIR}"
sha256sum "$(basename "${ARCHIVE}")" > "$(basename "${ARCHIVE}").sha256"
# 5) Retention cleanup
find "${BACKUP_DIR}" -type f -name "${HOST}-files-*.tar.gz" -mtime +"${RETENTION_DAYS}" -delete
find "${BACKUP_DIR}" -type f -name "${HOST}-mysql-*.sql.gz" -mtime +"${RETENTION_DAYS}" -delete
find "${BACKUP_DIR}" -type f -name "${HOST}-postgres-*.sql.gz" -mtime +"${RETENTION_DAYS}" -delete
find "${BACKUP_DIR}" -type f -name "${HOST}-files-*.tar.gz.sha256" -mtime +"${RETENTION_DAYS}" -delete
echo "Backup completed: ${DATE}"
Make it executable:
sudo chmod +x /usr/local/sbin/backup-vps.sh
Run the script once manually (sudo /usr/local/sbin/backup-vps.sh) to confirm it works, then schedule it to run daily (example: 03:00). Edit root crontab:
sudo crontab -e
Add the line below:
0 3 * * * /usr/local/sbin/backup-vps.sh >> /var/log/backup-vps.log 2>&1
Tip: Check logs after the first run: tail -n 50 /var/log/backup-vps.log
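Since the cron job appends to /var/log/backup-vps.log indefinitely, consider rotating it. A sketch for /etc/logrotate.d/backup-vps (retention values are examples):

```
/var/log/backup-vps.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```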
Do not keep backups only on the same VPS. A basic and reliable approach is copying backups to a second server using rsync over SSH.
sudo ssh-keygen -t ed25519 -a 64 -f /root/.ssh/id_ed25519 -N ""
sudo ssh-copy-id backupuser@remote-server
sudo rsync -az /backup/ backupuser@remote-server:/remote-backup/$(hostname -s)/
If you want this automated, schedule it after the backup (example: 03:30):
30 3 * * * rsync -az /backup/ backupuser@remote-server:/remote-backup/$(hostname -s)/ >> /var/log/backup-vps-rsync.log 2>&1
At least occasionally, do a test restore into a temporary directory. This is the fastest way to catch permission issues, missing paths, or broken archives.
sudo mkdir -p /tmp/restore-test
sudo tar -xzf /backup/$(hostname -s)-files-$(date +%F).tar.gz -C /tmp/restore-test
ls -la /tmp/restore-test | head
And verify checksums if you use them:
cd /backup
sha256sum -c $(hostname -s)-files-$(date +%F).tar.gz.sha256
Keep /backup private: chmod 700 and root-owned.

If backups take too long, CPU usage spikes during compression, or disk I/O becomes a bottleneck, you’ll get more predictable automation on a stronger Linux VPS with better storage performance and additional resources, especially if you run heavy databases or multiple websites.
For most sites, daily backups are the minimum. If data changes frequently (orders, messages, uploads), consider more frequent database dumps (e.g., every 1–6 hours) with a longer retention policy.
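As a sketch, an hourly MySQL dump via root’s crontab could look like this (note that % must be escaped as \% inside crontab entries; the hour-stamped filename gives you a rolling 24-hour window without extra cleanup):

```
0 * * * * mysqldump --all-databases --single-transaction | gzip > /backup/mysql-hourly-$(date +\%H).sql.gz
```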
Provider snapshots alone are not a full backup strategy. They can be useful, but you still want independent, offsite copies and restore testing.
A simple and effective approach is 7 daily backups plus weekly/monthly archives for longer retention, depending on your business needs and storage budget.
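One way to sketch that on top of the daily script above: promote Sunday’s archive into a weekly folder and prune it on a longer schedule (BACKUP_DIR and HOST mirror the script’s variables; the 90-day window is an example):

```shell
# Keep Sunday's daily archive as a "weekly" copy, prune after ~90 days.
BACKUP_DIR="${BACKUP_DIR:-/backup}"
HOST="$(hostname -s)"
WEEKLY_DIR="${BACKUP_DIR}/weekly"
mkdir -p "${WEEKLY_DIR}"
if [ "$(date +%u)" = "7" ]; then  # date +%u: 1=Monday ... 7=Sunday
  cp -n "${BACKUP_DIR}/${HOST}-files-$(date +%F).tar.gz" "${WEEKLY_DIR}/" 2>/dev/null || true
fi
find "${WEEKLY_DIR}" -type f -mtime +90 -delete || true
```

Run it right after the daily backup (e.g., a second cron entry) so the weekly copy is always the freshest Sunday archive.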
The most common mistake is keeping backups only on the same VPS and never testing restores. Offsite copies plus restore testing are what turn “backups” into real recovery.
Want predictable backup jobs, stable performance, and enough headroom for archives and database dumps? Start with a reliable Linux VPS and automate daily backups + offsite sync from day one.