docker-db-backup
Add script for restore
Description of the feature Just as this project contains a great script for running backups, it would also be great to include a script for restores. It should:
- restore a specific backup file
- use the same environment variables as the backup to determine database type and connectivity
- if the backup is stored on S3, download it and verify its MD5 hash first
- if compressed, uncompress it based on the file extension
- restore it to the database
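For illustration, the extension-based decompression step could be sketched roughly like this (the `decompress` helper and file names are hypothetical, not part of the project):

```shell
#!/bin/bash
# Sketch: pick a decompressor based on the backup file's extension and
# stream the plain SQL to stdout. Extensions mirror the common
# COMPRESSION options (gzip, bzip2, xz, or none).
decompress() {
  local file="$1"
  case "$file" in
    *.sql.gz)  gzip  -dc "$file" ;;  # gzip-compressed dump
    *.sql.bz2) bzip2 -dc "$file" ;;  # bzip2-compressed dump
    *.sql.xz)  xz    -dc "$file" ;;  # xz-compressed dump
    *.sql)     cat       "$file" ;;  # plain SQL, nothing to do
    *)         echo "unknown extension: $file" >&2; return 1 ;;
  esac
}
```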
Benefits of feature A ready-to-go restore script would be easy and generic to run, instead of having to run all the database-specific restore commands by hand.
Please consider this; it would be very practical.
A script to list the available backups in the S3 bucket would also be useful.
I've added the basics of a restore script in version 3.0.0. However, S3 restore is not integrated. If there is some support, I can build on it.
This is great! Thank you very much!
Additional S3 list, download, and verify options would also be very useful. The AWS CLI supports all the commands you need, such as ls for listing and cp for downloading. See https://gitlab.com/-/snippets/1967872
A hash check is also very important before restoring.
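Using the same S3_* environment variables the backup script already reads, the list and download steps could look something like this (the helper names and variable values are illustrative, not part of the image):

```shell
#!/bin/bash
# Sketch: list and fetch backups with the AWS CLI, reusing the
# container's S3_HOST, S3_BUCKET, and S3_PATH environment variables.
list_backups() {
  # "aws s3 ls" prints one line per object under the given prefix
  aws --endpoint-url "https://${S3_HOST}" s3 ls "s3://${S3_BUCKET}/${S3_PATH}/"
}

download_backup() {
  local file="$1"
  # "aws s3 cp" copies a single object to the local backup directory
  aws --endpoint-url "https://${S3_HOST}" s3 cp \
    "s3://${S3_BUCKET}/${S3_PATH}/${file}" "/backup/${file}"
}
```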
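A minimal sketch of such a check, assuming the checksum was stored alongside the backup as <file>.md5 in md5sum format (the `verify_md5` helper is illustrative):

```shell
#!/bin/bash
# Sketch: verify a backup's MD5 before restoring. Assumes a sidecar
# <file>.md5 written by md5sum ("<hash>  <name>"). Returns non-zero
# on mismatch so the caller can abort the restore.
verify_md5() {
  local file="$1"
  local expected actual
  expected=$(awk '{print $1}' "${file}.md5")   # stored hash
  actual=$(md5sum "$file" | awk '{print $1}')  # hash of local file
  if [ "$expected" != "$actual" ]; then
    echo "MD5 mismatch for $file - aborting restore" >&2
    return 1
  fi
}
```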
I have written my own very basic restore-from-S3 script that:
- accepts just the name of the file to restore as a parameter
- downloads the file from S3, using the container's env variables
- restores the file to the database, using @tiredofit's script and the container's env variables
#!/bin/bash
# restore db
# Syntax: db-restore.sh filename.sql.xz
if [ -z "$1" ]; then
echo "Syntax: db-restore.sh filename.sql.xz"
exit 1
fi
docker exec -e RESTORE_FILE="$1" docker-db-backup-container bash -c '
export AWS_ACCESS_KEY_ID="${S3_KEY_ID:-$(cat "${S3_KEY_ID_FILE}")}" AWS_SECRET_ACCESS_KEY="${S3_KEY_SECRET:-$(cat "${S3_KEY_SECRET_FILE}")}" AWS_DEFAULT_REGION="${S3_REGION}";
aws --endpoint-url "https://${S3_HOST}" s3 cp "s3://${S3_BUCKET}/${S3_PATH}/${RESTORE_FILE}" "/backup/${RESTORE_FILE}" ${s3_ssl} ${s3_ca_cert} ${S3_EXTRA_OPTS};
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION;
restore "/backup/${RESTORE_FILE}" "${DB_TYPE}" "${DB_HOST}" "${DB_NAME}" "${DB_USER}" "${DB_PASS:-$(cat "${DB_PASS_FILE}")}" "${DB_PORT}"'
What is missing:
- a hash check, as @gpetrov suggested. I might add it in the future.
- cleanup of the locally downloaded backup file after a successful restore. I might also add it in the future, but it's not a big deal for me, since I'm dealing with a very small DB that I don't expect to restore very often.
- an interactive mode to list and select the available backup files. That's too complex for my use case: I just list the backup files externally, then copy and paste the name of the file I want to restore onto the command line.
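The cleanup-on-success part could be handled with a small wrapper like the one below (here `restore_cmd` is a hypothetical stand-in for the container's restore script, not its real name):

```shell
#!/bin/bash
# Sketch: delete the locally downloaded backup only after the restore
# succeeds, keeping it around for inspection when the restore fails.
restore_and_cleanup() {
  local file="$1"
  if restore_cmd "$file"; then
    rm -f "$file"  # restore succeeded, local copy no longer needed
  else
    echo "restore failed; keeping $file for inspection" >&2
    return 1
  fi
}
```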
It is working for me, but I have not thoroughly tested it with all parameters and env variables.
I hope it helps, though I understand it may take a bit more work to integrate the S3 download with the rest of the restore script: performing the download only if [ "$BACKUP_LOCATION" = "S3" ] and checking for possible error conditions that I haven't handled.