mgob
restore does not seem to work
I created a script which browses the storage and builds a menu to easily select which backup to restore... it then takes the selected backup and does the `curl -X POST` with the desired gz file...
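For context, the request the script ultimately fires is just a plain POST against mgob's HTTP API. A minimal sketch, where the host/port assume a local port-forward to the mgob service and the plan/archive names are examples taken from the logs below:

```shell
# Build the restore URL the way the script below does, then fire the request.
# Host/port assume `kubectl port-forward` to the mgob svc on 8090;
# PLAN and ARCHIVE are example values.
BASE_URL="http://localhost:8090"
PLAN="hourly"
ARCHIVE="hourly-1717761600.gz"
RESTORE_URL="$BASE_URL/restore/$PLAN/$ARCHIVE"
echo "$RESTORE_URL"
# curl -X POST "$RESTORE_URL"   # uncomment to actually trigger the restore
```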
time="2024-06-07T16:06:49Z" level=info msg="mgob dev.275"
time="2024-06-07T16:06:49Z" level=info msg="starting with config: &{LogLevel:info JSONLog:false Host: Port:8090 ConfigPath:/config StoragePath:/storage TmpPath:/tmp DataPath:/data Version:dev.275 UseAwsCli:false HasGpg:false}"
time="2024-06-07T16:06:49Z" level=info msg="mongodump version: 100.8.0 git version: 732ddfaa6b467ffcd5bfa69a455953320eed85f4 Go version: go1.21.1 os: linux arch: amd64 compiler: gc "
time="2024-06-07T16:06:49Z" level=info msg="NAME: mc version - manage bucket versioning USAGE: mc version COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...] COMMANDS: enable enable bucket versioning suspend suspend bucket versioning info show bucket versioning status FLAGS: --config-dir value, -C value path to configuration folder (default: \"/root/.mc\") [$MC_CONFIG_DIR] --quiet, -q disable progress bar display [$MC_QUIET] --no-color disable color theme [$MC_NO_COLOR] --json enable JSON lines formatted output [$MC_JSON] --debug enable debug output [$MC_DEBUG] --insecure disable SSL certificate verification [$MC_INSECURE] --limit-upload value limits uploads to a maximum rate in KiB/s, MiB/s, GiB/s. (default: unlimited) [$MC_LIMIT_UPLOAD] --limit-download value limits downloads to a maximum rate in KiB/s, MiB/s, GiB/s. (default: unlimited) [$MC_LIMIT_DOWNLOAD] --help, -h show help "
time="2024-06-07T16:06:50Z" level=info msg="aws-cli/1.29.44 Python/3.11.8 Linux/4.18.0-477.15.1.el8_8.x86_64 botocore/1.31.44 "
time="2024-06-07T16:06:50Z" level=info msg="gpg (GnuPG) 2.4.4 libgcrypt 1.10.2 Copyright (C) 2024 g10 Code GmbH License GNU GPL-3.0-or-later <https://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Home: /root/.gnupg Supported algorithms: Pubkey: RSA, ELG, DSA, ECDH, ECDSA, EDDSA Cipher: IDEA, 3DES, CAST5, BLOWFISH, AES, AES192, AES256, TWOFISH, CAMELLIA128, CAMELLIA192, CAMELLIA256 Hash: SHA1, RIPEMD160, SHA256, SHA384, SHA512, SHA224 Compression: Uncompressed, ZIP, ZLIB, BZIP2 "
time="2024-06-07T16:06:52Z" level=info msg="Google Cloud SDK 445.0.0 bq 2.0.97 bundled-python3-unix 3.9.16 core 2023.09.01 gcloud-crc32c 1.0.0 gsutil 5.25 "
time="2024-06-07T16:06:53Z" level=info msg="WARNING: You have 2 update(s) available. Consider updating your CLI installation with 'az upgrade' azure-cli 2.52.0 * "
time="2024-06-07T16:06:53Z" level=info msg="rclone v1.66.0 "
time="2024-06-07T16:06:53Z" level=info msg="Next tmp cleanup run at 2024-06-07 17:00:00 +0000 UTC"
time="2024-06-07T16:06:53Z" level=info msg="Next run at 2024-06-07 17:00:00 +0000 UTC" plan=hourly
time="2024-06-07T16:06:53Z" level=info msg="Next run at 2024-06-08 00:00:00 +0000 UTC" plan=daily
time="2024-06-07T16:06:53Z" level=info msg="Next run at 2024-06-09 00:00:00 +0000 UTC" plan=weekly
time="2024-06-07T16:06:53Z" level=info msg="Next run at 2025-02-03 00:00:00 +0000 UTC" plan=ondemand
time="2024-06-07T16:06:53Z" level=info msg="Starting HTTP server on port 8090"
time="2024-06-07T16:07:49Z" level=info msg="On demand backup started" plan=hourly
time="2024-06-07T16:07:50Z" level=info msg="Validation: restore backup with : mongorestore --archive=/tmp/hourly-1717776469.gz --gzip --host 127.0.0.1 --port 27017 --nsInclude e4t-lab-ha.* --nsFrom e4t-lab-ha.* --nsTo e4t-lab-ha-hourly.* " plan=hourly
time="2024-06-07T16:07:50Z" level=info msg="Validation: connect to mongodb://127.0.0.1:27017"
time="2024-06-07T16:07:50Z" level=info msg="Validation: collection names companies,files,auth_transfer_requests,devices,users,licenses,download_requests,license_requests,templates,tasks,wikis,guests"
time="2024-06-07T16:07:50Z" level=info msg="new dump" archive=/tmp/hourly-1717776469.gz err="<nil>" mlog=/tmp/hourly-1717776469.log plan=hourly
time="2024-06-07T16:07:50Z" level=info msg="Local backup finished filename:`/tmp/hourly-1717776469.gz`, filepath:`/storage/hourly/hourly-1717776469.gz`, Duration: 14.858378ms" plan=hourly
time="2024-06-07T16:07:50Z" level=info msg="Clean up temp finished Temp folder cleanup finished, `/tmp/hourly-1717776469.gz` is removed." plan=hourly
time="2024-06-07T16:07:50Z" level=info msg="On demand backup finished in 1.043913014s archive hourly-1717776469.gz size 632 kB" plan=hourly
time="2024-06-07T16:49:55Z" level=info msg="On demand restore started from /storage/hourly/hourly-1717761600.gz" plan=hourly
time="2024-06-07T16:49:55Z" level=info msg="Running restore command with : mongorestore --archive=/storage/hourly/hourly-1717761600.gz --gzip --host common-mongodb-headless.lab.svc.cluster.local --port 27017 -u \"USER\" -p \"PASS\" --nsInclude e4t-lab-ha.* " plan=hourly
time="2024-06-07T16:49:55Z" level=info msg="Validation: restore backup with : mongorestore --archive=/storage/hourly/hourly-1717761600.gz --gzip --host 127.0.0.1 --port 27017 --nsInclude e4t-lab-ha.* --nsFrom e4t-lab-ha.* --nsTo e4t-lab-ha-hourly.* " plan=hourly
time="2024-06-07T16:49:56Z" level=info msg="Validation: connect to mongodb://127.0.0.1:27017"
time="2024-06-07T16:49:56Z" level=info msg="On demand restore finished in 610.677032ms, restore from hourly-1717761600.gz size 597 kB" plan=hourly
What's happening? I have a MongoDB replica set with 1 primary, 1 secondary and 1 hidden member; I kept them all as they are, without bringing any of them down...
Also, taking that mongorestore line shown in the logs and running it from the CLI inside the mgob container in the pod leads to:
error connecting to host: failed to connect to mongodb://common-mongodb-headless.lab.svc.cluster.local:27017/: connection() error occurred during connection handshake: auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-1": (AuthenticationFailed) Authentication failed.
The user and password are correct, since the logs themselves printed that line, and the same credentials are used for the backups, which we tested in past days...
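One thing worth checking when credentials are known-good but SCRAM authentication still fails: which database mongorestore is authenticating against. A minimal sketch, assuming the backup user was created in `admin` (an assumption; adjust to your setup). The block only prints the command, so nothing is executed:

```shell
# Same restore command as in the logs, but with --authenticationDatabase set
# explicitly. "admin" is an assumption; match it to where the user was created.
cmd=(mongorestore
  --archive=/storage/hourly/hourly-1717761600.gz --gzip
  --host common-mongodb-headless.lab.svc.cluster.local --port 27017
  -u USER -p PASS
  --authenticationDatabase admin
  --nsInclude 'e4t-lab-ha.*')
printf '%s ' "${cmd[@]}"; echo
# "${cmd[@]}"   # uncomment to run it for real
```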
If anybody is interested in my script, here it is (it requires a port-forward to the svc on port 8090):
#!/bin/bash
# Base URL
BASE_URL="http://localhost:8090"
STORAGE_URL="$BASE_URL/storage"
RESTORE_URL="$BASE_URL/restore"
# Check if STORAGE_URL is reachable
if ! curl -sL -o /dev/null -w "%{http_code}" "$STORAGE_URL" | grep -q "200"; then
    echo "Error: $STORAGE_URL is not reachable or did not return a 200 status code."
    exit 1
fi
# Function to fetch directories from the storage
fetch_directories() {
    curl -sL "$STORAGE_URL" | awk -F'[<>"]' '/<a href=/{print $5}' | sed -e '/^$/d' -e 's:/$::'
}
# Function to fetch .gz files from the selected directory
fetch_files() {
    local directory=$1
    curl -sL "$STORAGE_URL/$directory" | awk -F'[<>"]' '/<a href=/{print $5}' | grep '\.gz$'
}
# Function to convert epoch timestamp to date and time
convert_epoch() {
    # GNU date wants -d @EPOCH; BSD/macOS date wants -r EPOCH — support both
    date -d "@$1" +"%Y-%m-%d_%H:%M:%S" 2>/dev/null || date -r "$1" +"%Y-%m-%d_%H:%M:%S"
}
# Fetch directories and prepare dialog input
directories=$(fetch_directories)
dialog_input=()
counter=1
# Prepare input for dialog (tag/item pairs, as an array so spaces survive)
while IFS= read -r line; do
    dialog_input+=("$counter" "$line")
    counter=$((counter + 1))
done <<< "$directories"
# Create dialog menu for directory selection
selected_directory=$(dialog --stdout --menu "Select a directory:" 0 0 0 "${dialog_input[@]}")
# Check if a directory was selected
if [ -z "$selected_directory" ]; then
    echo "No directory selected."
    exit 1
fi
# Get the selected directory name
directory_name=$(echo "$directories" | sed -n "${selected_directory}p")
# Fetch .gz files from the selected directory
files=$(fetch_files "$directory_name")
# Prepare dialog input for files
dialog_input=()
counter=1
file_list=()
while IFS= read -r file; do
    filename=$(basename "$file" .gz)
    # The epoch timestamp is the last dash-separated field of the filename
    timestamp=${filename##*-}
    human_readable_timestamp=$(convert_epoch "$timestamp")
    dialog_input+=("$counter" "$human_readable_timestamp")
    file_list+=("$file")
    counter=$((counter + 1))
done <<< "$files"
# Create dialog menu for file selection
selected_file_index=$(dialog --stdout --menu "Files in $directory_name:" 0 0 0 "${dialog_input[@]}")
# Check if a file was selected
if [ -z "$selected_file_index" ]; then
    echo "No file selected."
    exit 1
fi
# Get the selected file name
selected_file="${file_list[$((selected_file_index - 1))]}"
# Print the selected directory and file
echo "Selected directory: $directory_name"
echo "Selected file: $selected_file"
# Construct the restore URL
restore_url="$RESTORE_URL/$directory_name/$selected_file"
# Print the restore URL
echo "Restore URL: $restore_url"
# Ask user for confirmation to proceed
read -p "Are you sure you want to proceed with the POST request to this URL? (y/n): " confirmation
# Proceed if user confirms
if [ "$confirmation" = "y" ]; then
    curl -X POST "$restore_url"
    echo "POST request sent to $restore_url"
else
    echo "Operation cancelled."
fi
Some screenshots:
I am not sure. It did pass the simple test here.
https://github.com/maxisam/mgob/blob/86810b8b6dbbc6ae684ca44592a2f23ecc1c1f45/.github/workflows/build.yml#L167-L190
@maxisam I know; if you take a look at my logs you'll see it passed the test and restored into the MongoDB test container... it's the restore on the actual REAL MongoDB that never happens...
For now I created my own script that scales all backend deployments down to 0, saving the current replica count for each of them. It then checks which MongoDB pods are the primary and the secondary, and freezes the secondary. Then it does the restore pointing at the svc, which now effectively has only the primary active, to avoid any kind of conflicts. If the restore is OK, it unfreezes the secondary and scales the backend deployments back to their original counts. This works well, so I just use mgob for the backups, not for the actual restores...
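The manual flow described above can be sketched roughly like this. Every resource name here (namespace, deployment names, secondary pod name) is a placeholder, and it runs in dry-run mode by default, so it only prints what it would do:

```shell
#!/bin/bash
# Dry-run sketch of the manual restore flow; all names are placeholders.
DRY_RUN=1
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi; }

NS="lab"
BACKENDS="backend-api backend-worker"   # hypothetical deployment names
SECONDARY="common-mongodb-1"            # hypothetical secondary pod

# 1. Scale the backends to 0 (a real script would save the counts first)
for d in $BACKENDS; do
    run kubectl -n "$NS" scale deploy "$d" --replicas=0
done
# 2. Freeze the secondary so only the primary takes writes during the restore
run kubectl -n "$NS" exec "$SECONDARY" -- mongosh --eval 'rs.freeze(600)'
# 3. Restore through the service, which now effectively hits the primary only
run mongorestore --archive=/storage/hourly/hourly-1717761600.gz --gzip \
    --host common-mongodb-headless.lab.svc.cluster.local --port 27017
# 4. Unfreeze the secondary and scale the backends back up
run kubectl -n "$NS" exec "$SECONDARY" -- mongosh --eval 'rs.freeze(0)'
for d in $BACKENDS; do
    run kubectl -n "$NS" scale deploy "$d" --replicas=1   # saved count in reality
done
```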