Auto Docker volume backups
As a user, I want Ironmount to be capable of auto-discovering the Docker volumes on my host. I want to be able to use them as a backup source (volume), and Ironmount would (optionally) take care of stopping all containers using a volume before backing it up and starting them again right after the backup is complete.
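For reference, the Docker CLI already exposes enough for the discovery step. A minimal sketch (illustrative only, not how Ironmount would necessarily do it) that lists each volume together with the containers mounting it, using the documented volume filter of docker ps:

for vol in $(docker volume ls -q); do
    # containers (running or stopped) that mount this volume
    users=$(docker ps -a --filter "volume=$vol" --format '{{.Names}}' | tr '\n' ' ')
    echo "volume: $vol  used by: ${users:-<none>}"
done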
Some of us don't use Docker volumes; we bind-mount host directories directly. This was a huge challenge for me, since most Docker backup solutions don't support it, and they don't back up the compose file with the backup either.
Maybe the ability to stop containers before backup jobs would solve the "directory mount" issue. Or maybe running a custom script to stop specific containers before the backup job would also solve it.
I strongly agree with these requests. In the same spirit, I strongly suggest implementing a way of running a pre- and a post-backup script. For maximum ease of use, automatically stopping containers/stacks before the backup and restarting them after would be a godsend.
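As a sketch of what such hooks could look like (the stack path and script names are made up purely to illustrate the idea):

# pre-backup.sh: hypothetical hook, stops the stack gracefully before the backup runs
docker compose -f /opt/myapp/docker-compose.yml stop

# post-backup.sh: hypothetical hook, brings the stack back up once the backup is done
docker compose -f /opt/myapp/docker-compose.yml start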
+1; I also don't use volumes, only bind mounts.
AFAIK there is no way to tell, given a folder, whether it has been bind-mounted into a container or not. Or maybe I'm getting something wrong.
Backing up bind mounts is a matter of stopping containers and making a tar of your folders. It's much simpler than Docker volumes. You could mount the bind folder into Ironmount and have it back the folder up as usual; the functionality needed would then be an option to stop specific (or all) containers before running the backups. Maybe include a delay to make sure database containers are fully stopped.
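Roughly, that flow could look like this (container names and paths are placeholders; the -t flag extends docker stop's SIGTERM grace period so a database can flush to disk before being killed):

docker stop -t 60 myapp-db myapp-web
tar -czf "/backups/myapp-$(date +%F).tar.gz" -C /opt/myapp .
docker start myapp-db myapp-web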
Maybe I'm missing OP's point here. I was interested in the option to mount my stacks folder into Ironmount (in my case /opt) so that Ironmount can automatically detect all my docker-compose stacks within it. When backing up this folder, it would ideally stop each container, run a backup of that container's mounts/volumes, and then restart the container.
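Detecting those stacks could be as simple as scanning the mounted folder for compose files; a sketch only, assuming one compose file per stack directory:

find /opt -maxdepth 2 \( -name 'docker-compose.y*ml' -o -name 'compose.y*ml' \) |
while read -r file; do
    echo "found stack: $(dirname "$file")"
done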
Currently, I'm running this cronjob to stop all my Docker containers, copy the /opt folder (containing all my stacks and all mounts) to /opt_copy, and then restart my Docker services:
#!/bin/bash
set -e

LOGTAG="[backup_opt]"
echo "$LOGTAG ---- $(date) ----"

# File where running containers are stored temporarily
STATE_FILE="/run/backup_opt_running_containers"

# Detect currently running Docker containers
running_containers=$(docker ps -q)
if [ -n "$running_containers" ]; then
    echo "$LOGTAG Running containers: $running_containers"
    echo "$running_containers" > "$STATE_FILE"
    docker stop $running_containers
else
    echo "$LOGTAG No running containers found."
    : > "$STATE_FILE"
fi

# Ensure target directory exists
mkdir -p /opt_copy

echo "$LOGTAG Starting rsync..."
rsync -a --delete /opt/ /opt_copy/
echo "$LOGTAG rsync finished."

# Restart containers that were running earlier
if [ -s "$STATE_FILE" ]; then
    echo "$LOGTAG Restarting previously running containers..."
    while read -r cid; do
        [ -n "$cid" ] && docker start "$cid"
    done < "$STATE_FILE"
    echo "$LOGTAG Container restart completed."
else
    echo "$LOGTAG No containers were active before."
fi

echo "$LOGTAG Done."
After this job has run, Ironmount backs up the /opt_copy folder to my local and remote repositories. This ensures the snapshot Ironmount backs up was taken while all my Docker services were shut down.
Let's say we add, in the backup schedule settings, a list of containers with checkboxes, plus a custom input. You would select which containers should be stopped before the backup and started back up after it. Would this be enough for the use case?
For my use case: definitely 👍
IMO, this should be per-container only for standalone containers. For containers that are part of compose stacks, it should bring down all the containers in the stack simultaneously (gracefully), because the containers in a stack may depend on each other.
Additionally, it'd be great if there were a rolling schedule, one compose stack at a time, to minimise total downtime per service.
Finally, for services with large volumes, it'd be great if a first copy were taken while the service was running, and only then the compose stack/container taken down and a delta copy (same behaviour as rsync --delete) run against the while-running copy to correct any inconsistencies. That way, services with large volumes are only down while the delta is being applied, which should be virtually instant even for very large volumes.
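In shell terms, that two-pass idea might look roughly like this per stack (paths are illustrative; the point is that only the second rsync pass happens while the stack is down):

STACK=/opt/myapp
STAGING=/backups/staging/myapp
mkdir -p "$STAGING"

# Pass 1: bulk copy while the stack is still running (may be inconsistent)
rsync -a --delete "$STACK/" "$STAGING/"

# Pass 2: stop the stack, re-run rsync to transfer only the small delta,
# then bring everything back up; downtime is just the delta pass
docker compose -f "$STACK/docker-compose.yml" down
rsync -a --delete "$STACK/" "$STAGING/"
docker compose -f "$STACK/docker-compose.yml" up -d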