
Stopping containers that are part of a stack during backup replicates containers


What's the recommended way to prevent containers that are part of a service, and are stopped during backup, from restarting and effectively scaling up those services?

e.g. the PGADMIN service is stopped during backup, the backup runs, and after it completes I have 2 instances of the PGADMIN service running when I only require 1

Cozmo25 avatar Jul 22 '20 03:07 Cozmo25

Also curious about this, as I'm investigating whether this tool will also help solve my backup needs for Docker local named volumes in Swarm clusters...

prologic avatar Aug 20 '20 00:08 prologic

@prologic I was able to change the restart_policy setting to “on-failure”, which resolved this problem: https://docs.docker.com/compose/compose-file/#restart
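A minimal sketch of what I mean, assuming a compose file roughly like this (the service and image names are just placeholders):

```yaml
services:
  pgadmin:
    image: dpage/pgadmin4    # placeholder image
    restart: on-failure      # plain Compose form, per the linked docs
    deploy:                  # equivalent setting when using docker stack deploy
      restart_policy:
        condition: on-failure
```

The idea is that with on-failure, a container that exits cleanly (as it does when the backup tooling stops it) doesn't trigger an automatic replacement, so no duplicate instance appears.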

I did encounter other problems with my service names not being re-registered with my Traefik proxy after the restart, but that's another issue

Cozmo25 avatar Aug 20 '20 00:08 Cozmo25

That would be a bit of a blocker for me as I also use Traefik as my ingress. Hmmm 🤔

prologic avatar Aug 20 '20 03:08 prologic

Have to say I haven't thought about the interactions with orchestrators at all.

So if you figure out elegant solutions, feel free to post them here, and I'll try to update the README accordingly.

jareware avatar Oct 19 '20 10:10 jareware

Old thread, but still a relevant problem: all of my containers are deployed with docker stack deploy, and I've tried a couple of things. I have the container label set to docker-volume-backup.stop-during-backup=true, and the corresponding /var/run/docker.sock mounted for the backup container:
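Something along these lines (a sketch; the service names, images, and volume names are placeholders):

```yaml
services:
  app:
    image: postgres:13   # placeholder for the service whose volume gets backed up
    volumes:
      - app-data:/var/lib/postgresql/data
    labels:
      # tells docker-volume-backup to stop this container during the backup
      - docker-volume-backup.stop-during-backup=true

  backup:
    image: futurice/docker-volume-backup   # check the README for the current tag
    volumes:
      - app-data:/backup/app-data:ro                  # the data to archive
      - /var/run/docker.sock:/var/run/docker.sock:ro  # lets backup.sh stop/start labeled containers

volumes:
  app-data:
```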

When I have my restart_condition: set to on-failure, the backup successfully stops the containers as expected, but they go into a Complete state. Watching the output of /backup.sh, I see it successfully archives the volume data, then prints [info] Starting containers back up followed by the container IDs, but it does not actually start the containers; they stay in the Complete state. Bummer

I tried setting restart_condition to any, and that's not great either: when the backup stops the containers, they are immediately re-deployed, which means the replacements are touching the volume data before the backup job is done.

One workaround I found is to keep restart_condition at any (Swarm's restart_policy only accepts none, on-failure, or any) and set delay: 60s. In my case that's long enough for the backup job to complete; the Docker Swarm orchestrator spins up a replacement container only after the job has finished (it could still be uploading at that point, but that doesn't matter).
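In compose terms (the 60s value is just what happened to be long enough for my backup; tune it to your own backup duration):

```yaml
deploy:
  restart_policy:
    condition: any   # Swarm will always reschedule the stopped task...
    delay: 60s       # ...but waits 60s first, long enough here for the archive step to finish
```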

Has anyone figured out how to have the backup container successfully manage the startup of the stopped container instance when using docker stack deploy?

OMGTheCloud avatar Nov 10 '21 20:11 OMGTheCloud

Yeah, using a fixed delay isn't great, but at least it seems to work.

Can't say I have better ideas, sorry.

jareware avatar Dec 28 '21 18:12 jareware