RabbitMQ with Docker Swarm
Name and Version
bitnami/rabbitmq:3.10.6
What steps will reproduce the bug?
Message persistence (queued messages) does not work when deploying the project with Docker Swarm, but it works fine when using docker-compose.
Here's the relevant part of my container spec:
```json
{
  "Image": "bitnami/rabbitmq:3.10.6",
  "Hostname": "rabbit4-db",
  "Mounts": [
    {
      "Type": "volume",
      "Source": "7etlowgmbhn4xnzu",
      "Target": "/bitnami",
      "VolumeOptions": {
        "DriverConfig": {
          "Name": "my-own-volume-provisioner",
          "Options": {
            "size": "5GB",
            "uid": "1001"
          }
        }
      }
    }
  ],
  "Isolation": "default"
}
```
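For comparison, this is roughly how the same single-mount deployment would look as a docker-compose/stack file. This is a sketch for readability only: the service and volume names are placeholders, while the image, hostname, driver, and driver options are carried over from the spec above.

```yaml
version: "3.8"

services:
  rabbitmq:
    image: bitnami/rabbitmq:3.10.6
    hostname: rabbit4-db
    volumes:
      # Single mount covering Bitnami's data directory
      - rabbitmq_data:/bitnami

volumes:
  rabbitmq_data:
    driver: my-own-volume-provisioner   # custom driver from the spec above
    driver_opts:
      size: "5GB"
      uid: "1001"
```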
Data volume contains:

```console
foo@bar: $ pwd
./rabbitmq/mnesia
foo@bar: $ readlink mnesia
/var/lib/rabbitmq/mnesia
foo@bar: $ ls -ltrha
lrwxrwxrwx 1 root root 24 Jul 28 13:01 mnesia -> /var/lib/rabbitmq/mnesia
```

Note that the `mnesia` entry inside the volume is a symlink to /var/lib/rabbitmq/mnesia, i.e. a path outside the /bitnami mount.
Publish and subscribe work properly, but when my Swarm service restarts for any reason, I lose all the messages in the queues.
What is the expected behavior?
I need my data to persist even after the container is removed.
What do you see instead?
Unfortunately, the messages in the queues are lost.
However, I followed the RabbitMQ docs by declaring queues with `{ durable: true }` and publishing messages with `{ persistent: true }`.
More information: I fixed my problem by mounting two volumes, one for /bitnami and another for /var/lib/rabbitmq/.
Do you think it's the right approach?
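For anyone hitting the same issue, here is a minimal sketch of that two-volume layout as a stack file. The volume names are placeholders, and the custom driver and its options are assumed to be the same as in the spec above:

```yaml
version: "3.8"

services:
  rabbitmq:
    image: bitnami/rabbitmq:3.10.6
    hostname: rabbit4-db
    volumes:
      # Bitnami's data directory
      - rabbitmq_bitnami:/bitnami
      # Where the mnesia symlink actually points, so queue data survives restarts
      - rabbitmq_mnesia:/var/lib/rabbitmq

volumes:
  rabbitmq_bitnami:
    driver: my-own-volume-provisioner
    driver_opts:
      size: "5GB"
      uid: "1001"
  rabbitmq_mnesia:
    driver: my-own-volume-provisioner
    driver_opts:
      size: "5GB"
      uid: "1001"
```

With this layout, the data behind queues declared with `{ durable: true }` should land on the second volume rather than in the container's writable layer.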
Docker Swarm is not officially supported, since we do not test our images with that technology; Bitnami containers are tested as part of docker-compose and Helm charts.
For questions about the use of the technology or infrastructure, we highly recommend checking the forums and user guides made available by the project behind the application or the technology.
That said, we will keep this ticket open until the stale bot closes it, just in case someone from the community adds some valuable info.
This Issue has been automatically marked as "stale" because it has not had recent activity for 15 days. It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.