Podman Compose Prevents Updating a Single Service Due to Dependency Constraints
Describe the bug
When using Podman Compose, it is not possible to update a single service (e.g., app) without affecting dependent services (e.g., proxy) due to strict dependency enforcement. This issue occurs when attempting to rebuild and restart a service with dependent containers, even if the service is stopped. In larger stacks with multiple interdependent services, this forces a complete stack shutdown to update a single service.
This behavior contrasts with Docker Compose, where individual services can be updated without impacting dependencies.
To Reproduce
- Setup:
- Create a directory structure:
.
|-docker-compose.yaml
|-modules
| |-app
| | |-index.html
| | |-Dockerfile
| |-proxy
| | |-Dockerfile
| | |-index.html
- Both Dockerfiles are identical:
FROM docker.io/nginx:alpine
COPY ./index.html /usr/share/nginx/html/index.html
- Create modules/app/index.html:
App Version 1
- Create modules/proxy/index.html:
Proxy Version 1
- Create the docker-compose.yaml:
version: '3.8'
services:
  app:
    container_name: "app"
    build:
      context: ./modules/app
      dockerfile: Dockerfile
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:80"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 5s
    networks:
      - app-net
  proxy:
    container_name: "proxy"
    build:
      context: ./modules/proxy
      dockerfile: Dockerfile
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:80"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 5s
    networks:
      - app-net
    depends_on:
      app:
        condition: service_healthy
networks:
  app-net:
    driver: bridge
- Initial Run:
- Build and start the stack:
podman-compose build
podman-compose up -d
- Verify app content:
podman exec -it app sh -c "curl http://localhost"
Output should be App Version 1
- Update Attempt:
- Modify modules/app/index.html (you may use sed -i 's/App Version 1/App Version 2/' ./modules/app/index.html):
App Version 2
- Rebuild and update app:
podman-compose build app && podman-compose down app && podman-compose up app -d
- This results in errors:
Error: container <app_container_id> has dependent containers which must be removed before it: <proxy_container_id>: container already exists
Error: creating container storage: the container name "app" is already in use by <app_container_id>. You have to remove that container to be able to reuse that name: that name is already in use
- Check app content again:
podman exec -it app sh -c "curl http://localhost"
Output: Still App Version 1
- Problem:
- The app container cannot be removed or recreated because proxy depends on it, even when app is stopped (see the raw-podman sketch below).
- Running podman-compose up -d app restarts the old container instead of creating a new one with the updated image.
- Updating app requires stopping and removing the entire stack, which is impractical for larger stacks.
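To make the failure mode concrete, this is roughly what the update attempt amounts to at the plain-podman level (a sketch of equivalent commands, not the literal calls podman-compose issues):
podman stop app                # stopping succeeds; dependents are not checked here
podman rm app                  # fails: the proxy container still requires app
podman create --name app ...   # fails: the name "app" is still held by the old container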
Expected behavior
In Docker Compose, a single service can be rebuilt and restarted without affecting its dependencies using:
docker-compose up -d --force-recreate --no-deps <service>
Podman Compose should offer similar functionality, allowing individual service updates without requiring the entire stack to be taken down.
Actual behavior
Podman Compose enforces dependencies strictly, preventing the removal or recreation of a service if it has dependent containers. This makes it impossible to update a single service without stopping and removing all dependent services, leading to unnecessary downtime.
Output
podman-compose version
podman-compose version 1.1.0
podman version
Client: Podman Engine
Version: 4.3.1
API Version: 4.3.1
Go Version: go1.19.8
Built: Wed Dec 31 21:00:00 1969
OS/Arch: linux/amd64
Environment:
- OS: Linux / Debian 12
Additional context
In stacks where a service like proxy depends on multiple services (e.g., 10+ containers), updating a single service requires shutting down the entire stack. This is inefficient and causes significant operational disruption, especially for users migrating from Docker Compose.
If it is a problem with podman and not actually with podman-compose, then how are you guys actually updating images without destroying the entire stack? I will remove dependencies for now as a "solution"...
Is this a problem with roots in libpod? Any workarounds?
So the issue is that when taking down the "app" service, podman-compose tries to rm it; this fails due to the dependency from the proxy container. The failure is then somewhat ignored by returning a 0 exit code.
When bringing the app service back up, the container cannot be recreated because the old app container was never deleted.
Some things I checked to use as a workaround:
- Force removing the app service: fails.
- Checking whether the dependency on app can be removed from proxy temporarily while deleting: no such option.
Checking the code, we actually do have an option to ignore dependencies, but it is currently only for internal use and not exposed.
I can't see any way to recreate the app container when proxy is depending on it.
So IMO the question is whether we should expose the ignore-deps option when removing a container which could then be used via podman-compose.
@Luap99 WDYT mate?
Thanks for the feedback! I’ve tried every parameter I could find to resolve this, but nothing has worked. I get why dependencies should be respected, but this limitation becomes a real issue in larger stacks. Let me explain why this matters and propose a solution.
Why This Is a Problem for Larger Stacks
Consider this service dependency graph:
             +---------+
             |  proxy  |
             +---------+
               /     \
              /       \
     +----------+   +----------+
     | service A|   | service B|
     +----------+   +----------+
                     /    |    \
                    /     |     \
            +-------+ +-------+ +-------+
            |  B1   | |  B2   | |  B3   |
            +-------+ +-------+ +-------+
If I need to update service A (e.g., to a new image), podman-compose forces me to tear down the entire stack: proxy, service B, and all of service B's dependencies (B1, B2, B3, etc.). This is a problem because:
- Scale: If service B has 10+ dependent services, restarting everything takes significant time (e.g., 5+ minutes vs. 3 seconds for just service A).
- Disruption: Taking down proxy (e.g., a caddy or nginx container) interrupts all services, even those unrelated to the update.
- Iteration: When iterating on service A, the long rebuild times slow development.
In contrast, Docker Compose respects depends_on but allows a service to be briefly stopped and replaced without affecting the whole stack. With podman-compose, I can’t recreate service A because its container can’t be removed while proxy depends on it.
Since Podman already has an --ignore-deps option internally, could podman-compose expose it, perhaps tied to --force-recreate or a new flag? (A sketch of the intended invocation follows the list below.) This would let users remove and recreate a specific container (like service A) without disrupting the stack. It's a small change that would:
- Improve flexibility for managing dependencies.
- Reduce downtime in larger stacks.
- Align podman-compose closer to Docker Compose’s behavior.
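To make the ask concrete, the invocation I have in mind mirrors the Docker Compose command quoted above (hypothetical usage; whether podman-compose accepts or honors these flags for this purpose is exactly what is in question here):
podman-compose up -d --force-recreate --no-deps service_a   # replace only service_a; leave proxy, service B and B1-B3 running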
Right now, to circumvent the problem, I've just disabled all of my depends_on entries so I can make changes and iterate on my containers, building and recreating them without needing to destroy the whole stack.
It seems a similar problem was cited in https://github.com/containers/podman/issues/18575 and https://github.com/containers/podman/issues/23898
In the latter the user Luap99 questions this:
So now you ended up with a broken container, podman simply does not allow such things and really see no reason why we ever should honestly. To me this sounds like a docker bug. Why would you allow to remove a container that is still in use by another?
I guess in this case, where you are simply recreating the container with a new, up-to-date image, it is a valid problem, given that it really isn't a great workflow to have to destroy the entire stack.
But to accomplish this, the service needs not only to be stopped but to be erased/deleted, so that a new container with the new image can take its place. Podman won't let the container be deleted because of the dependency.
And I mean... If proxy depends on Service A, why does it let Service A go down in the first place? The container can go down and podman won't complain; it only complains when the container gets deleted. But what's the difference between it being down and deleted? It's "offline" in both cases anyway.
I think allowing a dependent container to be recreated makes more sense in the podman-compose case than in the context of a sysadmin running podman rm --force on a container they don't know has dependencies.
But as podman-compose simply consumes podman, I don't think we can expose one without allowing the other use case.
I think your "Why This Is a Problem for Larger Stacks" diagram is interesting for the discussion though so thank you for adding it here!
My opinion hasn't changed; podman's dependency model is rather strict and wired deeply into our stack. I don't see that changing.
I acknowledge the use case but maybe then declaring a dependency is wrong.
depends_on:
  app:
    condition: service_healthy
AFAIK depends_on is a compose thing and not a regular docker or podman option, so I don't see why that option alone would trigger a dependency internally in podman. As such, this is presumably something podman-compose is passing explicitly to us via --requires?
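If so, the depends_on block above presumably ends up as something like the following when the proxy container is created (a rough sketch; image name and most other flags omitted), and that --requires is what creates the hard libpod dependency:
podman create --name proxy --requires app <proxy-image>   # plus network, healthcheck, etc.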
Note podman-compose is a community project and not maintained by us podman maintainers.
And I mean... If proxy depends on Service A, why does it let Service A go down in the first place? The container can go down and podman won't complain; it only complains when the container gets deleted. But what's the difference between it being down and deleted? It's "offline" in both cases anyway.
That is indeed problematic but naturally we cannot prevent a container process from exiting. The only option we would have in such case is to stop all other containers in the dependency chain but we never started doing that.
The reason deletion is so important is that it bricks your other container otherwise. The deps are resolved to unique IDs, so even if we were to recreate a container it would have a new ID; as such, all dependencies that point to the old ID in our DB would be broken, as that ID no longer exists.
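You can see this on an existing stack: the stored dependency references the container ID, not the name. Something like the following shows it (illustrative, with the output shape approximated and the ID shortened):
podman inspect proxy | grep -A 2 '"Dependencies"'
#   "Dependencies": [
#        "3f1c2a..."     <- ID of the current app container; a recreated app would get a new ID
#   ],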
Hi @Luap99 and @ninja-quokka, thanks for your detailed responses!
I’d like to walk you through my real-world use case to show why this limitation creates significant friction, especially in larger stacks, and propose a way forward that could balance both perspectives.
My Setup
Here’s the stack I’m working with, similar to the diagram I shared earlier:
- Proxy (caddy): Exposed to the internet, routes requests like docs.mydomain.com to the right service.
- Service A (docs): A static site served by caddy or nginx, depended on by the proxy.
- Service B (python API): A Python server exposing an API, which depends on a cryptocurrency node.
The dependency chain, defined via depends_on, ensures a proper startup order (a trimmed compose sketch follows the list below):
- The cryptocurrency node (a dependency of the Python API) and docs start first.
- The Python API starts once the cryptocurrency node is healthy.
- The proxy starts last, only after all services are up and running.
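In compose terms, the chain looks roughly like this (a trimmed sketch: service names are placeholders, and builds, ports, volumes, and healthchecks are omitted):
services:
  crypto-node:
    # ...
  docs:
    # ...
  api:
    depends_on:
      crypto-node:
        condition: service_healthy
  proxy:
    depends_on:
      docs:
        condition: service_healthy
      api:
        condition: service_healthy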
This setup works great for the initial deployment. The cryptocurrency node, which takes time to sync its ledger with the network, starts early, followed by the API, and finally the proxy—ensuring everything is accessible via the internet. So far, so good.
The Update Problem
The issue arises when I need to update just the docs service (e.g., to deploy a new image with updated content). Here’s my workflow:
podman-compose build docs && podman-compose down docs && podman-compose up docs -d
- In Docker Compose, this sequence:
  - Builds the new docs image.
  - Stops and removes the old docs container.
  - Recreates it with the new image, all without touching the proxy or other services.
- In Podman / Podman Compose, however:
  - podman-compose down docs only stops the docs container but doesn't remove it, because proxy depends on it.
  - podman-compose up docs -d fails to recreate the container, as the old one still exists.
- To update docs, I'm forced to run podman-compose down on the entire stack, which brings down:
  - The proxy, interrupting access to all services (even those unrelated to docs).
  - The Python API and its cryptocurrency node, which then needs minutes to resync, causing unnecessary downtime.
This is a major pain point:
- Development: Iterating on docs becomes slow and disruptive.
- Production: Even a minor update shouldn't require downtime for unrelated services.
Podman’s strictness forces me to choose between disabling depends_on (losing startup order) or accepting full-stack downtime.
Current Workarounds
- Disabling depends_on: As @Luap99 suggested, this "works" but breaks the startup order, which I need for reliability (sketched below).
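Concretely, this amounts to commenting out the depends_on block on the dependent service, e.g. against the reproduction compose file from earlier (a sketch):
  proxy:
    container_name: "proxy"
    build:
      context: ./modules/proxy
      dockerfile: Dockerfile
    networks:
      - app-net
    # depends_on:                  # disabled as a workaround: startup ordering is lost,
    #   app:                       # but app can now be rebuilt and recreated in place
    #     condition: service_healthy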
A Proposed Middle Ground
Since Podman already has an internal --ignore-deps option, could Podman-Compose expose it in a controlled way for specific operations? For example:
- Add a --force-recreate flag to podman-compose up that:
  - Stops and removes the target container (e.g., docs), even if it has dependents.
  - Recreates it with the new image.
  - Updates dependency references to the new container ID automatically.
This would:
- Preserve Podman’s strictness as the default behavior.
- Give users flexibility to update a single service without tearing down the stack.
- Avoid broken dependencies by managing the recreation process within Podman-Compose.
@Luap99, you mentioned that depends_on is a Compose construct, not native to Podman, and questioned why it triggers a hard dependency. I agree it's a Podman-Compose decision; could Podman-Compose handle depends_on more like Docker Compose, where it enforces startup order but doesn't block container recreation? Also, you noted that deletion breaks dependencies due to unique IDs. Could Podman-Compose mitigate this by re-linking dependents to the new ID during recreation?
@ninja-quokka, you suggested that allowing recreation in Podman-Compose makes sense, even if it's trickier with raw Podman commands. I think this supports the idea of a Podman-Compose-specific solution that leverages --ignore-deps internally without exposing it broadly.
My Questions
- Technical Feasibility: Is there a reason Podman-Compose can't use --ignore-deps for a recreation operation while updating dependency references?
- Alternative Approaches: How would you recommend updating docs in my stack without disabling depends_on or downing everything? Is there a best practice I'm missing?
- Dependency Design: If depends_on is the wrong tool here, what's the Podman-Compose way to enforce startup order and allow single-service updates?
Why I Care
I switched from Docker to Podman for its daemonless design and rootless capabilities, and I love it! I've even contributed to podman-compose around how it picks up the docker-compose.yaml file. Everything works. But this update workflow is the one sticking point keeping me from a seamless experience. I'd really value your thoughts on how to make this work smoothly, whether it's a feature tweak or a different approach to my stack.
Alright, I've found a "solution" for now. Just comment out the line:
podman_args.append(f"--requires={deps_csv}")
on line 1035, in the function container_to_args.
It seems to keep the startup order but doesn't actually pass --requires, so you can delete the container. This enables docker-compose-like behavior: updating and replacing containers in place with new images.
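For reference, the change amounts to this single line inside container_to_args() in podman_compose.py (sketch; surrounding code omitted):
# podman_args.append(f"--requires={deps_csv}")  # commented out: no hard libpod dependency is set, so the container can be removed and recreated in place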
I am using podman-compose version 1.1.0 with podman version 4.3.1.
I've tried running version 1.3.0, but I hit another error (it doesn't actually finish the command) and filed another issue: https://github.com/containers/podman-compose/issues/1178
This is the main reason I really don't like Podman and will probably choose not to use it. Not being able to build a new image and deploy it without bringing down all dependent services isn't reasonable. The dependency in the compose file has nothing to do with the images; rather, it's a dependency on startup.
To make matters worse, Podman doesn't stop services in parallel, which results in a long list of serialized stop commands and waiting...