prometheus-podman-exporter
Bump github.com/containers/common from 0.60.2 to 0.60.3
Bumps github.com/containers/common from 0.60.2 to 0.60.3.
Release notes
Sourced from github.com/containers/common's releases.
v0.60.3
What's Changed
- [v0.60] Bump c/image to v5.32.2, c/common to v0.60.2 by @TomSweeneyRedHat in containers/common#2127
- Some pkg/netns improvements by @Luap99 in containers/common#2170

Full Changelog: https://github.com/containers/common/compare/v0.60.2...v0.60.3
Commits
- 8264002 Bump to v0.60.3
- 2776f6b pkg/netns: remove NewNSWithName()
- 8a5b951 pkg/netns: add NewNSFrom()
- 50870e9 pkg/netns: ensure makeNetnsDir is race free
- 322f2c2 pkg/netns: split out makeNetnsDir logic
- 52c82b1 Merge pull request #2127 from TomSweeneyRedHat/dev/tsweeney/v0.60.2
- See full diff in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
- @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Hi, thanks for opening this issue. Not removing networks when stopping the service was a design decision. The problem is that I don't remember why we made this decision. @alexlarsson @rhatdan @vrothberg any chance you remember? Do you see an issue with removing the network when stopping the service?
@Luap99 Thoughts ^^
I guess it could have been used by another container, but podman would fail on removal in that case.
First, this is already documented:
Please note that stopping the corresponding service will not remove the podman network. In addition, updating an existing network is not supported. In order to update the network parameters you will first need to manually remove the podman network and then restart the service.
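In practice, the documented update workflow looks roughly like this (a shell sketch; mynet.network, the generated mynet-network.service unit, and the default systemd-mynet network name are assumptions based on Quadlet's naming conventions):

# stop the generated network unit; it does not remove the network
systemctl --user stop mynet-network.service
# remove the network manually
podman network rm systemd-mynet
# edit mynet.network as needed, then regenerate units and restart
systemctl --user daemon-reload
systemctl --user start mynet-network.service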
The dependency management order would be complicated I think. And this isn't strictly limited to networks, AFAIK volume units, image and build units would behave the same?
Overall there seems to be use cases where this is required and would make things much simpler, i.e. https://github.com/linux-system-roles/podman/pull/155 https://github.com/linux-system-roles/podman/pull/160.
A network can be used by many containers, so it is started when the first container is started, but I don't see a way to ensure it is stopped when the last container on the network is stopped. And wouldn't systemd stopping the unit implicitly on shutdown remove the network every time? How does the ordering work there? The command would fail if the network is stopped before all containers. People monitoring failed units would definitely not like this.
And this isn't strictly limited to networks, AFAIK volume units, image and build units would behave the same?
Yes, you are right. All oneshot services (.network, .volume, .image and .build) do not remove the resource they created.
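For reference, the service Quadlet generates for a hypothetical mynet.network file is a oneshot roughly like the following (an excerpt sketch, not the full generated unit; systemd-mynet reflects Quadlet's default network name prefix):

mynet-network.service (generated, excerpt):
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/podman network create --ignore systemd-mynet
# no ExecStop line, so stopping the unit leaves the network in place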
The dependency management order would be complicated I think
I'm not sure about that. Isn't this for systemd to worry about? Of course assuming all containers that use the resource are managed via Quadlet.
Now, assuming we want resources to be removed, which ones? For sure, we wouldn't want to delete a volume. Similarly (though a bit less strongly), we wouldn't want to delete an image that was already pulled or built. So it seems that .network is the only one that can be considered. I'm not sure it's worth making it an exception.
WDYT?
I'm not sure about that. Isn't this for systemd to worry about? Of course assuming all containers that use the resource are managed via Quadlet.
Well, if you can tell me how to configure these systemd dependencies so that they work correctly, sure. I don't see how this would work, but I haven't spent a lot of time on it.
Well, if you can tell me how to configure these systemd dependencies so that they work correctly, sure. I don't see how this would work, but I haven't spent a lot of time on it.
First, users assume that Quadlet configures these dependencies correctly. If it doesn't, we should address it.
As for how, the following Quadlet types support the Network key: .container, .kube, .build and .pod. If the value of the key ends with .network, Quadlet derives the name of the network to use in the podman command and the name of the generated service, and sets Requires and After so that the unit depends on the service of the .network file.
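For illustration, a hypothetical app.container that references app.network would yield a generated app.service carrying roughly the following dependency settings (a sketch based on Quadlet's documented naming, where foo.network generates foo-network.service; the real generated units contain more lines):

app.container:
[Container]
Image=registry.fedoraproject.org/fedora:40
Network=app.network

app.service (generated, excerpt):
[Unit]
Requires=app-network.service
After=app-network.service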
The issue here is not the start path but the stop path. If we add an ExecStop that removes the network to the unit, AFAIK systemd will trigger that on its own (e.g. on shutdown), so how can we guarantee that it only removes the network after all containers on the network were stopped/removed? And if a user manually calls systemctl stop on the network unit, would it correctly stop/remove all containers first or error out? It is not clear to me how this would look by just using systemd dependencies.
AFAIK this is exactly what systemd knows how to do. I've tried the following:
fedora-isolated.network:
[Network]
fedora-isolated.container:
[Container]
Image=registry.fedoraproject.org/fedora:40
Exec=bash -c "sleep inf"
Network=fedora-isolated.network
- Starting the container's service starts both services.
- Stopping the container's service stops only that service.
- Stopping the network's service first stops the container's service and only then the network's one (I validated the order using the journalctl logs; a sketch of the commands follows).
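A minimal way to reproduce that check (a shell sketch; fedora-isolated-network.service is the unit name Quadlet should generate for the fedora-isolated.network file above):

# start the container unit; the network unit is pulled in via Requires=
systemctl --user start fedora-isolated.service
# stop the network unit; systemd stops the dependent container unit first
systemctl --user stop fedora-isolated-network.service
# inspect the ordering of the stop events in the journal
journalctl --user -u fedora-isolated.service -u fedora-isolated-network.service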
Sure, this is not a full blown test. But, again, this is why Podman relies on systemd and doesn't implement this on its own.
But, again, this is why Podman relies on systemd and doesn't implement this on its own.
By no means do I want this implemented in podman. I simply wasn't sure if the 1-to-N dependency mapping works properly with After/Requires in systemd, and what happens if there is a direct stop call on the network unit.
If systemd by default waits for all containers to be stopped and removed first, then I see no reason not to have the ExecStop command on the network.
And yes, I agree we do not want to remove volumes and images, as this would be rather destructive/expensive. As such, the question to me is: is it worth it to special-case network units to do this?
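Concretely, the proposal amounts to adding a teardown command to the generated oneshot unit, along these lines (a sketch of the idea under discussion, not current Quadlet output; names are hypothetical):

foo-network.service (generated, excerpt):
ExecStart=/usr/bin/podman network create --ignore systemd-foo
ExecStop=/usr/bin/podman network rm systemd-foo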
I double checked now with 2 .container files. Stopping the network service first stops the container services and waits for them to stop. Only once they stop, the network service is stopped.
As such, the question to me is: is it worth it to special-case network units to do this?
Yes, this is my question as well
If systemd by default waits for all containers to be stopped and removed first, then I see no reason not to have the ExecStop command on the network.
I believe from an end-user perspective it would be a good idea to implement ExecStop to have a coherent experience with other systemd behaviour.
Should I open a pull request?
Since we will not do the same for .volume, .image or .build, we won't have a consistent behavior. So, the question is what's more/less consistent.
This caught me out! I thought that stopping the service would cause the network to be torn down as that seems like the most logical behavior to me.
@Chaz6 I understand what you're saying. But, also see my previous comment: https://github.com/containers/podman/issues/23678#issuecomment-2315487930
Since we will not do the same for .volume, .image or .build, we won't have a consistent behavior. So, the question is what's more/less consistent.
Maybe we can add something to Quadlet files, like:
[X-Network]
UpdateExists=true
DeleteWhenStop=true
@jyxjjj Thanks for the suggestion.
Not sure what you had in mind for UpdateExists. But, DeleteWhenStop sounds good. Care to open a PR?
Sorry, I know a little Go but I'm not good at it; I am a PHP developer, so I cannot submit a PR for such a substantial project. It is still hard for me to read the sources.
And UpdateExists means 'podman network update'.
Waiting for others' contribution. Thank you for your invitation.
Thanks for clarifying. I think update will be a bit too complicated to implement in a single line, as it will require checking for the existence of the network. In addition, according to the man page, the command is very limited:
Allow changes to existing container networks. At present, only changes to the DNS servers in use by a network is supported.
NOTE: Only supported with the netavark network backend.
So, I think DeleteOnStop should do.
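For context, the update command discussed above is limited to DNS server changes, e.g. (a shell sketch; requires the netavark backend, and systemd-mynet is a hypothetical network name):

# add or remove a DNS server on an existing network
podman network update --dns-add 192.168.55.1 systemd-mynet
podman network update --dns-drop 192.168.55.1 systemd-mynet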