podman: init module
Description
Fixes #4336. This module adds support for creating Podman containers and networks on Linux systems. It has been tested on NixOS, Ubuntu 24.04 and Arch (as of ~28/08/24), and should work on any home-manager compatible distro.
`podman.containers` aims to use an interface similar to NixOS `oci-containers`, but this module does not write the systemd units itself. Podman already includes a robust systemd unit generator, its Quadlet configuration interface, which I have opted to use.

`podman.networks` is implemented using the Quadlet unit generator as well. The module that builds the derivations is common to all Quadlets, so volumes and other resources should be easy to implement following networks as a guide.

Using the Quadlet generator vastly simplifies creating the systemd units that run the podman containers, because we simply don't have to. The Quadlet API, i.e. the options used here, is stable.
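For context, a Quadlet unit is a plain INI file that podman's generator expands into a full systemd service at daemon-reload time. A minimal sketch of what such a file looks like (key names follow the podman-systemd.unit man page; the file path and values here are illustrative, not what this module emits verbatim):

```ini
# ~/.config/containers/systemd/caddy.container (illustrative)
[Container]
Image=ghcr.io/n-hass/caddy-cloudflare:latest
PublishPort=8080:80
Network=caddy_routing

[Service]
Restart=always

[Install]
WantedBy=default.target
```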
Usage example
```nix
{ ... }:
let
  homeDir = "/home/myuser";
in {
  services.podman.networks.caddy_routing = {
    driver = "bridge";
    subnet = "172.21.1.0/24";
  };

  services.podman.containers.caddy = {
    image = "ghcr.io/n-hass/caddy-cloudflare:latest";
    description = "Caddy web server";
    environmentFile = "${homeDir}/caddy/.env";
    network = [ "caddy_routing" ];
    networkAlias = "caddy";
    ports = [
      "8080:80"
      "8443:443"
    ];
    volumes = [
      "${homeDir}/caddy/Caddyfile:/etc/caddy/Caddyfile:ro"
      "${homeDir}/caddy/config:/etc/caddy/config"
      "${homeDir}/caddy/data:/data"
    ];
    autoUpdate = "registry";
    extraConfig.Service = {
      TimeoutStopSec = 60;
    };
    addCapabilities = [ "NET_RAW" ];
  };
}
```
Checklist
- [x] Change is backwards compatible.
- [x] Code formatted with `./format`.
- [x] Code tested through `nix-shell --pure tests -A run.all` or `nix develop --ignore-environment .#all` using Flakes.
- [x] Test cases updated/added. See example.
- [x] Commit messages are formatted like `{component}: {description}`, followed by `{long description}`. See CONTRIBUTING for more information and recent commit messages for examples.

If this PR adds a new module

- [x] Added myself as module maintainer. See example.
Maintainer CC
(After moving from Draft -> Ready for review, I will not squash until ready to merge to keep track of reviewed changes)
Sorry it took so long, but I finally had some time to sit down and review this. Great work! As I was looking through it, though, the reasoning behind having as few options as is reasonable started to make sense: if changes were made to the upstream project, they would require an update to home-manager to fix. As such, I feel that it makes more sense to try to implement this in the same way that NixOS implements the following systemd options:
Doing it this way would mean that if, for example, `PublishPort` became `Port` in a future release of podman, users would be able to make this change on their own without waiting for us to fix it in HM. Additionally, we wouldn't have to worry about all the headache that would come with determining whether it should be `PublishPort` or `Port` on the user's system. Quadlet does seem pretty stable at this point, so I don't expect that literal change to happen, but something more likely is the addition of new arguments, which we would then have to add to HM.
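The module's `extraConfig` escape hatch (shown in the usage example) is exactly this kind of pass-through: if a Quadlet key were renamed or newly added, a user could set it directly without waiting for a module update. A sketch, where the key name `Port` is purely hypothetical:

```nix
services.podman.containers.caddy.extraConfig.Container = {
  # Hypothetical renamed/new Quadlet key, passed straight through to the unit
  Port = "8080:80";
};
```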
Looking forward to your feedback on this.
@bamhm182 Yeah, I get your point. I've just implemented Quadlet networks for this module too using the method you described (pushed + squashed). TBH, though, I like the NixOS way of doing things more than the home-manager systemd services you linked. There are plenty of other home-manager modules with tight integration with the program's settings (e.g. gpg, oh-my-zsh), and I like that there's a bit of flexibility to do 'smarter' things, like defining an auto-start option which adds the appropriate WantedBy linkages.
I've been using this for a few months now and consider it reliable IMHO. haha.
Glad it has been working well for you. I have unfortunately fallen back to how I was doing it before since I got linger working on 23.11. 😓 Would love to see this way get integrated though since it worked super well in my initial testing I did.
@rycee Do you have any thoughts or reservations on how I've implemented this module so far? I'm planning to just add tests now ~~and fix whatever this `nmt-report-gpg-immutable-keyfiles.drv` issue is~~. I can see other PRs have had this keyfiles issue and it has been safely ignored.
I believe there is some overlap here; both PRs are in the final stage: #4331
That one seemed stagnant when this one was created, but it looks like it has seen some activity since then.
https://github.com/nix-community/home-manager/issues/4336#issuecomment-1867180896
@terlar @rycee I have reached a point ready for review and added tests. Would appreciate your input please :)
I'm having some trouble with sd_notify on these:

```
Aug 13 16:10:40 quiver systemd[1700]: ollama.service: Got notification message from PID 1088345 (MAINPID=1088441, READY=1)
Aug 13 16:10:40 quiver systemd[1700]: ollama.service: New main PID 1088441 does not belong to service, refusing.
```

It appears systemd doesn't like the conmon PID, despite the service definition having `NotifyAccess=All`.

This is with podman 5.2.0 and systemd 256.2 on unstable. Any ideas?
Looks like I might be running into https://github.com/mullvad/mullvadvpn-app/issues/3299 - tl;dr: Mullvad breaks cgroups for rootless podman, so this is not an issue with this module at all. Apologies.

Edit: for anyone similarly affected, putting this in your main config should fix it:

```nix
systemd.services.mullvad-daemon.environment.TALPID_NET_CLS_MOUNT_DIR = "/opt/net-cls-v1";
```

Edit 2: nope, that's broken too; only uninstalling Mullvad and rebooting works :disappointed: Perhaps an assert to check whether the Mullvad VPN app is installed would save future users from frustration?
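An assert along those lines could look like this sketch; the `mullvadInstalled` predicate is hypothetical, since home-manager has no built-in way to detect system packages:

```nix
# Illustrative only: mullvadInstalled must be derived from the user's own config
{ config, lib, ... }:
let
  mullvadInstalled = false; # hypothetical check, e.g. against the system package list
in {
  assertions = [
    {
      assertion = !mullvadInstalled;
      message = "Mullvad breaks cgroups for rootless podman; see mullvadvpn-app#3299";
    }
  ];
}
```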
Back again after some more testing :) It seems setting `networkMode` duplicates itself in the X-Container section and the service exec line, causing `networkMode = "host"` to fail, as the resultant `--network=host` flag can only be used once (since multiple networks require a bridge, and I guess it doesn't bother checking the setting).

Also: I have found that occasionally the podman-xyz-network service will fail to find `newuidmap` when launched automatically. `shadow` is clearly present on the path, and attempting to rerun the service manually usually makes it work again. I can't reliably reproduce it, but 1. the created network already existing and 2. the network service being auto-run by the main container service both seem to be preconditions (as in, it succeeds in any other circumstance).
> it seems setting `networkMode` duplicates itself in the X-Container section and service exec line, causing `networkMode = "host"` to fail
Thanks @atagen, nice catch; that's left over from me playing around... Deciding what to put in the Network field is a little annoying, since there are some rules and different formats (see the podman man page). For the sake of not making this PR any longer, I am trying to keep it simple.

It's intended that networks are defined with `services.podman.networks`, and for a container to use one it should be listed in `services.podman.containers.<name>.networks`. Other network options go under `services.podman.containers.<name>.networkMode`, such as "host" as you have used. `networkMode` is probably a bad name, considering that Quadlet just maps it to the `--network` flag on podman run, so it can technically be used for more than just setting the network mode.
> occasionally the podman-xyz-network service will fail to find newuidmap when launched automatically
I can’t reproduce this, but it should be pretty easy to configure a restart policy if this is happening infrequently.
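Assuming the network quadlets expose the same `extraConfig` escape hatch as containers do (that option path is an assumption here), such a restart policy could be a sketch like:

```nix
# Sketch: assumes networks support extraConfig the same way containers do
services.podman.networks.caddy_routing.extraConfig.Service = {
  Restart = "on-failure"; # retry if newuidmap is briefly unresolvable
  RestartSec = 5;
};
```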
> 1. created network already existing and 2. the network service being auto run by the main container service both seem to be preconditions
I'm not quite sure what you mean with 1 - if I define a new network with `services.podman.networks.my-test-net`, it gets created on activation.

Regarding 2, if a container is being attached to a network, the systemd service is configured to require that network service to be running (which is why you see it start, if it isn't already, when the container starts). Podman's behaviour is that it will create the network if it doesn't exist, or leave it alone if it exists and the configuration matches. So the intention of writing this dependency in is to ensure the network is available and configured correctly before starting the container.
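In the generated container unit, the dependency described above amounts to ordering and requirement directives on the network service; roughly (the unit names follow the podman-xyz-network naming mentioned earlier in this thread, and the contents are a sketch, not the module's exact output):

```ini
# Sketch of the relevant part of a generated podman-caddy.service
[Unit]
Requires=podman-caddy_routing-network.service
After=podman-caddy_routing-network.service
```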
> I'm not quite sure what you mean with 1 - if I define a new network with services.podman.networks.my-test-net, it gets created on activation
I meant that if I subsequently do `podman network prune` and delete it, a dependent service activation recreates the network fine - it all works as expected and as you state. Based on what I've observed, a restart policy should "fix" it, but I am curious where it's coming from to begin with (i.e. why the shadow binaries aren't found sometimes).
To be clear, by 2 I meant that manually doing `systemctl --user start` on the network creation service basically always succeeds, in contrast to it being run as a dependency of another service.
@atagen Cool, thanks for the clarification :) Yeah, I would like to get to the bottom of that missing `newuidmap` - a restart policy would be a bandaid, but it's hard to know where to start if it isn't reliably reproducible. It might be as simple as needing to specify another dependency, or even a bug in podman...