
gopls container does not start in podman with default options


I haven't used the go lspcontainer in a while, and I just noticed some problems with the default options. I use podman, and the --user and --network flags cause trouble. If I simply provide a custom cmd builder that removes these flags, it works again.
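The workaround looks roughly like the sketch below (the exact parameters cmd_builder receives are an assumption on my part; adjust to whatever the plugin actually passes):

```lua
-- Sketch of a custom cmd builder that drops --user and --network.
-- The (runtime, workdir, image) parameters are an assumption about
-- what lspcontainers passes to cmd_builder, not its documented
-- signature.
require'lspconfig'.gopls.setup {
  cmd = require'lspcontainers'.command('gopls', {
    cmd_builder = function(runtime, workdir, image)
      return {
        runtime, "container", "run",
        "--interactive", "--rm",
        "--workdir=" .. workdir,
        "--volume=" .. workdir .. ":" .. workdir .. ":z",
        image,
      }
    end,
  }),
}
```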

When I keep the network flag, I run into the following error:

```
[START][2022-05-15 14:28:02] LSP logging initiated
[ERROR][2022-05-15 14:28:03] .../vim/lsp/rpc.lua:420	"rpc"	"podman"	"stderr"	'time="2022-05-15T14:28:03+02:00" level=warning msg="Error validating CNI config file /var/home/jgero/.config/cni/net.d/87-podman.conflist: [failed to find plugin \\"bridge\\" in path [/usr/local/libexec/cni /usr/libexec/cni /usr/local/lib/cni /usr/lib/cni /opt/cni/bin] failed to find plugin \\"portmap\\" in path [/usr/local/libexec/cni /usr/libexec/cni /usr/local/lib/cni /usr/lib/cni /opt/cni/bin] failed to find plugin \\"firewall\\" in path [/usr/local/libexec/cni /usr/libexec/cni /usr/local/lib/cni /usr/lib/cni /opt/cni/bin] failed to find plugin \\"tuning\\" in path [/usr/local/libexec/cni /usr/libexec/cni /usr/local/lib/cni /usr/lib/cni /opt/cni/bin]]"\n'
[ERROR][2022-05-15 14:28:03] .../vim/lsp/rpc.lua:420	"rpc"	"podman"	"stderr"	'time="2022-05-15T14:28:03+02:00" level=warning msg="Failed to load cached network config: network podman not found in CNI cache, falling back to loading network podman from disk"\ntime="2022-05-15T14:28:03+02:00" level=warning msg="1 error occurred:\\n\\t* plugin type=\\"tuning\\" failed (delete): failed to find plugin \\"tuning\\" in path [/usr/local/libexec/cni /usr/libexec/cni /usr/local/lib/cni /usr/lib/cni /opt/cni/bin]\\n\\n"\n'
[ERROR][2022-05-15 14:28:03] .../vim/lsp/rpc.lua:420	"rpc"	"podman"	"stderr"	'Error: plugin type="bridge" failed (add): failed to find plugin "bridge" in path [/usr/local/libexec/cni /usr/libexec/cni /usr/local/lib/cni /usr/lib/cni /opt/cni/bin]\n'
```

And when I keep the user flag, I get:

```
[START][2022-05-15 14:42:10] LSP logging initiated
[ERROR][2022-05-15 14:42:11] .../vim/lsp/rpc.lua:420	"rpc"	"podman"	"stderr"	"groupmod: GID '1000' already exists\n"
```

I am on a Fedora Silverblue 36 install, I haven't changed anything in the default podman config, and I don't have any problems elsewhere; I use containers heavily all day, so I am quite (but not 100%) positive my podman install and system are fine. I am unsure why the network flag is needed, because bridge networking is the default in podman and docker anyway, and it works as intended when the flag is simply omitted. The same goes for the user flag. I am not saying these flags should be removed; I just don't quite understand what's happening, so I can't pinpoint whether I am using something the wrong way.

jgero · May 15 '22 13:05

The user part of my issue is probably related to lspcontainers/dockerfiles#65.

jgero · May 15 '22 13:05

I found the problem with the network bridge. Rootless Podman containers do not have access to the host network bridge, which is why passing --network=bridge as an option produces these errors. The correct (and default) option for rootless containers is slirp4netns, which gives the container its own user-mode network stack with internet access (source: the podman-run man page).
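As a stopgap, it should be possible to request the rootless default explicitly per server (a sketch, assuming command() accepts a network option; otherwise the same flag can be set from a custom cmd builder):

```lua
-- Stopgap sketch: explicitly request rootless podman's default
-- user-mode network. Assumes lspcontainers' command() accepts a
-- `network` option; if it does not, a custom cmd_builder can emit
-- the same --network=slirp4netns flag instead.
require'lspconfig'.gopls.setup {
  cmd = require'lspcontainers'.command('gopls', {
    container_runtime = "podman",
    network = "slirp4netns",
  }),
}
```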

My proposal would be to omit the network flag whenever the container needs internet access, because Docker and Podman (rootful and rootless) all pick a working default anyway. The flag should only be set when the user explicitly provides a specific network. For containers that do not require internet access, --network=none can remain the default.
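In code, the proposal would amount to something like this (illustrative only; `opts.network` is a hypothetical option name, not the plugin's API):

```lua
-- Illustrative sketch of the proposal: only emit --network when the
-- user explicitly configured one (`opts.network` is hypothetical).
local function network_args(opts, needs_internet)
  if opts.network then
    return { "--network=" .. opts.network } -- user override always wins
  elseif needs_internet then
    return {} -- no flag: let the runtime pick its working default
  else
    return { "--network=none" } -- isolated by default
  end
end
```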

jgero · May 25 '22 09:05

Another option would be to check the container runtime and use either bridge or slirp4netns depending on that.
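Roughly like this (a sketch that keys off the configured runtime only; rootful podman, where bridge is also the default, would need an extra rootless check, e.g. via podman info):

```lua
-- Sketch of the runtime check: rootless podman defaults to
-- slirp4netns, docker to bridge. Distinguishing rootful from
-- rootless podman would need an additional check.
local function default_network(runtime)
  if runtime == "podman" then
    return "slirp4netns"
  end
  return "bridge"
end
```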

jgero · May 25 '22 10:05

I recently made some changes to the gopls server - would you mind checking whether this is still an issue? Thank you!

erikreinert · Oct 24 '22 00:10

Closing as I am getting reports this is now working - please reopen if still an issue.

Thank you!

erikreinert · Nov 07 '22 20:11