ansible-podman-collections
Warning about improperly configured remote target
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
When running with the latest ansible (devel branch), a warning shows on every task run using the podman connection plugin. I am using molecule.
Steps to reproduce the issue:
- Will provide if needed
Describe the results you received:
TASK [twmn.twmn.ubuntu_setup : Gather facts] ***********************************
[WARNING]: The "podman" connection plugin has an improperly configured remote
target value, forcing "inventory_hostname" templated value instead of the
string
ok: [molecule_test]
Describe the results you expected: No warnings
Additional information you deem important (e.g. issue happens only occasionally): Here is the ansible issue where this bug was discussed for the docker collection.
Version of the containers.podman collection:
Either git commit if installed from git: git show --summary
Or version from ansible-galaxy if installed from galaxy: ansible-galaxy collection list | grep containers.podman
1.9.3 (installed by molecule)
Output of ansible --version:
ansible [core 2.14.0.dev0]
  config file = None
  configured module search path = ['/home/nikos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.10/site-packages/ansible
  ansible collection location = /home/nikos/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.10.4 (main, Mar 23 2022, 23:05:40) [GCC 11.2.0] (/usr/bin/python3.10)
  jinja version = 3.1.2
  libyaml = True
Output of podman version:
Client: Podman Engine
Version: 4.1.0
API Version: 4.1.0
Go Version: go1.18.1
Git Commit: e4b03902052294d4f342a185bb54702ed5bed8b1
Built: Fri May 6 18:18:30 2022
OS/Arch: linux/amd64
Output of podman info --debug:
host:
  arch: amd64
  buildahVersion: 1.26.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.0-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: bdb4f6e56cd193d40b75ffc9725d4b74a18cb33c'
  cpuUtilization:
    idlePercent: 91.6
    systemPercent: 3.33
    userPercent: 5.07
  cpus: 4
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  hostname: earth-nikolaos
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.17.5-arch1-1
  linkmode: dynamic
  logDriver: journald
  memFree: 11301785600
  memTotal: 16779542528
  networkBackend: cni
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.4.5-1
    path: /usr/bin/crun
    version: |-
      crun version 1.4.5
      commit: c381048530aa750495cf502ddb7181f2ded5b400
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.0-1
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 0
  swapTotal: 0
  uptime: 227h 57m 24.83s (Approximately 9.46 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /home/nikos/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/nikos/.local/share/containers/storage
  graphRootAllocated: 52721041408
  graphRootUsed: 41214480384
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 5
  runRoot: /run/user/1000/containers
  volumePath: /home/nikos/.local/share/containers/storage/volumes
version:
  APIVersion: 4.1.0
  Built: 1651861110
  BuiltTime: Fri May 6 18:18:30 2022
  GitCommit: e4b03902052294d4f342a185bb54702ed5bed8b1
  GoVersion: go1.18.1
  Os: linux
  OsArch: linux/amd64
  Version: 4.1.0
Package info (e.g. output of rpm -q podman or apt list podman):
podman 4.1.0-1 (pacman -Q podman)
Playbook you run with ansible (e.g. content of playbook.yaml):
# molecule.yml used
dependency:
  name: galaxy
driver:
  name: podman
platforms:
  - name: molecule_test
    image: geerlingguy/docker-ubuntu1804-ansible
    pull: true
    pre_build_image: true
    privileged: true
    cap-add: ALL
    volumes:
      - /lib/modules:/lib/modules
lint: ...
provisioner:
  name: ansible
  playbooks:
    side_effect: side_effect.yml
verifier:
  name: ansible
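For reference, the scenario above would presumably be exercised with the standard molecule commands (the exact invocation is not part of the report), e.g.:
molecule converge
molecule test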
Command line and output of ansible run with high verbosity
Please NOTE: if you submit a bug about idempotency, run the playbook with the --diff option, like:
ansible-playbook -i inventory --diff -vv playbook.yml
Additional environment details (AWS, VirtualBox, physical, etc.):
Noticed this message too today. The docker connection plugin had the same issue, which was solved by https://github.com/ansible-collections/community.docker/pull/297, so this connection plugin needs the same kind of treatment.
From what I understand, the idea is to replace self._container_id = self._play_context.remote_addr with self.get_option('remote_addr'). The problem is that this possibly can't work in __init__, so a possible solution is to replace self._container_id with self._get_container_id(), with _get_container_id() defined along these lines:
def _get_container_id(self):
    _container_id = self.get_option('remote_addr')
    # compat: fall back to the play context when the option is not set
    if _container_id is None and self._play_context.remote_addr is not None:
        _container_id = self._play_context.remote_addr
    return _container_id
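To make the deferral idea concrete, here is a minimal, self-contained sketch of the pattern; FakeConnection and FakePlayContext are invented stand-ins for illustration only, not the real plugin classes. The point is that options are not populated in __init__, so the container id is resolved at call time with a fallback to the play context:

class FakePlayContext:
    # stand-in for the real PlayContext; only remote_addr matters here
    remote_addr = "molecule_test"

class FakeConnection:
    def __init__(self, play_context):
        # nothing is cached here, because get_option() is not usable yet
        self._play_context = play_context
        self._options = {}

    def set_option(self, name, value):
        self._options[name] = value

    def get_option(self, name):
        return self._options.get(name)

    def _get_container_id(self):
        container_id = self.get_option("remote_addr")
        # compat: fall back to the play context when the option is unset
        if container_id is None and self._play_context.remote_addr is not None:
            container_id = self._play_context.remote_addr
        return container_id

conn = FakeConnection(FakePlayContext())
print(conn._get_container_id())            # molecule_test (fallback path)
conn.set_option("remote_addr", "other_container")
print(conn._get_container_id())            # other_container (option wins)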
Hello, any news on this bug? Can we upstream the fix?
I have also hit this issue today - has there been any progress on a fix?
This is an open-source community project. All pull requests are very welcome.
I think the same idea was here: https://github.com/ansible/ansible/issues/77841#issuecomment-1130211123
Fixed in #506