
Getting fatal error on `Get list of labels.` task

Open cpxPratik opened this issue 4 years ago • 19 comments

The task `Get list of labels.` is failing after it was updated to use `ansible_fqdn` in https://github.com/atosatto/ansible-dockerswarm/commit/3bb8a49297448325b8feaa9b7a899c78b2fab97e

The node hostname (staging-manager-03) shown by `docker node ls` is different from the FQDN string in the following error:

TASK [atosatto.docker-swarm : Get list of labels.] ********************************************************************************************************************************************
fatal: [165.22.48.107 -> 165.22.48.105]: FAILED! => {"changed": false, "cmd": ["docker", "inspect", "--format", "{{ range $key, $value := .Spec.Labels }}{{ printf \"%s\\n\" $key }}{{ end }}", "staging-manager-03.sgp1"], "delta": "0:00:00.412684", "end": "2020-05-14 13:10:42.573599", "msg": "non-zero return code", "rc": 1, "start": "2020-05-14 13:10:42.160915", "stderr": "Error: No such object: staging-manager-03.sgp1", "stderr_lines": ["Error: No such object: staging-manager-03.sgp1"], "stdout": "", "stdout_lines": []}

For now I am using v2.2.0, which gives no error.
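
A quick way to confirm the mismatch is to compare the names the swarm manager actually knows against the `ansible_fqdn` fact. A minimal sketch, assuming a manager host reachable as `swarm-manager` (the host alias and register name are placeholders, not part of the role):

```yaml
# Sketch: list node hostnames as the swarm manager sees them, then show
# what the role would have looked up instead. Names are illustrative.
- name: List swarm node hostnames known to the manager.
  command: docker node ls --format {% raw %}'{{ .Hostname }}'{% endraw %}
  delegate_to: swarm-manager
  register: swarm_node_names
  changed_when: false

- name: Compare against the FQDN-based lookup key.
  debug:
    msg: "swarm knows {{ swarm_node_names.stdout_lines }}, role looks up {{ ansible_fqdn | lower }}"
```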

cpxPratik avatar May 14 '20 14:05 cpxPratik

I have the same issue, except with 'ambiguous' instead of 'not found'.

TASK [atosatto.docker-swarm : Get list of labels.] *******************************************************************************************************************************************
fatal: [asus.yi -> None]: FAILED! => {"changed": false, "cmd": ["docker", "inspect", "--format", "{{ range $key, $value := .Spec.Labels }}{{ printf \"%s\\n\" $key }}{{ end }}", "host.domain"], "delta": "0:00:00.335281", "end": "2020-05-15 22:58:12.700418", "msg": "non-zero return code", "rc": 1, "start": "2020-05-15 22:58:12.365137", "stderr": "Error response from daemon: node host.domain is ambiguous (2 matches found)", "stderr_lines": ["Error response from daemon: node host.domain is ambiguous (2 matches found)"], "stdout": "", "stdout_lines": []}

Edit: The workaround for me was simply making the node leave: `docker swarm leave --force`.
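
As a play, that workaround might look like this (the group name is a placeholder; with the 'ambiguous' error, leftover duplicate entries may also need a `docker node rm <ID>` on the manager afterwards):

```yaml
# A minimal sketch of the force-leave workaround; re-run the role afterwards
# so the node re-joins cleanly. 'docker_swarm_worker' is a placeholder group.
- hosts: docker_swarm_worker
  tasks:
    - name: Force the node out of the stale swarm.
      command: docker swarm leave --force
```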

wombathuffer avatar May 15 '20 21:05 wombathuffer

Thanks @cpxPratik for reporting this issue. I'll try to reproduce it in a test cluster and figure out a better way of managing nodes.

Can you please confirm the Docker version you are using?

atosatto avatar May 18 '20 07:05 atosatto

@atosatto The Docker version is 19.03.8, build afacb8b7f0

cpxPratik avatar May 18 '20 08:05 cpxPratik

Hello, I'm having the same issue on a cluster. It seems the node object is using the hostname instead of the full FQDN.

It seems this is the root cause: https://github.com/atosatto/ansible-dockerswarm/commit/3bb8a49297448325b8feaa9b7a899c78b2fab97e

Though I don't see any reference in the playbook to joining by FQDN; is this a new change in upstream Docker?

yukiisbored avatar May 22 '20 13:05 yukiisbored

btw, I'm currently running version 19.03.8

yukiisbored avatar May 22 '20 13:05 yukiisbored

Same issue here, using 19.03.6 (latest Ubuntu 18.04 provided docker.io package)

FleischKarussel avatar May 23 '20 11:05 FleischKarussel

I have the same issue too. Ubuntu 18.04.

Bogdan1001 avatar May 23 '20 18:05 Bogdan1001

@atosatto We fixed this a while back but it was reverted or we mixed it up. It's inventory_hostname vs fqdn.

till avatar May 24 '20 13:05 till

The workaround for me was to replace `{{ ansible_fqdn|lower }}` with `{{ ansible_hostname }}` and remove all dots from the hostname, so node1.connect became node1connect.
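
Reconstructed from the error output above (a sketch, not the role's exact task), the substitution amounts to:

```yaml
# Look the node up by its short hostname instead of the FQDN.
# was: {{ ansible_fqdn | lower }}, which fails when the swarm registered
# the short hostname. The register name below is illustrative.
- name: Get list of labels.
  command: >-
    docker inspect
    --format {% raw %}'{{ range $key, $value := .Spec.Labels }}{{ printf "%s\n" $key }}{{ end }}'{% endraw %}
    {{ ansible_hostname }}
  register: docker_swarm_labels
  changed_when: false
```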

Bogdan1001 avatar May 28 '20 19:05 Bogdan1001

@atosatto We fixed this a while back but it was reverted or we mixed it up. It's inventory_hostname vs fqdn.

@till I thought the same and tried to work around it by listing the hosts as FQDNs in my inventory. No luck.

gumbo2k avatar Jun 16 '20 11:06 gumbo2k

Hello. I'm also having this issue. Any plans to reapply the fix? Thanks!

nununo avatar Jul 18 '20 13:07 nununo

I can confirm that commit 3bb8a49 mentioned in issue #82 is the one that breaks the labels setup; if it is reverted, the playbook finishes without issues.

juanluisbaptiste avatar Aug 01 '20 19:08 juanluisbaptiste

Seeing the same behaviour on v2.3.0; rolling back to v2.2.0 resolves it.

joshes avatar Aug 31 '20 22:08 joshes

Another case where this happens is the following:

I had botched my swarm setup, so it was not about node names (e.g. inventory name or fully qualified domain name (FQDN)); rather, the nodes were no longer seen by the manager.

The role currently doesn't handle this (no judgement meant). I think it's a split-brain/no-brain kind of thing, because I had restarted my manager (and I run only one) and then this happened.

The fix was the following:

  1. get the join-token myself
  2. then force leave the workers
  3. (re-)join the manager/cluster

And then the role completes.
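
In Ansible terms, that manual recovery could look roughly like this (group names and the default swarm port are assumptions about a typical setup, not the role's own variables):

```yaml
# Sketch: fetch the join-token from the single manager, force the stranded
# workers out, then re-join them. Group names are placeholders.
- hosts: docker_swarm_manager
  tasks:
    - name: Get the current worker join-token.
      command: docker swarm join-token -q worker
      register: worker_join_token
      changed_when: false

- hosts: docker_swarm_worker
  tasks:
    - name: Force the stranded worker out of the swarm.
      command: docker swarm leave --force

    - name: Re-join via the manager on the default swarm port (2377).
      command: >-
        docker swarm join
        --token {{ hostvars[groups['docker_swarm_manager'][0]].worker_join_token.stdout }}
        {{ hostvars[groups['docker_swarm_manager'][0]].ansible_default_ipv4.address }}:2377
```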

The other fix is to run two managers. ;-)

I am not entirely sure how this could be added to the role, since the manager doesn't see the workers anymore but the workers think they are still connected. If you can afford it, trash the nodes and set them up again. Maybe it's a documentation thing after all?

till avatar Sep 30 '20 08:09 till

Same issue on CentOS 7.

For now I am using v2.2.0, which works like a charm!

quadeare avatar Oct 06 '20 13:10 quadeare

I can confirm that commit 3bb8a49 mentioned in issue #82 is the one that breaks the labels setup; if it is reverted, the playbook finishes without issues.

Now I'm not sure if this has anything to do with that commit at all, as I have been getting this error several times with it reverted too. It always happens when I add a new instance to the cluster: the first run of this role is ok; then I create a new AWS instance and run the role again to add it, and the role fails with this error. This is the error Ansible throws on nodes that are already part of the cluster:

<10.0.10.36> (0, b'', b'')
fatal: [10.0.10.36 -> 10.0.10.36]: FAILED! => {
    "changed": false,
    "cmd": [
        "docker",
        "inspect",
        "--format",
        "{{ range $key, $value := .Spec.Labels }}{{ printf \"%s\\n\" $key }}{{ end }}",
        "10"
    ],
    "delta": "0:00:00.081487",
    "end": "2020-10-15 23:41:24.604902",
    "invocation": {
        "module_args": {
            "_raw_params": "docker inspect --format '{{ range $key, $value := .Spec.Labels }}{{ printf \"%s\\n\" $key }}{{ end }}' 10",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2020-10-15 23:41:24.523415",
    "stderr": "Error: No such object: 10",
    "stderr_lines": [
        "Error: No such object: 10"
    ],
    "stdout": "",
    "stdout_lines": []
}

That is the error for the manager, but the workers throw it too.

juanluisbaptiste avatar Oct 16 '20 17:10 juanluisbaptiste

Same issue on CentOS 7.

For now I am using v2.2.0, which works like a charm!

For me it also happens with v2.2.0, as described in my previous comment.

juanluisbaptiste avatar Oct 16 '20 18:10 juanluisbaptiste

I had to use this role again and got an error when running it for the second time. This time I noticed that the error was different from the one in this issue (and the error reported in my previous comment was probably about this new issue, not related to this one). The error is on the "Remove labels from swarm node" task, and it occurs when labels are configured outside this role (i.e., manually adding a label to a node). I will create a separate issue for that with an accompanying PR fixing it.

juanluisbaptiste avatar May 20 '21 17:05 juanluisbaptiste

I had to use this role again and got an error when running it for the second time. This time I noticed that the error was different from the one in this issue (and the error reported in my previous comment was probably about this new issue, not related to this one). The error is on the "Remove labels from swarm node" task, and it occurs when labels are configured outside this role (i.e., manually adding a label to a node). I will create a separate issue for that with an accompanying PR fixing it.

Added issue #96 for this and fixed it in PR #97. I hope it gets merged (although I do not have my hopes up that it will happen, heh).

juanluisbaptiste avatar May 20 '21 17:05 juanluisbaptiste