
Ansible doesn't detect whether a host is ARM or AMD

Open klap50 opened this issue 2 years ago • 9 comments

TASK [container-engine/runc : download_file | Validate mirrors] ***************
ok: [py1] => (item=None) => {"attempts": 1, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [py1 -> {{ download_delegate if download_force_cache else inventory_hostname }}] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [py2] => (item=None) => {"attempts": 1, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [py2 -> {{ download_delegate if download_force_cache else inventory_hostname }}] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [py3] => (item=None) => {"attempts": 1, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [py3 -> {{ download_delegate if download_force_cache else inventory_hostname }}] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [hellbox] => (item=None) => {"attempts": 1, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [hellbox -> {{ download_delegate if download_force_cache else inventory_hostname }}] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [master] => (item=None) => {"attempts": 1, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [master -> {{ download_delegate if download_force_cache else inventory_hostname }}] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}

TASK [container-engine/runc : download_file | Get the list of working mirrors] ***************
ok: [master] => {"ansible_facts": {"valid_mirror_urls": ["https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.arm64"]}, "changed": false}
ok: [py1] => {"ansible_facts": {"valid_mirror_urls": ["https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.arm64"]}, "changed": false}
ok: [hellbox] => {"ansible_facts": {"valid_mirror_urls": ["https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.arm64"]}, "changed": false}
ok: [py2] => {"ansible_facts": {"valid_mirror_urls": ["https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.arm64"]}, "changed": false}
ok: [py3] => {"ansible_facts": {"valid_mirror_urls": ["https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.arm64"]}, "changed": false}

TASK [container-engine/runc : download_file | Download item] ***************
ok: [py3] => {"attempts": 1, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [py1] => {"attempts": 1, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [py2] => {"attempts": 1, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [master] => {"attempts": 1, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
FAILED - RETRYING: [hellbox]: download_file | Download item (4 retries left).
FAILED - RETRYING: [hellbox]: download_file | Download item (3 retries left).
FAILED - RETRYING: [hellbox]: download_file | Download item (2 retries left).
FAILED - RETRYING: [hellbox]: download_file | Download item (1 retries left).
fatal: [hellbox]: FAILED! => {"attempts": 4, "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true}

PS: hellbox is AMD64; the pyX hosts and master are ARM64.

klap50 avatar Apr 27 '22 20:04 klap50

Also, this command: declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)

[screenshot: shell error output]

gives an error.

klap50 avatar Apr 27 '22 20:04 klap50

@klap50 kubespray at the moment doesn't quite support mixed-architecture deployments, and we have no coverage for such setups in CI, so it is difficult to spot specific breakage.
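To illustrate where mixed-architecture setups go wrong: each host needs its download URLs resolved from its *own* architecture (note that in the log above, even the amd64 host hellbox resolved runc.arm64). The sketch below is purely illustrative and uses hypothetical names, not kubespray's actual variables; it only shows the kind of per-host mapping from `uname -m` to the arch suffix used in release asset names.

```shell
# map_arch: translate a kernel machine string (as reported by `uname -m`)
# to the arch suffix used in release asset names. Hypothetical helper,
# not part of kubespray.
map_arch() {
  case "$1" in
    x86_64)  echo amd64 ;;
    aarch64) echo arm64 ;;
    armv7l)  echo arm ;;
    *)       echo "unsupported arch: $1" >&2; return 1 ;;
  esac
}

# Example: build the runc download URL for the current host.
arch=$(map_arch "$(uname -m)")
echo "https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.${arch}"
```

The bug pattern in the issue is the opposite of this: one host's arch fact leaks into the URL used for all hosts, so the amd64 node tries to run an arm64 binary.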

cristicalin avatar Apr 28 '22 07:04 cristicalin

But in older versions of kubespray I could do this. Now I don't know what changed in the playbooks, but it no longer works :(

klap50 avatar Apr 28 '22 16:04 klap50

But in older versions of kubespray I could do this. Now I don't know what changed in the playbooks, but it no longer works :(

That could have succeeded accidentally. We already have duplicate issues for this:

  • https://github.com/kubernetes-sigs/kubespray/issues/7934
  • https://github.com/kubernetes-sigs/kubespray/issues/8461

oomichi avatar Apr 28 '22 21:04 oomichi

Oh, thanks, and sorry.

klap50 avatar Apr 29 '22 04:04 klap50

Also, this command: declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)

[screenshot: shell error output]

gives an error.

Just from looking at the highlighting you get, I can tell it's fish shell and not bash. Fish considers () a wrapper for subcommands, while bash would make it a list. Also, declare isn't a keyword in fish.

So that's why you get an error there :slightly_smiling_face:

etu avatar May 12 '22 12:05 etu

I found the problem. With multiple architectures, everything goes fine at first: it detects AMD and ARM and installs runc, containerd, kubelet, etc. The problem is when pulling pod images: it pulls images for ARM and not AMD, or vice versa. Here is where the issue starts:

TASK [download : debug] ***************
ok: [master] => { "msg": "Pull k8s.gcr.io/pause:3.3 required is: True" }
ok: [pi1] => { "msg": "Pull k8s.gcr.io/pause:3.3 required is: True" }
ok: [pi2] => { "msg": "Pull k8s.gcr.io/pause:3.3 required is: True" }
ok: [pi3] => { "msg": "Pull k8s.gcr.io/pause:3.3 required is: True" }
ok: [hellbox] => { "msg": "Pull k8s.gcr.io/pause:3.3 required is: True" }
Friday 13 May 2022 04:52:14 -0300 (0:00:00.061) 0:04:21.781 ************
Friday 13 May 2022 04:52:14 -0300 (0:00:00.056) 0:04:21.837 ************
Friday 13 May 2022 04:52:15 -0300 (0:00:00.057) 0:04:21.895 ************
Friday 13 May 2022 04:52:15 -0300 (0:00:00.059) 0:04:21.954 ************
FAILED - RETRYING: download_container | Download image if required (4 retries left).

TASK [download_container | Download image if required] ***************
changed: [pi1 -> pi1]
changed: [pi2 -> pi2]
changed: [pi3 -> pi3]
changed: [master -> master]
FAILED - RETRYING: download_container | Download image if required (3 retries left).
FAILED - RETRYING: download_container | Download image if required (2 retries left).
FAILED - RETRYING: download_container | Download image if required (1 retries left).
fatal: [hellbox -> hellbox]: FAILED! => {"attempts": 4, "changed": true, "cmd": ["/usr/local/bin/nerdctl", "-n", "k8s.io", "pull", "--quiet", "k8s.gcr.io/pause:3.3"], "delta": "0:00:00.015483", "end": "2022-05-13 07:52:39.950763", "msg": "non-zero return code", "rc": 1, "start": "2022-05-13 07:52:39.935280", "stderr": "time=\"2022-05-13T07:52:39Z\" level=fatal msg=\"cannot access containerd socket \\\"/var/run/containerd/containerd.sock\\\": no such file or directory\"", "stdout": "", "stdout_lines": []}

PS: hellbox is AMD64 and the Pis are ARM64.
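The fatal error above is nerdctl failing because /var/run/containerd/containerd.sock does not exist on hellbox, which suggests containerd never came up on that host (consistent with the wrong-architecture binary having been installed). A minimal sketch for checking this on an affected node; the socket path is containerd's default, adjust if your config differs:

```shell
# socket_present: succeed only if the given path exists and is a socket.
socket_present() {
  [ -S "$1" ]
}

SOCK=/var/run/containerd/containerd.sock
if socket_present "$SOCK"; then
  echo "containerd socket present at $SOCK"
else
  # On the failing host, also check the service:
  #   systemctl status containerd
  #   file /usr/local/bin/containerd   # reveals an arch mismatch
  echo "containerd socket missing at $SOCK (is containerd running?)" >&2
fi
```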

klap50 avatar May 13 '22 14:05 klap50

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 11 '22 14:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Sep 10 '22 14:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Oct 10 '22 15:10 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Oct 10 '22 15:10 k8s-ci-robot