kubespray v2.24.1 cannot bootstrap in Oracle Linux 7
What happened?
Saturday 27 April 2024 04:21:32 +0000 (0:00:01.107) 0:00:04.595 ********
TASK [bootstrap-os : Enable Centos extra repo for Oracle Linux] **********************************************************************************************
fatal: [cp1]: FAILED! => {"msg": "'ansible_architecture' is undefined. 'ansible_architecture' is undefined"}
fatal: [cp2]: FAILED! => {"msg": "'ansible_architecture' is undefined. 'ansible_architecture' is undefined"}
fatal: [cp3]: FAILED! => {"msg": "'ansible_architecture' is undefined. 'ansible_architecture' is undefined"}
fatal: [worker1]: FAILED! => {"msg": "'ansible_architecture' is undefined. 'ansible_architecture' is undefined"}
fatal: [worker2]: FAILED! => {"msg": "'ansible_architecture' is undefined. 'ansible_architecture' is undefined"}
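The error means the task templates `ansible_architecture` before that fact exists for the host (i.e. fact gathering failed or was skipped). As a rough sketch of the mechanism (plain Python with hypothetical names, not kubespray code): a strict lookup aborts exactly like Ansible does, while a guarded lookup with a fallback, analogous to Jinja2's `| default(...)` filter, would not.

```python
facts = {}  # fact gathering failed or was skipped, so no facts are present

# Strict lookup: mirrors Ansible's behavior of aborting on the missing fact.
try:
    arch = facts["ansible_architecture"]
except KeyError:
    # Ansible instead fails the task with "'ansible_architecture' is undefined"
    arch = None

# Guarded lookup: a fallback default, analogous to Jinja2's `| default('x86_64')`.
arch = facts.get("ansible_architecture", "x86_64")
```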
What did you expect to happen?
no error
How can we reproduce it (as minimally and precisely as possible)?
oracle linux 7 with latest upgrade/update
OS
Linux 5.4.17-2136.330.7.1.el7uek.x86_64 x86_64
NAME="Oracle Linux Server"
VERSION="7.9"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.9"
PRETTY_NAME="Oracle Linux Server 7.9"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:9:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://github.com/oracle/oracle-linux"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.9
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.9
Version of Ansible
ansible in docker
Version of Python
Python 2.7.5 (on the k8s nodes). Already tried upgrading to Python 3.6.8.
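Since the nodes default to Python 2.7, a commonly suggested workaround (an assumption, not something confirmed in this thread) is to point Ansible at the Python 3 interpreter explicitly in the inventory, so fact gathering does not depend on interpreter auto-discovery:

```ini
[all:vars]
# Hypothetical path; verify with `which python3` on the nodes first.
ansible_python_interpreter=/usr/bin/python3
```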
Version of Kubespray (commit)
538deff
Network plugin used
calico
Full inventory with variables
[all]
cp1 ansible_host=192.168.2.11
cp2 ansible_host=192.168.2.12
cp3 ansible_host=192.168.2.13
worker1 ansible_host=192.168.2.21
worker2 ansible_host=192.168.2.22

[kube_control_plane]
cp1
cp2
cp3

[etcd]
cp1
cp2
cp3

[kube_node]
worker1
worker2
[calico_rr]
[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr
Command used to invoke ansible
ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa cluster.yml
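Before re-running the full playbook, one quick diagnostic (a sketch, assuming the same inventory and key paths as the command above) is to query the setup module for the missing fact directly; if this ad-hoc call fails too, fact gathering itself is broken on the hosts rather than anything specific to the kubespray task:

```shell
# Ad-hoc fact-gathering check against every host in the inventory.
ansible all -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa \
  -m setup -a 'filter=ansible_architecture'
```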
Output of ansible run
(... earlier output omitted ...)
TASK [bootstrap-os : Enable Centos extra repo for Oracle Linux] **********************************************************************************************
fatal: [cp1]: FAILED! => {"msg": "'ansible_architecture' is undefined. 'ansible_architecture' is undefined"}
fatal: [cp2]: FAILED! => {"msg": "'ansible_architecture' is undefined. 'ansible_architecture' is undefined"}
fatal: [cp3]: FAILED! => {"msg": "'ansible_architecture' is undefined. 'ansible_architecture' is undefined"}
fatal: [worker1]: FAILED! => {"msg": "'ansible_architecture' is undefined. 'ansible_architecture' is undefined"}
fatal: [worker2]: FAILED! => {"msg": "'ansible_architecture' is undefined. 'ansible_architecture' is undefined"}
Anything else we need to know
No response
Could you check if #11162 fixes this? The shadowing could lead to weird behavior; even though I don't see how it would be related in this particular case, it's better to check.
No, it didn't help. I already tried lower versions of kubespray, and I don't know why only 2.20.0 works with Oracle Linux 7. Anything after 2.20 doesn't work and always gives me "'ansible_architecture' is undefined. 'ansible_architecture' is undefined".
I would like to work on it. Can you please assign this issue to me?
FYI, you can self-assign issues by using the /assign prow command in a comment (I think it has to be on its own line).
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten