kubespray
FAILED - RETRYING: ensure docker packages are installed
Hi all, I am trying to install Kubernetes v1.26.2 using kubespray on RHEL 8.4,
and I am getting the error below: "FAILED - RETRYING: ensure docker packages are installed (1 retries left)."
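One way to see the real failure hidden behind that retry message, as a rough sketch assuming dnf on RHEL 8 and the Docker CE packages the kubespray docker role normally installs, is to run the package steps by hand on the failing node:

  # On the node where the task keeps retrying (run as root or with sudo):
  dnf repolist enabled                                 # is a docker-ce (or equivalent) repository configured and reachable?
  dnf list --showduplicates docker-ce containerd.io    # are the requested package versions actually available?
  dnf install docker-ce docker-ce-cli containerd.io    # surfaces the underlying repo/GPG/dependency error directly

On RHEL this often comes down to the host not being registered or having no enabled repositories, which subscription-manager status and subscription-manager repos --list-enabled will show.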
Environment:
- Cloud provider or hardware configuration: VMware VMs
- OS: printf "$(uname -srm)\n$(cat /etc/os-release)\n"
  Linux 4.18.0-305.el8.x86_64 x86_64
  NAME="Red Hat Enterprise Linux"
  VERSION="8.4 (Ootpa)"
  ID="rhel"
  ID_LIKE="fedora"
  VERSION_ID="8.4"
  PLATFORM_ID="platform:el8"
  PRETTY_NAME="Red Hat Enterprise Linux 8.4 (Ootpa)"
  ANSI_COLOR="0;31"
  CPE_NAME="cpe:/o:redhat:enterprise_linux:8.4:GA"
  HOME_URL="https://www.redhat.com/"
  DOCUMENTATION_URL="https://access.redhat.com/documentation/red_hat_enterprise_linux/8/"
  BUG_REPORT_URL="https://bugzilla.redhat.com/"
  REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
  REDHAT_BUGZILLA_PRODUCT_VERSION=8.4
  REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
  REDHAT_SUPPORT_PRODUCT_VERSION="8.4"
- Version of Ansible: ansible --version
  ansible [core 2.11.11]
  config file = /root/K8S_126/kubespray/ansible.cfg
  configured module search path = ['/root/K8S_126/kubespray/library']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Mar 18 2021, 08:58:41) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
  jinja version = 2.11.3
  libyaml = True
- Kubespray version (commit): git rev-parse --short HEAD
  d325fd6af
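Since the checkout is at commit d325fd6af on the default branch, it is worth confirming that it corresponds to a kubespray release whose supported kube_version range includes v1.26.x; a hedged sketch of pinning to a release tag (the exact tag name is whatever your checkout lists, shown here only as a placeholder) is:

  cd kubespray/                                    # or wherever the checkout lives
  git fetch --tags
  git tag --list 'v2.*' | tail -n 5                # see the most recent release tags
  git checkout <release-tag-supporting-v1.26.x>    # placeholder; pick the tag whose docs list v1.26.x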
My configuration is as below, and I am using container_manager: docker.
git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray/
python3 -m pip install -r requirements-2.11.txt
ansible --version
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(192.168.232.120 192.168.232.121)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
cat inventory/mycluster/hosts.yaml
sed -i 's/node1/aihub-master/g' inventory/mycluster/hosts.yaml
sed -i 's/node2/aihub-node01/g' inventory/mycluster/hosts.yaml
cat inventory/mycluster/hosts.yaml
vim inventory/mycluster/hosts.yaml
cat inventory/mycluster/hosts.yaml

all:
  hosts:
    aihub-master:
      ansible_host: 192.168.232.120
      ip: 192.168.232.120
      access_ip: 192.168.232.120
    aihub-node01:
      ansible_host: 192.168.232.121
      ip: 192.168.232.121
      access_ip: 192.168.232.121
  children:
    kube_control_plane:
      hosts:
        aihub-master:
    kube_node:
      hosts:
        aihub-master:
        aihub-node01:
    etcd:
      hosts:
        aihub-master:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
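Before running cluster.yml, it can also help to confirm that Ansible can reach both hosts with privilege escalation, since SSH or sudo problems eventually surface as retried package tasks; a minimal check against the inventory above:

  ansible all -i inventory/mycluster/hosts.yaml -b -m ping
  ansible all -i inventory/mycluster/hosts.yaml -b -m shell -a "dnf repolist enabled"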
vim inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
container_manager: docker
kube_version: v1.26.2
kube_network_plugin: calico
kube_pods_subnet: 10.233.64.0/18
kube_service_addresses: 10.233.0.0/18

vim inventory/mycluster/group_vars/k8s_cluster/addons.yml
dashboard_enabled: true
ingress_nginx_enabled: true
ingress_nginx_host_network: true

ansible all -i inventory/mycluster/hosts.yaml -m shell -a "sudo systemctl stop firewalld && sudo systemctl disable firewalld"
ansible all -i inventory/mycluster/hosts.yaml -m shell -a "echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf"
ansible all -i inventory/mycluster/hosts.yaml -m shell -a "sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab && sudo swapoff -a"
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
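If the play fails again on "ensure docker packages are installed", re-running with extra verbosity and limited to the failing host usually prints the underlying dnf/yum error rather than just the retry counter; for example (aihub-node01 is used here only as an example of the failing host):

  ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root \
    --limit aihub-node01 -vvv cluster.yml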
Please help with this. -- Zain
Does kubespray support RHEL 8 and above? I am not able to find a redhat-8.yml file in the kubespray/roles/container-engine/docker/vars location.
Any suggestions?
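One way to answer this from your own checkout, rather than guessing at a redhat-8.yml filename, is to list what the docker role actually ships and how it selects its vars file; a rough sketch, run from the kubespray directory:

  ls roles/container-engine/docker/vars/
  grep -rn -e include_vars -e ansible_os_family -e ansible_distribution roles/container-engine/docker/tasks/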
Hi, we do have CI on Rocky 8/9 and it works, so I am not sure what's going on here, and I am not sure any maintainer has a way to test on Red Hat itself. IIRC RHEL 8.8 is already out, so maybe you should just upgrade your distro?
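If you do take the route suggested above, a quick way to see which minor release each node is on and bring it up to the latest 8.x (assuming the hosts can reach Red Hat's repositories) is:

  ansible all -i inventory/mycluster/hosts.yaml -b -m shell -a "cat /etc/redhat-release"
  ansible all -i inventory/mycluster/hosts.yaml -b -m shell -a "dnf upgrade -y --refresh"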
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned", in response to the /close not-planned command in the comment above.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.