kubespray
Create kubeadm token for joining nodes with 24h expiration (default)
[root@node1 kubespray]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
[root@node1 kubespray]# git branch
* (detached from origin/release-2.14)
  master
I tried to create a token manually, but found that the apiserver was not running in the background; it just keeps retrying. I used the default files without any modification, and I am not sure whether anything needs to be changed manually. What is the problem?
I'm encountering the same problem. I haven't been able to get kubespray to work at all.
It's not attempting to install or start any of the control-plane services (kube-apiserver, kube-controller-manager, etc.), so there is nothing for kubeadm to connect to when creating tokens. Working from the v2.19.0 tag.
PLAY RECAP ***************************************************************************************************************************************************************************
node1 : ok=769 changed=7 unreachable=0 failed=1 skipped=937 rescued=0 ignored=3
node2 : ok=725 changed=9 unreachable=0 failed=1 skipped=810 rescued=0 ignored=3
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node3 : ok=725 changed=9 unreachable=0 failed=1 skipped=810 rescued=0 ignored=3
Saturday 02 July 2022 22:35:29 +0000 (0:07:57.170) 0:16:21.716 *********
===============================================================================
kubernetes/control-plane : Create kubeadm token for joining nodes with 24h expiration (default) ----------------------------------------------------------------------------- 477.17s
download : download_file | Validate mirrors ---------------------------------------------------------------------------------------------------------------------------------- 23.35s
etcd : Gen_certs | Write etcd member and admin certs to other etcd nodes ------------------------------------------------------------------------------------------------------ 8.56s
etcd : Gen_certs | Write etcd member and admin certs to other etcd nodes ------------------------------------------------------------------------------------------------------ 8.47s
etcd : Gen_certs | run cert generation script --------------------------------------------------------------------------------------------------------------------------------- 5.91s
etcd : Gen_certs | run cert generation script --------------------------------------------------------------------------------------------------------------------------------- 5.62s
network_plugin/calico : Get current calico cluster version -------------------------------------------------------------------------------------------------------------------- 4.20s
container-engine/validate-container-engine : Populate service facts ----------------------------------------------------------------------------------------------------------- 3.93s
etcd : Gen_certs | Gather etcd member and admin certs from first etcd node ---------------------------------------------------------------------------------------------------- 3.86s
etcd : Gen_certs | Write node certs to other etcd nodes ----------------------------------------------------------------------------------------------------------------------- 3.85s
etcd : Gen_certs | Write node certs to other etcd nodes ----------------------------------------------------------------------------------------------------------------------- 3.73s
etcd : Gen_certs | Gather etcd member and admin certs from first etcd node ---------------------------------------------------------------------------------------------------- 3.53s
download : download | Download files / images --------------------------------------------------------------------------------------------------------------------------------- 3.28s
container-engine/crictl : extract_file | Unpacking archive -------------------------------------------------------------------------------------------------------------------- 3.13s
container-engine/containerd : containerd | Unpack containerd archive ---------------------------------------------------------------------------------------------------------- 3.09s
container-engine/nerdctl : extract_file | Unpacking archive ------------------------------------------------------------------------------------------------------------------- 2.94s
container-engine/nerdctl : extract_file | Unpacking archive ------------------------------------------------------------------------------------------------------------------- 2.88s
kubernetes/preinstall : Create kubernetes directories ------------------------------------------------------------------------------------------------------------------------- 2.60s
kubernetes/preinstall : Ensure kube-bench parameters are set ------------------------------------------------------------------------------------------------------------------ 2.52s
download : download_file | Validate mirrors ----------------------------------------------------------------------------------------------------------------------------------- 2.42s
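The 477-second token task above is a symptom rather than the cause: kubeadm cannot create a token while the apiserver is unreachable. A minimal reachability check, assuming the apiserver's default secure port 6443 and run on a control-plane node, might look like:

```shell
# Sketch, assuming the default apiserver address/port (127.0.0.1:6443).
# The /healthz endpoint returns the literal string "ok" when healthy.
# If this reports "down", inspect the container runtime (crictl ps -a) and
# kubelet logs (journalctl -u kubelet) instead of retrying the token task.
if curl -ks --max-time 5 https://127.0.0.1:6443/healthz | grep -q '^ok$'; then
  echo "apiserver: up"
else
  echo "apiserver: down"
fi
```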
release-2.14 seems very old at this point; please try a newer version such as 2.19 if possible.
Since the commit 05dc2b3a097fda2ffff7a77f4ca843d0e41dec76 the condition
when: kubeadm_token is not defined
has been added to the task Create kubeadm token for joining nodes with 24h expiration (default).
So if you create a token manually and specify it as kubeadm_token, the task should be skipped. It is therefore worth trying a newer version that includes the above commit.
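For that path to work, the token must match kubeadm's required format, `[a-z0-9]{6}.[a-z0-9]{16}`. A sketch of pre-generating one and passing it to the playbook (the inventory path is only an example; adjust to your layout):

```shell
# Generate a token in kubeadm's format: 6 lowercase alphanumerics, a dot, 16 more.
TOKEN="$(LC_ALL=C tr -dc 'a-z0-9' </dev/urandom | head -c 6).$(LC_ALL=C tr -dc 'a-z0-9' </dev/urandom | head -c 16)"
echo "${TOKEN}"
# With kubeadm_token defined, the "Create kubeadm token ..." task is skipped:
#   ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml -e kubeadm_token="${TOKEN}"
```

Alternatively, `kubeadm token generate` (available wherever the kubeadm binary is installed) emits a token in the same format.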
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.