kubespray
In release 2.19, install fails on CentOS 7 arm64
Environment:
- Cloud provider or hardware configuration: Lenovo SR650
- OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
  Linux 4.18.0-348.20.1.el7.aarch64 aarch64
  NAME="CentOS Linux"
  VERSION="7 (AltArch)"
  ID="centos"
  ID_LIKE="rhel fedora"
  VERSION_ID="7"
  PRETTY_NAME="CentOS Linux 7 (AltArch)"
  ANSI_COLOR="0;31"
  CPE_NAME="cpe:/o:centos:centos:7:server"
  HOME_URL="https://www.centos.org/"
  BUG_REPORT_URL="https://bugs.centos.org/"
  CENTOS_MANTISBT_PROJECT="CentOS-7"
  CENTOS_MANTISBT_PROJECT_VERSION="7"
  REDHAT_SUPPORT_PRODUCT="centos"
  REDHAT_SUPPORT_PRODUCT_VERSION="7"
  CentOS Linux release 7.9.2009 (AltArch)
- Version of Ansible (ansible --version): ansible==3.4.0 ansible-base==2.10.11 cryptography==2.8 jinja2==2.11.3 netaddr==0.7.19 pbr==5.4.4 jmespath==0.9.5 ruamel.yaml==0.16.10 ruamel.yaml.clib==0.2.2 MarkupSafe==1.1.1
- Version of Python (python --version): Python 3.6.8
- Kubespray version (commit) (git rev-parse --short HEAD): release-2.19
- Network plugin used: calico
- Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"):
- Command used to invoke ansible:
- Output of ansible run:
First, kubeadm init failed with the following error:
TASK [kubernetes/control-plane : kubeadm | Initialize first master] ************
fatal: [node1]: FAILED! => {"attempts": 3, "changed": true, "cmd": ["timeout", "-k", "300s", "300s", "/usr/local/bin/kubeadm", "init", "--config=/etc/kubernetes/kubeadm-config.yaml", "--ignore-preflight-errors=all", "--skip-phases=addon/coredns", "--upload-certs"], "delta": "0:05:00.003717", "end": "2022-07-12 13:17:23.713737", "failed_when_result": true, "msg": "non-zero return code", "rc": 124, "start": "2022-07-12 13:12:23.710020", "stderr": "W0712 13:12:23.742451 6621 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [10.233.0.3]\n\t[WARNING Port-10259]: Port 10259 is in use\n\t[WARNING Port-10257]: Port 10257 is in use\n\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists\n\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists\n\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists\n\t[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists\n\t[WARNING Port-10250]: Port 10250 is in use", "stderr_lines": ["W0712 13:12:23.742451 6621 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [10.233.0.3]", "\t[WARNING Port-10259]: Port 10259 is in use", "\t[WARNING Port-10257]: Port 10257 is in use", "\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists", "\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists", "\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists", "\t[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists", "\t[WARNING Port-10250]: Port 10250 is in use"], "stdout": "[init] Using Kubernetes version: v1.23.7\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder "/etc/kubernetes/ssl"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing "sa" key\n[kubeconfig] Using kubeconfig folder "/etc/kubernetes"\n[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"\n[kubeconfig] Using
existing kubeconfig file: "/etc/kubernetes/kubelet.conf"\n[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"\n[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"\n[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"\n[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"\n[kubelet-start] Starting the kubelet\n[control-plane] Using manifest folder "/etc/kubernetes/manifests"\n[control-plane] Creating static Pod manifest for "kube-apiserver"\n[control-plane] Creating static Pod manifest for "kube-controller-manager"\n[control-plane] Creating static Pod manifest for "kube-scheduler"\n[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 5m0s\n[kubelet-check] Initial timeout of 40s passed.", "stdout_lines": ["[init] Using Kubernetes version: v1.23.7", "[preflight] Running pre-flight checks", "[preflight] Pulling images required for setting up a Kubernetes cluster", "[preflight] This might take a minute or two, depending on the speed of your internet connection", "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'", "[certs] Using certificateDir folder "/etc/kubernetes/ssl"", "[certs] Using existing ca certificate authority", "[certs] Using existing apiserver certificate and key on disk", "[certs] Using existing apiserver-kubelet-client certificate and key on disk", "[certs] Using existing front-proxy-ca certificate authority", "[certs] Using existing front-proxy-client certificate and key on disk", "[certs] Using existing etcd/ca certificate authority", "[certs] Using existing etcd/server certificate and key on disk", "[certs] Using existing etcd/peer certificate and key on disk", "[certs] Using existing etcd/healthcheck-client certificate and key on disk", "[certs] Using existing apiserver-etcd-client certificate and key on disk", "[certs] Using the existing "sa" key", "[kubeconfig] Using kubeconfig folder "/etc/kubernetes"", "[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"", "[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"", "[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"", "[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"", "[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"", "[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"", "[kubelet-start] Starting the kubelet", "[control-plane] Using manifest folder "/etc/kubernetes/manifests"", "[control-plane] Creating static Pod manifest for "kube-apiserver"", "[control-plane] Creating static Pod manifest for "kube-controller-manager"", "[control-plane] Creating static Pod manifest for "kube-scheduler"", "[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"", "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 5m0s", "[kubelet-check] Initial timeout of 40s passed."]}
Second, I checked journalctl and found: cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net
Third, I checked the Docker images on the master node and found that the following two images, which should be arm64, are actually amd64:
coredns/coredns v1.8.6 edaa71f2aee8
cpa/cluster-proportional-autoscaler-arm64 1.8.5 1e25a96f93b4
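For reference, the architecture of the pulled images can be double-checked directly on the node. This is a minimal sketch, assuming a Docker runtime and the default k8s.gcr.io registry paths used by kubespray; adjust the names and tags to whatever docker images actually lists:

```shell
# Print the OS/architecture recorded in the local image config.
# On an arm64 node both should say linux/arm64; in this case they come back as linux/amd64.
docker image inspect --format '{{.Os}}/{{.Architecture}}' k8s.gcr.io/coredns/coredns:v1.8.6
docker image inspect --format '{{.Os}}/{{.Architecture}}' k8s.gcr.io/cpa/cluster-proportional-autoscaler-arm64:1.8.5
```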
Fourth, I filed issues against coredns and cluster-proportional-autoscaler: https://github.com/coredns/coredns/issues/5507 and https://github.com/kubernetes-sigs/cluster-proportional-autoscaler/issues/123.
In the meantime, I hope kubespray can point arm64 platforms at image tags that are actually built for arm64.
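Until the upstream images are fixed, the image sources can also be overridden in the inventory. The sketch below only illustrates the mechanism: the variable names are the download defaults in release-2.19 as far as I can tell (please verify them against roles/kubespray-defaults), and the replacement repo/tag values are placeholders for whichever images really carry arm64 builds.

```yaml
# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml  (illustrative path)
# coredns on Docker Hub publishes a multi-arch manifest that includes arm64.
coredns_image_repo: "docker.io/coredns/coredns"
coredns_image_tag: "1.8.6"
# Placeholder: point these at a repository/tag with a genuine arm64 build
# once one is available upstream.
dnsautoscaler_image_repo: "<registry>/<cluster-proportional-autoscaler-with-arm64>"
dnsautoscaler_image_tag: "<arm64-tag>"
```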
Anything else do we need to know:
I can confirm this issue on arm64 OCI instances as well. Deployment fails on "Initialize first master", and systemctl shows that kubelet is restarting over and over again. Listing the images with nerdctl images shows that the autoscaler image labeled -arm64 is actually built for amd64. By the way, the previous version, k8s.gcr.io/cpa/cluster-proportional-autoscaler-arm64:1.8.4, is also built for amd64 (see the registry check sketched below).
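The same conclusion can be reached without pulling anything, by inspecting the tags in the registry. A sketch, assuming skopeo is available (any manifest-inspection tool would do):

```shell
# Query the image config straight from the registry; despite the -arm64 name,
# both tags are expected to report "amd64" here, matching what nerdctl shows on the node.
skopeo inspect docker://k8s.gcr.io/cpa/cluster-proportional-autoscaler-arm64:1.8.5 | grep -i architecture
skopeo inspect docker://k8s.gcr.io/cpa/cluster-proportional-autoscaler-arm64:1.8.4 | grep -i architecture
```

The full output of the failing task on this node follows.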
TASK [kubernetes/control-plane : kubeadm | Initialize first master] **************************************************************************************************************************
task path: /kubespray/roles/kubernetes/control-plane/tasks/kubeadm-setup.yml:152
fatal: [node-jakob]: FAILED! => {
"attempts": 3,
"changed": true,
"cmd": [
"timeout",
"-k",
"300s",
"300s",
"/usr/local/bin/kubeadm",
"init",
"--config=/etc/kubernetes/kubeadm-config.yaml",
"--ignore-preflight-errors=all",
"--skip-phases=addon/coredns",
"--upload-certs"
],
"delta": "0:01:57.188168",
"end": "2022-07-13 08:46:08.300888",
"failed_when_result": true,
"invocation": {
"module_args": {
"_raw_params": "timeout -k 300s 300s /usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=all --skip-phases=addon/coredns --upload-certs",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": false
}
},
"msg": "non-zero return code",
"rc": 1,
"start": "2022-07-13 08:44:11.112720",
"stderr": "W0713 08:44:11.140926 22556 common.go:83] your configuration file uses a deprecated API spec: \"kubeadm.k8s.io/v1beta2\". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.\nW0713 08:44:11.141832 22556 common.go:83] your configuration file uses a deprecated API spec: \"kubeadm.k8s.io/v1beta2\". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.\nW0713 08:44:11.143694 22556 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme \"unix\" to the \"criSocket\" with value \"/var/run/containerd/containerd.sock\". Please update your configuration!\nW0713 08:44:11.143731 22556 utils.go:69] The recommended value for \"clusterDNS\" in \"KubeletConfiguration\" is: [10.233.0.10]; the provided value is: [169.254.25.10]\n\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists\n\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists\n\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists\nerror execution phase wait-control-plane: couldn't initialize a Kubernetes cluster\nTo see the stack trace of this error execute with --v=5 or higher",
"stderr_lines": [
"W0713 08:44:11.140926 22556 common.go:83] your configuration file uses a deprecated API spec: \"kubeadm.k8s.io/v1beta2\". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.",
"W0713 08:44:11.141832 22556 common.go:83] your configuration file uses a deprecated API spec: \"kubeadm.k8s.io/v1beta2\". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.",
"W0713 08:44:11.143694 22556 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme \"unix\" to the \"criSocket\" with value \"/var/run/containerd/containerd.sock\". Please update your configuration!",
"W0713 08:44:11.143731 22556 utils.go:69] The recommended value for \"clusterDNS\" in \"KubeletConfiguration\" is: [10.233.0.10]; the provided value is: [169.254.25.10]",
"\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists",
"\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists",
"\t[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists",
"error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster",
"To see the stack trace of this error execute with --v=5 or higher"
],
"stdout": "[init] Using Kubernetes version: v1.24.1\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/etc/kubernetes/ssl\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] External etcd mode: Skipping etcd/ca certificate authority generation\n[certs] External etcd mode: Skipping etcd/server certificate generation\n[certs] External etcd mode: Skipping etcd/peer certificate generation\n[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation\n[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/admin.conf\"\n[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/kubelet.conf\"\n[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/controller-manager.conf\"\n[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/scheduler.conf\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 5m0s\n[kubelet-check] Initial timeout of 40s passed.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get \"http://localhost:10248/healthz\": dial tcp 127.0.0.1:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get \"http://localhost:10248/healthz\": dial tcp 127.0.0.1:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get \"http://localhost:10248/healthz\": dial tcp 127.0.0.1:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get \"http://localhost:10248/healthz\": dial tcp 127.0.0.1:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get \"http://localhost:10248/healthz\": dial tcp 127.0.0.1:10248: connect: connection refused.\n\nUnfortunately, an error has occurred:\n\ttimed out waiting for the condition\n\nThis error is likely caused by:\n\t- The kubelet is not running\n\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)\n\nIf you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:\n\t- 'systemctl status kubelet'\n\t- 'journalctl -xeu kubelet'\n\nAdditionally, a control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'",
"stdout_lines": [
"[init] Using Kubernetes version: v1.24.1",
"[preflight] Running pre-flight checks",
"[preflight] Pulling images required for setting up a Kubernetes cluster",
"[preflight] This might take a minute or two, depending on the speed of your internet connection",
"[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
"[certs] Using certificateDir folder \"/etc/kubernetes/ssl\"",
"[certs] Using existing ca certificate authority",
"[certs] Using existing apiserver certificate and key on disk",
"[certs] Using existing apiserver-kubelet-client certificate and key on disk",
"[certs] Using existing front-proxy-ca certificate authority",
"[certs] Using existing front-proxy-client certificate and key on disk",
"[certs] External etcd mode: Skipping etcd/ca certificate authority generation",
"[certs] External etcd mode: Skipping etcd/server certificate generation",
"[certs] External etcd mode: Skipping etcd/peer certificate generation",
"[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation",
"[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation",
"[certs] Using the existing \"sa\" key",
"[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
"[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/admin.conf\"",
"[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/kubelet.conf\"",
"[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/controller-manager.conf\"",
"[kubeconfig] Using existing kubeconfig file: \"/etc/kubernetes/scheduler.conf\"",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Starting the kubelet",
"[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
"[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
"[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
"[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 5m0s",
"[kubelet-check] Initial timeout of 40s passed.",
"[kubelet-check] It seems like the kubelet isn't running or healthy.",
"[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get \"http://localhost:10248/healthz\": dial tcp 127.0.0.1:10248: connect: connection refused.",
"[kubelet-check] It seems like the kubelet isn't running or healthy.",
"[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get \"http://localhost:10248/healthz\": dial tcp 127.0.0.1:10248: connect: connection refused.",
"[kubelet-check] It seems like the kubelet isn't running or healthy.",
"[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get \"http://localhost:10248/healthz\": dial tcp 127.0.0.1:10248: connect: connection refused.",
"[kubelet-check] It seems like the kubelet isn't running or healthy.",
"[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get \"http://localhost:10248/healthz\": dial tcp 127.0.0.1:10248: connect: connection refused.",
"[kubelet-check] It seems like the kubelet isn't running or healthy.",
"[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get \"http://localhost:10248/healthz\": dial tcp 127.0.0.1:10248: connect: connection refused.",
"",
"Unfortunately, an error has occurred:",
"\ttimed out waiting for the condition",
"",
"This error is likely caused by:",
"\t- The kubelet is not running",
"\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)",
"",
"If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:",
"\t- 'systemctl status kubelet'",
"\t- 'journalctl -xeu kubelet'",
"",
"Additionally, a control plane component may have crashed or exited when started by the container runtime.",
"To troubleshoot, list all containers using your preferred container runtimes CLI.",
"Here is one example how you may list all running Kubernetes containers by using crictl:",
"\t- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'",
"\tOnce you have found the failing container, you can inspect its logs with:",
"\t- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'"
]
}
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.