
make components on control-plane nodes point to the local API server endpoint

Open neolit123 opened this issue 5 years ago • 61 comments

in CAPI immutable upgrades we saw a problem where a 1.19 joining node cannot bootstrap if a 1.19 KCM takes leadership and tries to send a CSR to a 1.18 API server on an existing Node. this happens because in 1.19 the CSR API graduated to v1, and a KCM is supposed to talk only to an N or N+1 API server.

a better explanation here: https://kubernetes.slack.com/archives/C8TSNPY4T/p1598907959059100?thread_ts=1598899864.038100&cid=C8TSNPY4T
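to make the skew constraint concrete, here is a minimal client-go sketch (not KCM or kubeadm code; the kubeconfig path is only an example) that asks the API server it is actually connected to whether certificates.k8s.io/v1 is served. a 1.18 API server behind the LB would not serve it, while the local 1.19 one would:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// example path; any kubeconfig that points at the server in question works
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/controller-manager.conf")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// a v1-only client (like the 1.19 KCM) cannot submit CSRs to a server that
	// does not serve certificates.k8s.io/v1
	if _, err := dc.ServerResourcesForGroupVersion("certificates.k8s.io/v1"); err != nil {
		fmt.Println("certificates.k8s.io/v1 is NOT served by this API server:", err)
		return
	}
	fmt.Println("certificates.k8s.io/v1 is served by this API server")
}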

  • [x] we should make the controller-manager.conf and scheduler.conf that kubeadm generates talk to the local API server and not to the controlPlaneEndpoint (CPE, e.g. LB). PR for 1.20: https://github.com/kubernetes/kubernetes/pull/94398 PR for 1.19: https://github.com/kubernetes/kubernetes/pull/94442
  • [x] relax the server URL validation in kubeconfig files: https://github.com/kubernetes/kubeadm/issues/2271#issuecomment-690822335 https://github.com/kubernetes/kubernetes/pull/94816

optionally we should see if we can make the kubelet on control-plane Nodes bootstrap via the local API server instead of using the CPE. this might be a bit tricky and needs investigation. we could at least post-fix the kubelet.conf to point to the local API server after the bootstrap has finished. see https://github.com/kubernetes/kubernetes/issues/80774 for a related discussion
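as a rough illustration of the "post-fix the kubelet.conf" idea, a minimal sketch using client-go's clientcmd (the endpoint value is an example and the restart step is left out; this is not kubeadm's actual implementation):

package main

import "k8s.io/client-go/tools/clientcmd"

func main() {
	const kubeletConf = "/etc/kubernetes/kubelet.conf"
	const localEndpoint = "https://192.168.0.108:6443" // example local API server endpoint

	cfg, err := clientcmd.LoadFromFile(kubeletConf)
	if err != nil {
		panic(err)
	}
	// rewrite every cluster entry from the controlPlaneEndpoint to the local endpoint
	for name := range cfg.Clusters {
		cfg.Clusters[name].Server = localEndpoint
	}
	if err := clientcmd.WriteToFile(*cfg, kubeletConf); err != nil {
		panic(err)
	}
	// the kubelet must be restarted afterwards to pick up the new server URL
}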

this change requires a more detailed plan, a feature gate and a KEP

list of PRs tracked here:

  • https://github.com/kubernetes/enhancements/issues/4471

1.31 alpha

1.33 beta

1.xx GA - TODO

neolit123 avatar Aug 31 '20 21:08 neolit123

first PR is here: https://github.com/kubernetes/kubernetes/pull/94398

neolit123 avatar Sep 01 '20 15:09 neolit123

we spoke about the kubelet.conf in the office hours today:

  • Pointing the kubelet to the local api server should work, but the kubelet-start phase has to happen after the control-plane manifests are written on disk for CP nodes.
  • Requires phase reorder and we are considering using a feature gate.
  • This avoids skew problems of a new kubelet trying to bootstrap against an old api-server.
  • One less component to point to the CPE.

i'm going to experiment and see how it goes, but this cannot be backported to older releases as it is a breaking change to phase users.

neolit123 avatar Sep 02 '20 17:09 neolit123

This breaks things: the controlPlaneEndpoint may be a domain, and if it is a domain, it will not work correctly after your change.

zhangguanzhang avatar Sep 10 '20 07:09 zhangguanzhang

This breaks things: the controlPlaneEndpoint may be a domain, and if it is a domain, it will not work correctly after your change.

can you clarify with examples?

neolit123 avatar Sep 10 '20 13:09 neolit123

@jdef added a note that some comments were left invalid after the recent change: https://github.com/kubernetes/kubernetes/pull/94398/files/d9441906c4155173ce1a75421d8fcd1d2f79c471#r486252360

this should be fixed in master.

neolit123 avatar Sep 10 '20 13:09 neolit123

someone else added a comment on https://github.com/kubernetes/kubernetes/pull/94398 but later deleted it:

when using the method CreateJoinControlPlaneKubeConfigFiles with a controlPlaneEndpoint like apiserver.cluster.local to generate the config files, and then running kubeadm init --config=/root/kubeadm-config.yaml --upload-certs -v 5,
the error occurs like:

I0910 15:15:54.436430   52511 kubeconfig.go:84] creating kubeconfig file for controller-manager.conf
currentConfig.Clusters[currentCluster].Server:  https://apiserver.cluster.local:6443 
config.Clusters[expectedCluster].Server:  https://192.168.160.243:6443
a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has got the wrong API Server URL

this validation should be turned into a warning instead of an error. then, if the components don't point to a valid API server, they would fail and the user would know.
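roughly, the relaxed check could look like this (plain sketch, not the exact kubeadm code):

package main

import "fmt"

// validateServerURL warns instead of failing when the server URL in an existing
// kubeconfig file does not match the endpoint kubeadm expects for that component.
func validateServerURL(file, currentServer, expectedServer string) error {
	if currentServer != expectedServer {
		// previously this returned an error:
		//   "a kubeconfig file %q exists already but has got the wrong API Server URL"
		fmt.Printf("WARNING: kubeconfig file %q uses API server %q instead of the expected %q\n",
			file, currentServer, expectedServer)
	}
	return nil
}

func main() {
	_ = validateServerURL("/etc/kubernetes/controller-manager.conf",
		"https://apiserver.cluster.local:6443", "https://192.168.160.243:6443")
}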

neolit123 avatar Sep 10 '20 13:09 neolit123

This breaks things: the controlPlaneEndpoint may be a domain, and if it is a domain, it will not work correctly after your change.

can you clarify with examples?

you could see this doc https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#steps-for-the-first-control-plane-node

--control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"

zhangguanzhang avatar Sep 10 '20 13:09 zhangguanzhang

i do know about that doc. are you saying that using "DNS-name:port" is completely broken now for you? what error output are you seeing? i did test this during my work on the changes and it worked fine.

neolit123 avatar Sep 10 '20 13:09 neolit123

someone else added a comment on kubernetes/kubernetes#94398 but later deleted it:

when using the method CreateJoinControlPlaneKubeConfigFiles with a controlPlaneEndpoint like apiserver.cluster.local to generate the config files, and then running kubeadm init --config=/root/kubeadm-config.yaml --upload-certs -v 5,
the error occurs like:

I0910 15:15:54.436430   52511 kubeconfig.go:84] creating kubeconfig file for controller-manager.conf
currentConfig.Clusters[currentCluster].Server:  https://apiserver.cluster.local:6443 
config.Clusters[expectedCluster].Server:  https://192.168.160.243:6443
a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has got the wrong API Server URL

this validation should be turned into a warning instead of an error. then, if the components don't point to a valid API server, they would fail and the user would know.

yes, please. this just bit us when testing a workaround in a pre-1.19.1 cluster, whereby we tried manually updating clusters[].cluster.server in scheduler.conf and controller-manager.conf to point to localhost instead of the official control plane endpoint.

jdef avatar Sep 10 '20 13:09 jdef

i do know about that doc. are you saying that using "DNS-name:port" is completely broken now for you?

yes, if you want to deploy an HA cluster, it is best to set controlPlaneEndpoint to the LOAD_BALANCER_DNS instead of the load balancer IP

zhangguanzhang avatar Sep 10 '20 13:09 zhangguanzhang

yes, if you want to deploy an HA cluster, it is best to set controlPlaneEndpoint to the LOAD_BALANCER_DNS instead of the load balancer IP

what error are you getting?

neolit123 avatar Sep 10 '20 13:09 neolit123

yes, if you want to deploy an HA cluster, it is best to set controlPlaneEndpoint to the LOAD_BALANCER_DNS instead of the load balancer IP

what error are you getting?

I added some code to print a log; this is the error:

I0910 13:14:53.017570   21006 kubeconfig.go:84] creating kubeconfig file for controller-manager.conf
currentConfig.Clusters https://apiserver.cluster.local:6443 
config.Clusters:  https://192.168.160.243:6443
error execution phase kubeconfig/controller-manager: a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has got the wrong API Server URL

zhangguanzhang avatar Sep 10 '20 13:09 zhangguanzhang

ok, so you have the same error as the user reporting above.

we can fix this for 1.19.2

one workaround is:

  • start kubeadm "init" with kubeconfig files using the local endpoint (instead of control-plane-endpoint)
  • wait for init to finish
  • modify the kubeconfig files again
  • restart the kube-scheduler and kube-controller-manager (see the sketch below)
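the "modify the kubeconfig files again" step can use the same kind of rewrite shown earlier for kubelet.conf (applied to scheduler.conf and controller-manager.conf instead). for the restart step, a rough sketch of bouncing the two static pods by moving their manifests (default kubeadm paths, an arbitrary sleep, and it assumes /tmp is on the same filesystem):

package main

import (
	"os"
	"path/filepath"
	"time"
)

func main() {
	// moving a static pod manifest out of the manifests dir makes the kubelet stop
	// that pod; moving it back recreates it with the updated kubeconfig
	dir := "/etc/kubernetes/manifests"
	names := []string{"kube-scheduler.yaml", "kube-controller-manager.yaml"}

	for _, n := range names {
		if err := os.Rename(filepath.Join(dir, n), filepath.Join("/tmp", n)); err != nil {
			panic(err)
		}
	}
	time.Sleep(20 * time.Second) // give the kubelet time to tear the pods down
	for _, n := range names {
		if err := os.Rename(filepath.Join("/tmp", n), filepath.Join(dir, n)); err != nil {
			panic(err)
		}
	}
}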

neolit123 avatar Sep 10 '20 13:09 neolit123

Both kube-scheduler and kube-controller-manager can use either localhost or the load balancer to connect to kube-apiserver, but users cannot be forced to use localhost; a warning can be used instead of an error.

zhangguanzhang avatar Sep 10 '20 13:09 zhangguanzhang

@neolit123 I'm +1 to relax the checks on the address in the existing kubeconfig file. We can either remove the check or make it more flexible by checking if the address is either the CPE or the local API server endpoint (LAPI).
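for illustration, the "either CPE or LAPI" variant could be as simple as this (sketch only; the values are examples from this thread):

package main

import "fmt"

// acceptServerURL reports whether the server URL found in an existing kubeconfig
// file matches one of the endpoints that kubeadm could consider valid.
func acceptServerURL(found, controlPlaneEndpoint, localAPIEndpoint string) bool {
	return found == controlPlaneEndpoint || found == localAPIEndpoint
}

func main() {
	fmt.Println(acceptServerURL("https://apiserver.cluster.local:6443",
		"https://apiserver.cluster.local:6443", "https://192.168.160.243:6443"))
}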

fabriziopandini avatar Sep 10 '20 14:09 fabriziopandini

@neolit123 here is the example. i just edited the code to add a log print: https://github.com/neolit123/kubernetes/blob/d9441906c4155173ce1a75421d8fcd1d2f79c471/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go#L225

fmt.Println("currentConfig.Clusters[currentCluster].Server:", currentConfig.Clusters[currentCluster].Server, "\nconfig.Clusters[expectedCluster].Server: ", config.Clusters[expectedCluster].Server)

use the method CreateJoinControlPlaneKubeConfigFiles with controlPlaneEndpoint to generate the kube-scheduler and kube-controller-manager kubeconfig files. in this situation, controlPlaneEndpoint is set to LOAD_BALANCER_DNS:LOAD_BALANCER_PORT (it is best to use LOAD_BALANCER_DNS instead of the IP). then run kubeadm init with the same LOAD_BALANCER_DNS:LOAD_BALANCER_PORT. the result is:

./kubeadm  init  --control-plane-endpoint  apiserver.cluster.local:6443
W0911 09:36:17.922135   63517 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.1
[preflight] Running pre-flight checks
	[WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
currentConfig.Clusters[currentCluster].Server: https://apiserver.cluster.local:6443 
config.Clusters[expectedCluster].Server:  https://192.168.160.243:6443
error execution phase kubeconfig/controller-manager: a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has got the wrong API Server URL
To see the stack trace of this error execute with --v=5 or higher

oldthreefeng avatar Sep 11 '20 01:09 oldthreefeng

i will send the PR in the next couple of days. edit: https://github.com/kubernetes/kubernetes/pull/94816

neolit123 avatar Sep 14 '20 14:09 neolit123

fix for 1.19.2 is here: https://github.com/kubernetes/kubernetes/pull/94890

neolit123 avatar Sep 18 '20 12:09 neolit123

to further summarize what is happening: after the changes above, kubeadm will no longer error out if the server URL in custom-provided kubeconfig files does not match the expected one; it will only show a warning.

example:

  • you have something like foo:6443 in scheduler.conf
  • kubeadm expects scheduler.conf to point to e.g. 192.168.0.108:6443 (local api server endpoint)
  • kubeadm will show you a warning when reading the provided kubeconfig file.
  • this allows you to modify the topology of your control-plane components, but you need to make sure the components work after such a customization.

neolit123 avatar Sep 18 '20 13:09 neolit123

fix for 1.19.2 is here: kubernetes/kubernetes#94890

1.19.2 is already out. So this fix will target 1.19.3, yes?

jdef avatar Sep 18 '20 14:09 jdef

Indeed, they pushed it out 2 days ago. Should be out with 1.19.3 then.

neolit123 avatar Sep 18 '20 16:09 neolit123

@neolit123 This issue came up as I'm working on graduating the EndpointSlice API to GA (https://github.com/kubernetes/kubernetes/pull/96318). I'm trying to determine if it's safe to also upgrade consumers like kube-proxy or kube-controller-manager to also use the v1 API in the same release. If I'm understanding this issue correctly, making that change in upstream could potentially result in issues here when version skew exists. Do you think this will be resolved in time for the 1.20 release cycle?

robscott avatar Nov 09 '20 18:11 robscott

@robscott i will comment on https://github.com/kubernetes/kubernetes/pull/96318

neolit123 avatar Nov 09 '20 18:11 neolit123

/remove-kind bug
/kind feature design

neolit123 avatar Jan 27 '21 15:01 neolit123

re:

optionally we should see if we can make the kubelet on control-plane Nodes bootstrap via the local API server instead of using the CPE. this might be a bit tricky and needs investigation. we could at least post-fix the kubelet.conf to point to the local API server after the bootstrap has finished.

i experimented with this and couldn't get it to work under normal conditions with a patched kubeadm binary.

procedure:

  • create primary CP node

on the second CP node (note: phases are re-ordered here, compared to non-patched kubeadm):

  • prepare the static pods
  • add etcd member
  • write a bootstrap-kubelet.conf that points to local second node API server
  • start kubelet (starts static pods too)

TLS bootstrap fails and the kubelet reports a 400 and a:

Unexpected error when reading response body

Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: I0201 23:16:45.515999    1477 certificate_manager.go:412] Rotating certificates
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: I0201 23:16:45.519288    1477 request.go:1105] Request Body:
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000000  6b 38 73 00 0a 33 0a 16  63 65 72 74 69 66 69 63  |k8s..3..certific|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000010  61 74 65 73 2e 6b 38 73  2e 69 6f 2f 76 31 12 19  |ates.k8s.io/v1..|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000020  43 65 72 74 69 66 69 63  61 74 65 53 69 67 6e 69  |CertificateSigni|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000030  6e 67 52 65 71 75 65 73  74 12 b3 04 0a 16 0a 00  |ngRequest.......|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000040  12 04 63 73 72 2d 1a 00  22 00 2a 00 32 00 38 00  |..csr-..".*.2.8.|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000050  42 00 7a 00 12 96 04 0a  b0 03 2d 2d 2d 2d 2d 42  |B.z.......-----B|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000060  45 47 49 4e 20 43 45 52  54 49 46 49 43 41 54 45  |EGIN CERTIFICATE|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000070  20 52 45 51 55 45 53 54  2d 2d 2d 2d 2d 0a 4d 49  | REQUEST-----.MI|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000080  49 42 42 7a 43 42 72 67  49 42 41 44 42 4d 4d 52  |IBBzCBrgIBADBMMR|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000090  55 77 45 77 59 44 56 51  51 4b 45 77 78 7a 65 58  |UwEwYDVQQKEwxzeX|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 000000a0  4e 30 5a 57 30 36 62 6d  39 6b 5a 58 4d 78 4d 7a  |N0ZW06bm9kZXMxMz|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 000000b0  41 78 42 67 4e 56 42 41  4d 54 4b 6e 4e 35 0a 63  |AxBgNVBAMTKnN5.c|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 000000c0  33 52 6c 62 54 70 75 62  32 52 6c 4f 6d 74 70 62  |3RlbTpub2RlOmtpb|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 000000d0  6d 52 6c 63 69 31 79 5a  57 64 31 62 47 46 79 4c  |mRlci1yZWd1bGFyL|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 000000e0  57 4e 76 62 6e 52 79 62  32 77 74 63 47 78 68 62  |WNvbnRyb2wtcGxhb|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 000000f0  6d 55 74 4d 6a 42 5a 4d  42 4d 47 42 79 71 47 0a  |mUtMjBZMBMGByqG.|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000100  53 4d 34 39 41 67 45 47  43 43 71 47 53 4d 34 39  |SM49AgEGCCqGSM49|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000110  41 77 45 48 41 30 49 41  42 4a 79 42 30 56 53 70  |AwEHA0IABJyB0VSp|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000120  41 78 6e 57 45 50 2f 64  68 6d 76 4f 72 69 47 4c  |AxnWEP/dhmvOriGL|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000130  59 39 64 31 62 4e 69 70  72 46 77 63 4a 76 71 6e  |Y9d1bNiprFwcJvqn|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000140  0a 45 45 38 43 42 72 56  77 61 47 6f 34 34 66 61  |.EE8CBrVwaGo44fa|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000150  48 48 48 48 34 48 54 57  79 33 4b 42 65 62 31 70  |HHHH4HTWy3KBeb1p|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000160  35 6c 49 78 54 62 6a 62  6e 2f 2f 52 4d 32 69 53  |5lIxTbjbn//RM2iS|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000170  67 41 44 41 4b 42 67 67  71 68 6b 6a 4f 50 51 51  |gADAKBggqhkjOPQQ|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000180  44 0a 41 67 4e 49 41 44  42 46 41 69 45 41 31 55  |D.AgNIADBFAiEA1U|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000190  49 58 59 7a 76 6e 38 79  71 31 65 47 41 2f 66 46  |IXYzvn8yq1eGA/fF|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 000001a0  64 76 74 6c 2f 76 73 39  6d 66 62 62 65 35 31 54  |dvtl/vs9mfbbe51T|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 000001b0  71 45 58 48 76 32 76 2b  34 43 49 47 59 4c 59 35  |qEXHv2v+4CIGYLY5|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 000001c0  57 47 0a 4d 72 64 63 66  71 41 2f 58 43 75 67 6c  |WG.MrdcfqA/XCugl|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 000001d0  54 34 76 58 47 51 57 61  74 6f 54 74 56 4d 73 57  |T4vXGQWatoTtVMsW|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 000001e0  68 69 72 58 77 62 68 0a  2d 2d 2d 2d 2d 45 4e 44  |hirXwbh.-----END|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 000001f0  20 43 45 52 54 49 46 49  43 41 54 45 20 52 45 51  | CERTIFICATE REQ|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000200  55 45 53 54 2d 2d 2d 2d  2d 0a 12 00 1a 00 2a 11  |UEST-----.....*.|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000210  64 69 67 69 74 61 6c 20  73 69 67 6e 61 74 75 72  |digital signatur|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000220  65 2a 10 6b 65 79 20 65  6e 63 69 70 68 65 72 6d  |e*.key encipherm|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000230  65 6e 74 2a 0b 63 6c 69  65 6e 74 20 61 75 74 68  |ent*.client auth|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000240  3a 2b 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |:+kubernetes.io/|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000250  6b 75 62 65 2d 61 70 69  73 65 72 76 65 72 2d 63  |kube-apiserver-c|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000260  6c 69 65 6e 74 2d 6b 75  62 65 6c 65 74 1a 00 1a  |lient-kubelet...|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: 00000270  00 22 00                                          |.".|
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: I0201 23:16:45.519355    1477 round_trippers.go:425] curl -k -v -XPOST  -H "Accept: application/vnd.kubernetes.protobuf,application/json" -H "Content-Type: application/vnd.kubernetes.protobuf" -H "User-Agent: kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19" 'http://127.0.0.1:6443/apis/certificates.k8s.io/v1/certificatesigningrequests'
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: I0201 23:16:45.519720    1477 round_trippers.go:445] POST http://127.0.0.1:6443/apis/certificates.k8s.io/v1/certificatesigningrequests 400 Bad Request in 0 milliseconds
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: I0201 23:16:45.519728    1477 round_trippers.go:451] Response Headers:
Feb 01 23:16:45 kinder-regular-control-plane-2 kubelet[1477]: E0201 23:16:45.519830    1477 request.go:1011] Unexpected error when reading response body: read tcp 127.0.0.1:33144->127.0.0.1:6443: read: connection reset by peer

kubelet client certs are never written in /var/lib/kubelet/pki/.

alternatively, if the second CP node signs its own kubelet client certificates (since it has the ca.key) with rotation disabled, the Node object ends up being created properly, but this sort of defeats the bootstrap token method for joining CP nodes and means one can just join using the "--certificate-key" that fetches the CA.
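to make that trade-off concrete, a rough sketch of signing the kubelet client certificate locally with the cluster CA (this is not something kubeadm does; the node name and output paths are examples, and the kubelet's own rotation normally stores a combined kubelet-client-current.pem instead of separate files):

package main

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"

	"k8s.io/client-go/util/keyutil"
)

func main() {
	nodeName := "kinder-regular-control-plane-2" // example node name

	// load the cluster CA from the default kubeadm locations
	caPEM, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	if caBlock == nil {
		panic("no PEM data in ca.crt")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := keyutil.PrivateKeyFromFile("/etc/kubernetes/pki/ca.key")
	if err != nil {
		panic(err)
	}

	// new key pair for the kubelet client certificate
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// the subject the kubelet would normally get via TLS bootstrap
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject: pkix.Name{
			CommonName:   "system:node:" + nodeName,
			Organization: []string{"system:nodes"},
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().AddDate(1, 0, 0),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey.(crypto.Signer))
	if err != nil {
		panic(err)
	}

	// write out the cert and key (illustrative paths)
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyDER, _ := x509.MarshalECPrivateKey(key)
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})
	_ = os.WriteFile("/var/lib/kubelet/pki/kubelet-client.crt", certPEM, 0600)
	_ = os.WriteFile("/var/lib/kubelet/pki/kubelet-client.key", keyPEM, 0600)
}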

the static pods on the second node are running fine. etcd cluster looks healthy. i do not see anything interesting in the server and KCM logs, but i wonder if this is somehow due to leader election.

neolit123 avatar Feb 01 '21 23:02 neolit123

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar May 09 '21 19:05 fejta-bot

/remove-lifecycle stale

fabriziopandini avatar May 10 '21 12:05 fabriziopandini

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 16 '21 20:08 k8s-triage-robot

from my POV the last item in the TODOs here is not easily doable; see the summary above. if someone wants to investigate this further, please go ahead.

neolit123 avatar Aug 16 '21 20:08 neolit123

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 14 '21 21:11 k8s-triage-robot