kops
[Hetzner] Generates the servers but does not set up the cluster
/kind bug
1. What kops version are you running? The command kops version will display this information.
Client version: 1.27.0
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
Client Version: v1.27.4
3. What cloud provider are you using? Hetzner
4. What commands did you run? What is the simplest way to reproduce this issue?
export KOPS_STATE_STORE=s3://XXXXXXXXXXXXX
export HCLOUD_TOKEN=XXXXXXXXXXXXX
kops create cluster --name=test.example.k8s.local \
--ssh-public-key=~/.ssh/hetzner.pub --cloud=hetzner --zones=fsn1 \
--image=ubuntu-20.04 --networking=calico --network-cidr=10.10.0.0/16 --kubernetes-version 1.26.7
kops update cluster --name test.example.k8s.local --yes --admin
5. What happened after the commands executed?
I0820 19:11:37.839489 4034 executor.go:111] Tasks: 0 done / 47 total; 38 can run
W0820 19:11:38.145388 4034 vfs_keystorereader.go:143] CA private key was not found
I0820 19:11:38.181567 4034 keypair.go:226] Issuing new certificate: "etcd-manager-ca-main"
I0820 19:11:38.181938 4034 keypair.go:226] Issuing new certificate: "apiserver-aggregator-ca"
I0820 19:11:38.187196 4034 keypair.go:226] Issuing new certificate: "etcd-manager-ca-events"
I0820 19:11:38.193928 4034 keypair.go:226] Issuing new certificate: "etcd-peers-ca-events"
I0820 19:11:38.215820 4034 keypair.go:226] Issuing new certificate: "etcd-clients-ca"
I0820 19:11:38.218170 4034 keypair.go:226] Issuing new certificate: "etcd-peers-ca-main"
W0820 19:11:38.225427 4034 vfs_keystorereader.go:143] CA private key was not found
I0820 19:11:38.264225 4034 keypair.go:226] Issuing new certificate: "kubernetes-ca"
I0820 19:11:38.274655 4034 keypair.go:226] Issuing new certificate: "service-account"
I0820 19:11:39.038562 4034 executor.go:111] Tasks: 38 done / 47 total; 3 can run
I0820 19:11:40.312737 4034 executor.go:111] Tasks: 41 done / 47 total; 2 can run
I0820 19:11:40.787769 4034 executor.go:111] Tasks: 43 done / 47 total; 4 can run
I0820 19:11:41.810296 4034 executor.go:111] Tasks: 47 done / 47 total; 0 can run
I0820 19:11:41.834529 4034 update_cluster.go:323] Exporting kubeconfig for cluster
kOps has set your kubectl context to test.example.k8s.local
Cluster is starting. It should be ready in a few minutes.
Suggestions:
- validate cluster: kops validate cluster --wait 10m
- list nodes: kubectl get nodes --show-labels
- ssh to a control-plane node: ssh -i ~/.ssh/id_rsa ubuntu@
- the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
- read about installing addons at: https://kops.sigs.k8s.io/addons.
The resources are created in Hetzner: a control-plane server, a worker node, two unattached etcd volumes, a load balancer pointing to the control plane (but unhealthy), a network, and two firewalls (one for the nodes and one for the control plane). However, the cluster itself never comes up.
6. What did you expect to happen? Create a Kubernetes cluster
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2023-08-20T17:35:49Z"
  name: test.example.k8s.local
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: hetzner
  configBase: s3://XXXXXXXXXXXXX/test.example.k8s.local
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: control-plane-fsn1
      name: etcd-1
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: control-plane-fsn1
      name: etcd-1
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubernetesVersion: 1.26.7
  networkCIDR: 10.10.0.0/16
  networking:
    calico: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - name: fsn1
    type: Public
    zone: fsn1
  topology:
    dns:
      type: None
    masters: public
    nodes: public
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-08-20T17:35:49Z"
  labels:
    kops.k8s.io/cluster: test.example.k8s.local
  name: control-plane-fsn1
spec:
  image: ubuntu-20.04
  machineType: cx21
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - fsn1
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-08-20T17:35:50Z"
  labels:
    kops.k8s.io/cluster: test.example.k8s.local
  name: nodes-fsn1
spec:
  image: ubuntu-20.04
  machineType: cx21
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - fsn1
8. Please run the commands with the most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
https://gist.github.com/kespineira/33a1f984674ef86baa92db87fb7c4f77
9. Anything else we need to know? No. Thanks in advance.
Hi @kespineira. Thanks for reporting this.
Could you try to SSH to the control-plane server and check the kops-configuration and kubelet service logs with journalctl? Also, please check the etcd logs in /var/log.
These should give you a general idea about why the cluster fails to start.
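For reference, roughly the commands meant here (the SSH user, key path, and log file names are illustrative and may differ on your image):
ssh -i ~/.ssh/hetzner root@<control-plane-ip>   # or ubuntu@..., depending on the image
journalctl -u kops-configuration --no-pager
journalctl -u kubelet --no-pager
tail -n 100 /var/log/etcd.log /var/log/etcd-events.log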
Thank you! I have checked, and in journalctl I can see that the kops-configuration service is giving the following error:
Aug 21 15:45:45 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: [395B blob data]
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: == Downloaded https://artifacts.k8s.io/binaries/kops/1.27.0/linux/amd64/nodeup (SHA256 = a647162753326c69ab48df390b71bd6c8a4eec2616ef8d269021b486212ac36d) ==
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: Running nodeup
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: nodeup version 1.27.0 (git-v1.27.0)
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.735523 968 install.go:178] Built service manifest "kops-configuration.service"
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: [Unit]
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: Description=Run kOps bootstrap (nodeup)
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: Documentation=https://github.com/kubernetes/kops
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: [Service]
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: EnvironmentFile=/etc/sysconfig/kops-configuration
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: EnvironmentFile=/etc/environment
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: ExecStart=/opt/kops/bin/nodeup --conf=/opt/kops/conf/kube_env.yaml --v=8
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: Type=oneshot
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: [Install]
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: WantedBy=multi-user.target
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.735655 968 topological_sort.go:79] Dependencies:
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.735663 968 topological_sort.go:81] InstallFile//etc/sysconfig/kops-configuration: []
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.735676 968 topological_sort.go:81] InstallService/kops-configuration.service: [InstallFile//etc/sysconfig/kops-configuration]
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.747601 968 executor.go:111] Tasks: 0 done / 2 total; 1 can run
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.747890 968 executor.go:192] Executing task "InstallFile//etc/sysconfig/kops-configuration": File: "/etc/sysconfig/kops-configuration"
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.749003 968 files.go:57] Writing file "/etc/sysconfig/kops-configuration"
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.749165 968 files.go:113] Changing file mode for "/etc/sysconfig/kops-configuration" to -rw-r--r--
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.749223 968 executor.go:111] Tasks: 1 done / 2 total; 1 can run
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.749275 968 executor.go:192] Executing task "InstallService/kops-configuration.service": Service: kops-configuration.service
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.749556 968 changes.go:81] Field changed "Service" actual="{kops-configuration.service <nil> 0xc0004e3218 <nil> <nil> <nil>}" expected="{kops-configuration.service 0xc0005417a0 0xc000433a47 0xc>
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.749808 968 files.go:57] Writing file "/lib/systemd/system/kops-configuration.service"
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.749955 968 files.go:113] Changing file mode for "/lib/systemd/system/kops-configuration.service" to -rw-r--r--
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:46.750012 968 service.go:332] Reloading systemd configuration
Aug 21 15:45:46 control-plane-fsn1-8bc92ff1dbffff8 systemd[1]: Reloading.
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 cloud-init[818]: I0821 15:45:47.193433 968 service.go:395] Restarting service "kops-configuration.service"
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 systemd[1]: Starting Run kOps bootstrap (nodeup)...
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: nodeup version 1.27.0 (git-v1.27.0)
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: I0821 15:45:47.257370 1004 s3context.go:338] product_uuid is "5a95058a-bdcb-4df8-aee5-aee7d59b1fde", assuming not running on EC2
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: I0821 15:45:47.257399 1004 s3context.go:175] defaulting region to "us-east-1"
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: 2023/08/21 15:45:47 WARN: failed to get session token, falling back to IMDSv1: 404 Not Found: Not Found
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: status code: 404, request id:
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: caused by: EC2MetadataError: failed to make EC2Metadata request
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: Not Found
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: status code: 404, request id:
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: I0821 15:45:47.263851 1004 s3context.go:192] unable to get bucket location from region "us-east-1"; scanning all regions: NoCredentialProviders: no valid providers in chain
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: caused by: EnvAccessKeyNotFound: failed to find credentials in the environment.
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: SharedCredsLoad: failed to load profile, .
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: EC2RoleRequestError: no EC2 instance role found
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: caused by: EC2MetadataError: failed to make EC2Metadata request
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: Not Found
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: status code: 404, request id:
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: 2023/08/21 15:45:47 WARN: failed to get session token, falling back to IMDSv1: 404 Not Found: Not Found
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: status code: 404, request id:
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: caused by: EC2MetadataError: failed to make EC2Metadata request
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: Not Found
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: status code: 404, request id:
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: W0821 15:45:47.265239 1004 main.go:133] got error running nodeup (will retry in 30s): error loading Cluster "s3://XXXXXXXXXXXXXXXXX/test.example.k8s.local/cluster-completed.spec": Unable to list AWS regions: NoCr>
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: caused by: EnvAccessKeyNotFound: failed to find credentials in the environment.
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: SharedCredsLoad: failed to load profile, .
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: EC2RoleRequestError: no EC2 instance role found
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: caused by: EC2MetadataError: failed to make EC2Metadata request
Aug 21 15:45:47 control-plane-fsn1-8bc92ff1dbffff8 nodeup[1004]: Not Found
From what I can see, the problem lies in the access credentials for the S3 bucket, so I tried to create the cluster again with the following variables:
export S3_ACCESS_KEY_ID=************
export S3_SECRET_ACCESS_KEY=************
export S3_REGION=eu-west-3
The same problem keeps happening. I have checked the file /etc/sysconfig/kops-configuration and it only contains:
HCLOUD_TOKEN=************
You will only get these on the control-plane nodes. Are you starting from scratch with the cluster, or trying to fix the existing one? I would recommend creating a new cluster and checking the env vars before doing this.
I have created a new cluster
I don't see the S3_ENDPOINT env var:
https://docs.aws.amazon.com/general/latest/gr/s3.html
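For example, something like this before running kops create cluster / kops update cluster again (placeholder values; the per-region endpoints are listed on the AWS page above, e.g. s3.eu-west-3.amazonaws.com for eu-west-3):
export KOPS_STATE_STORE=s3://<bucket-name>
export HCLOUD_TOKEN=<hetzner-api-token>
export S3_ENDPOINT=s3.eu-west-3.amazonaws.com
export S3_REGION=eu-west-3
export S3_ACCESS_KEY_ID=<access-key-id>
export S3_SECRET_ACCESS_KEY=<secret-access-key>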
Thanks, mate! I have managed to get the cluster up. But when the node tries to download the kubelet, it gets a 403 error.
Aug 21 17:13:39 nodes-fsn1-7da1973e6fe7ac1a nodeup[877]: I0821 17:13:39.646882 877 http.go:82] Downloading "https://dl.k8s.io/release/v1.26.7/bin/linux/amd64/kubelet"
Aug 21 17:13:39 nodes-fsn1-7da1973e6fe7ac1a nodeup[877]: W0821 17:13:39.786156 877 assetstore.go:251] error downloading url "https://dl.k8s.io/release/v1.26.7/bin/linux/amd64/kubelet": error response from "https://dl.k8s.io/release/v1.26.7/bin/linux/amd64/kubelet": HTTP 403
I tried downloading it myself with wget from the node and got a 403 from the server, but from my local machine it works with no problem.
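For reference, a quick way to compare what the node and a local machine see (illustrative; the URL is the one from the log above):
curl -sIL https://dl.k8s.io/release/v1.26.7/bin/linux/amd64/kubelet | grep HTTP   # run on the node and locally, compare the status codes along the redirect chain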
It is probably a problem with the mirror you are hitting from the node. Maybe try with k8s 1.27.4.
I think it is blocking the request because of the IP it comes from. I just tried a new cluster with version 1.27.4 and the same thing happens: error 403.
What is the output for the following commands? It may be something on Hetzner's side or something related to the dl.k8s.io mirror.
dig dl.k8s.io
tracepath dl.k8s.io
dig dl.k8s.io
; <<>> DiG 9.16.1-Ubuntu <<>> dl.k8s.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61556
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;dl.k8s.io. IN A
;; ANSWER SECTION:
dl.k8s.io. 2166 IN CNAME redirect.k8s.io.
redirect.k8s.io. 2142 IN A 34.107.204.206
;; Query time: 4 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Mon Aug 21 19:45:33 UTC 2023
;; MSG SIZE rcvd: 77
tracepath dl.k8s.io
1?: [LOCALHOST] pmtu 1500
1: _gateway 5.346ms
1: _gateway 3.322ms
2: 11019.your-cloud.host 1.626ms asymm 1
3: no reply
4: spine4.cloud2.fsn1.hetzner.com 2.309ms asymm 3
5: spine1.cloud2.fsn1.hetzner.com 55.037ms asymm 4
6: core24.fsn1.hetzner.com 1.169ms asymm 5
7: core1.fra.hetzner.com 5.862ms asymm 6
8: 72.14.218.94 8.354ms
9: 209.85.142.69 6.329ms asymm 11
10: 142.250.46.251 7.072ms asymm 11
11: 206.204.107.34.bc.googleusercontent.com 5.977ms reached
Resume: pmtu 1500 hops 11 back 13
Hi all, I have the same issue as @kespineira. I am testing in the Ashburn (USA) and Nuremberg (EU) zones, and some of the nodes are not able to download the kubelet (HTTP 403 error) and can't join the cluster. I am not sure whether the issue comes from kOps or the Hetzner API, because all the servers, volumes, LB, etc. are created.
error downloading url "https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubelet": error response from "https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubelet": HTTP 403
- Kops version: 1.27.0
- K8S version: 1.26.7 and 1.27.4
- Hetzner
- Commands
envs:
export KOPS_STATE_STORE=s3://s3-bucket
export HCLOUD_TOKEN=token
export S3_ENDPOINT=s3.region.amazonaws.com
export S3_REGION=aws-region
export S3_ACCESS_KEY_ID=access-key
export S3_SECRET_ACCESS_KEY=secret-access-key
kops-1-27-0 create cluster --name=hetzner.test.k8s.local \
  --ssh-public-key=/home/id_rsa.pub --cloud=hetzner --zones=ash \
  --image=ubuntu-22.04 --networking=calico --network-cidr=10.10.0.0/16 --state=s3://hetzner-k8s-name \
  --node-count 3 \
  --master-count 1 \
  --master-size cpx21 \
  --node-size cpx51 \
  --dns=none \
  --topology private \
  --kubernetes-version 1.27.4
root@control-plane-nbg1-2-:/# dig dl.k8s.io
; <<>> DiG 9.16.1-Ubuntu <<>> dl.k8s.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19310
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;dl.k8s.io. IN A
;; ANSWER SECTION:
dl.k8s.io. 182 IN CNAME redirect.k8s.io.
redirect.k8s.io. 3378 IN A 34.107.204.206
;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Tue Aug 22 09:34:11 UTC 2023
;; MSG SIZE rcvd: 77
root@control-plane-nbg1-2-:/# tracepath dl.k8s.io
1?: [LOCALHOST] pmtu 1500
1: _gateway 2.128ms
1: _gateway 1.145ms
2: 17427.your-cloud.host 1.060ms asymm 1
3: no reply
4: spine4.cloud1.nbg1.hetzner.com 57.696ms asymm 3
5: no reply
6: core12.nbg1.hetzner.com 1.425ms asymm 5
7: core5.fra.hetzner.com 3.841ms asymm 6
8: 72.14.218.176 3.806ms asymm 12
9: 209.85.142.109 4.667ms asymm 10
10: 142.250.234.17 4.614ms asymm 12
11: 206.204.107.34.bc.googleusercontent.com 3.567ms reached
Resume: pmtu 1500 hops 11 back 8
@kespineira @hakman Did either of you try to open a ticket with Hetzner? They may know more about why the k8s CDN blocks Hetzner subnets.
Hi @systemblox and @hakman! I finally opened a ticket with Hetzner, and they indicated that some GeoIP databases locate their IPs in Iran. The solution they gave me:
If this leads to issues for you, please create a Snapshot of the server with the incorrect IP location. Then create a new server with this Snapshot. You can then delete the server with the incorrect IP location.
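A rough sketch of that procedure with the hcloud CLI, assuming the hcloud tool is installed; the server names, type, and key below are placeholders, and whether this makes sense for a kOps-managed node is a separate question:
hcloud server create-image --type snapshot --description bad-ip-backup <old-server>
hcloud server create --name <new-server> --type cx21 --image <snapshot-id> --ssh-key <key-name>
hcloud server delete <old-server>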
Technically, creating a new cluster should give you some other random IP. I don't think there's anything important there yet. 😄
Yes, but they tend to reassign the IPs you have already used. So no matter how many times you delete the cluster and create a new one, some of the nodes receive the same problematic IP, or at least that's what has happened to me several times. :confounded:
kOps has a feature that lets you configure mirrors for container images and files: https://pkg.go.dev/k8s.io/kops/pkg/apis/kops#AssetsSpec
If you configure fileRepository and containerRegistry, you should be able to copy all the things you need there using:
kops get assets --copy
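A rough sketch of how that could look (the mirror endpoints below are placeholders, not something from this thread):
# In the cluster spec (e.g. via kops edit cluster):
#   spec:
#     assets:
#       containerRegistry: registry.example.com/kops
#       fileRepository: https://files.example.com/kops
# Then copy the required files and images to those mirrors and apply the change:
kops get assets --copy --name test.example.k8s.local
kops update cluster --name test.example.k8s.local --yes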
@kespineira yeah, same on my end. @hakman I'll try it, thank you.
Hi @hakman, I have tested it with other IP addresses and it works fine. Thanks a lot.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to the /close not-planned command above.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.