Intermittent Error syncing load balancer / not authorized to perform: ModifyLoadBalancerAttributes

gekart opened this issue 2 years ago • 5 comments

/kind bug

1. What kops version are you running? The command kops version will display this information. Client version: 1.28.0 (git-v1.28.0). Going back stepwise to 1.25: same issue.

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag. Client Version: v1.28.2, Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3, Server Version: v1.27.6. Going back stepwise from 1.28.2 to 1.25.x: same issue.

3. What cloud provider are you using? AWS

4. What commands did you run? What is the simplest way to reproduce this issue? Create a cluster, create a deployment, then expose it as a Service of type LoadBalancer (sketched below).
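
A minimal reproduction sketch, for reference (cluster and bucket names taken from the manifest below; the exact flags used are illustrative assumptions):

# create the cluster, then a deployment, then expose it
kops create cluster --name arku.k8s.local --state s3://arch-kube \
  --zones eu-central-1a,eu-central-1b,eu-central-1c --yes
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=LoadBalancer --port=80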

5. What happened after the commands executed? The ELB does not get created immediately (the Service stays in Pending for 3-5 minutes), then gets provisioned.

Service event log:

Events:
  Type     Reason                  Age                  From                Message
  ----     ------                  ----                 ----                -------
  Warning  SyncLoadBalancerFailed  2m50s                service-controller  Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-05de2ad533153e9a0 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/ac1c117b0d5324e07940053f04113c6d because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\n\tstatus code: 403, request id: e73589ad-142d-490a-bae0-ab95fcf47800"
  Warning  SyncLoadBalancerFailed  2m44s                service-controller  Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-05de2ad533153e9a0 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/ac1c117b0d5324e07940053f04113c6d because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\n\tstatus code: 403, request id: 51a79987-101c-49d2-9a9e-db2b2f2af781"
  Warning  SyncLoadBalancerFailed  2m34s                service-controller  Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-05de2ad533153e9a0 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/ac1c117b0d5324e07940053f04113c6d because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\n\tstatus code: 403, request id: 8f7cf2f1-da12-462a-b2e3-39c9cc8403fe"
  Warning  SyncLoadBalancerFailed  2m13s                service-controller  Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-05de2ad533153e9a0 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/ac1c117b0d5324e07940053f04113c6d because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\n\tstatus code: 403, request id: 65c32617-d75c-49fc-9ea7-4103f3e584dd"
  Warning  SyncLoadBalancerFailed  93s                  service-controller  Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-05de2ad533153e9a0 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/ac1c117b0d5324e07940053f04113c6d because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\n\tstatus code: 403, request id: 46e9b0b9-a4bd-4bc8-8d45-74ced130fc50"
  Normal   EnsuringLoadBalancer    13s (x6 over 2m52s)  service-controller  Ensuring load balancer
  Normal   EnsuredLoadBalancer     12s                  service-controller  Ensured load balancer

6. What did you expect to happen? The ELB gets provisioned within a few seconds.

7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2023-10-03T08:00:04Z"
  name: arku.k8s.local
spec:
  api:
    loadBalancer:
      class: Network
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://arch-kube/arku.k8s.local
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: control-plane-eu-central-1a
      name: a
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: control-plane-eu-central-1a
      name: a
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeProxy:
    enabled: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubernetesVersion: 1.27.6
  networkCIDR: 172.20.0.0/16
  networking:
    cilium:
      enableNodePort: true
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - cidr: 172.20.0.0/18
    name: eu-central-1a
    type: Public
    zone: eu-central-1a
  - cidr: 172.20.64.0/18
    name: eu-central-1b
    type: Public
    zone: eu-central-1b
  - cidr: 172.20.128.0/18
    name: eu-central-1c
    type: Public
    zone: eu-central-1c
  topology:
    dns:
      type: Private

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-10-03T08:00:04Z"
  labels:
    kops.k8s.io/cluster: arku.k8s.local
  name: control-plane-eu-central-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230919
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - eu-central-1a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-10-03T08:00:04Z"
  labels:
    kops.k8s.io/cluster: arku.k8s.local
  name: nodes-eu-central-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230919
  machineType: r5.large
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - eu-central-1a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-10-03T08:00:04Z"
  labels:
    kops.k8s.io/cluster: arku.k8s.local
  name: nodes-eu-central-1b
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230919
  machineType: r5.large
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - eu-central-1b

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-10-03T08:00:04Z"
  labels:
    kops.k8s.io/cluster: arku.k8s.local
  name: nodes-eu-central-1c
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230919
  machineType: r5.large
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - eu-central-1c

8. Please run the commands with the most verbose logging by adding the -v 10 flag. Paste the logs into this report, or into a gist and provide the gist link here.
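
(No verbose kops logs were attached; for reference, they could be gathered with a command along these lines, using the cluster name from the manifest above:)

kops update cluster arku.k8s.local -v 10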

9. Anything else we need to know? We have provisioned hundreds of clusters in the same AWS account for testing purposes. 1.19.x works flawlessly; the issue started when moving to k8s 1.25.x, IIRC.

Adding AmazonEC2FullAccess to the masters role fixes the issue...
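
For comparison, a narrower workaround than attaching AmazonEC2FullAccess might be to grant only the missing action through the cluster spec's additionalPolicies field — a sketch, untested against this cluster:

spec:
  additionalPolicies:
    # "master" targets the control-plane instance role in the v1alpha2 spec
    master: |
      [
        {
          "Effect": "Allow",
          "Action": ["elasticloadbalancing:ModifyLoadBalancerAttributes"],
          "Resource": ["*"]
        }
      ]

This keeps the extra grant scoped to the single ELB action named in the AccessDenied error instead of all of EC2.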

gekart avatar Oct 03 '23 08:10 gekart

@gekart Could you provide the relevant logs from AWS CCM?

hakman avatar Oct 03 '23 09:10 hakman

@hakman

k logs aws-cloud-controller-manager-jgkt7 -n kube-system 
I1003 09:58:04.952376       1 serving.go:348] Generated self-signed cert in-memory
I1003 09:58:05.746961       1 serving.go:348] Generated self-signed cert in-memory
W1003 09:58:05.747018       1 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1003 09:58:06.806701       1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
I1003 09:58:06.809298       1 aws.go:681] Loading region from metadata service
I1003 09:58:06.820214       1 aws.go:1341] Building AWS cloudprovider
I1003 09:58:06.820776       1 aws.go:681] Loading region from metadata service
I1003 09:58:07.001819       1 tags.go:77] AWS cloud filtering on ClusterID: arku.k8s.local
I1003 09:58:07.001841       1 aws.go:1431] The following IP families will be added to nodes: [ipv4]
I1003 09:58:07.001871       1 controllermanager.go:167] Version: v1.27.2
I1003 09:58:07.006523       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1696327084\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1696327084\" (2023-10-03 08:58:04 +0000 UTC to 2024-10-02 08:58:04 +0000 UTC (now=2023-10-03 09:58:07.00648319 +0000 UTC))"
I1003 09:58:07.006997       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1696327086\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1696327086\" (2023-10-03 08:58:05 +0000 UTC to 2024-10-02 08:58:05 +0000 UTC (now=2023-10-03 09:58:07.006978013 +0000 UTC))"
I1003 09:58:07.007026       1 secure_serving.go:210] Serving securely on [::]:10258
I1003 09:58:07.007293       1 leaderelection.go:245] attempting to acquire leader lease kube-system/cloud-controller-manager...
I1003 09:58:07.007589       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1003 09:58:07.007619       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
I1003 09:58:07.007647       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1003 09:58:07.007717       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1003 09:58:07.007725       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1003 09:58:07.007740       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1003 09:58:07.007746       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1003 09:58:07.047701       1 leaderelection.go:255] successfully acquired lease kube-system/cloud-controller-manager
I1003 09:58:07.051738       1 event.go:307] "Event occurred" object="kube-system/cloud-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="i-0aca4494e10347414_e6faee8c-5e21-461c-ae7e-19dc0bd0db81 became leader"
I1003 09:58:07.096355       1 aws.go:861] Setting up informers for Cloud
I1003 09:58:07.096402       1 controllermanager.go:317] Starting "route"
I1003 09:58:07.096412       1 core.go:104] Will not configure cloud provider routes, --configure-cloud-routes: false
W1003 09:58:07.096422       1 controllermanager.go:324] Skipping "route"
W1003 09:58:07.096427       1 controllermanager.go:313] "tagging" is disabled
I1003 09:58:07.096434       1 controllermanager.go:317] Starting "cloud-node"
I1003 09:58:07.100131       1 controllermanager.go:336] Started "cloud-node"
I1003 09:58:07.100280       1 controllermanager.go:317] Starting "cloud-node-lifecycle"
I1003 09:58:07.100457       1 node_controller.go:161] Sending events to api server.
I1003 09:58:07.100520       1 node_controller.go:170] Waiting for informer caches to sync
I1003 09:58:07.103664       1 controllermanager.go:336] Started "cloud-node-lifecycle"
I1003 09:58:07.103685       1 controllermanager.go:317] Starting "service"
I1003 09:58:07.103814       1 node_lifecycle_controller.go:113] Sending events to api server
I1003 09:58:07.116575       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1003 09:58:07.116630       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
I1003 09:58:07.117657       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1003 09:58:07.124163       1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"apiserver-aggregator-ca\" [] issuer=\"<self>\" (2023-10-01 09:53:55 +0000 UTC to 2033-09-30 09:53:55 +0000 UTC (now=2023-10-03 09:58:07.124114098 +0000 UTC))"
I1003 09:58:07.127055       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1696327084\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1696327084\" (2023-10-03 08:58:04 +0000 UTC to 2024-10-02 08:58:04 +0000 UTC (now=2023-10-03 09:58:07.127020245 +0000 UTC))"
I1003 09:58:07.127658       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1696327086\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1696327086\" (2023-10-03 08:58:05 +0000 UTC to 2024-10-02 08:58:05 +0000 UTC (now=2023-10-03 09:58:07.127630296 +0000 UTC))"
I1003 09:58:07.127831       1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubernetes-ca\" [] issuer=\"<self>\" (2023-10-01 09:53:55 +0000 UTC to 2033-09-30 09:53:55 +0000 UTC (now=2023-10-03 09:58:07.127806286 +0000 UTC))"
I1003 09:58:07.127863       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"apiserver-aggregator-ca\" [] issuer=\"<self>\" (2023-10-01 09:53:55 +0000 UTC to 2033-09-30 09:53:55 +0000 UTC (now=2023-10-03 09:58:07.127843629 +0000 UTC))"
I1003 09:58:07.134075       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1696327084\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1696327084\" (2023-10-03 08:58:04 +0000 UTC to 2024-10-02 08:58:04 +0000 UTC (now=2023-10-03 09:58:07.134040376 +0000 UTC))"
I1003 09:58:07.134680       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1696327086\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1696327086\" (2023-10-03 08:58:05 +0000 UTC to 2024-10-02 08:58:05 +0000 UTC (now=2023-10-03 09:58:07.134648381 +0000 UTC))"
I1003 09:58:07.136693       1 controllermanager.go:336] Started "service"
I1003 09:58:07.136796       1 controller.go:229] Starting service controller
I1003 09:58:07.136813       1 shared_informer.go:311] Waiting for caches to sync for service
I1003 09:58:07.201859       1 node_controller.go:427] Initializing node i-0aca4494e10347414 with cloud provider
I1003 09:58:07.202929       1 node_controller.go:263] Update 1 nodes status took 59.549µs.
I1003 09:58:07.237523       1 shared_informer.go:318] Caches are synced for service
I1003 09:58:07.237584       1 controller.go:695] Syncing backends for all LB services.
I1003 09:58:07.237600       1 controller.go:699] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I1003 09:58:07.495218       1 node_controller.go:536] Adding node label from cloud provider: beta.kubernetes.io/instance-type=t3.medium
I1003 09:58:07.495237       1 node_controller.go:537] Adding node label from cloud provider: node.kubernetes.io/instance-type=t3.medium
I1003 09:58:07.495243       1 node_controller.go:548] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=eu-central-1a
I1003 09:58:07.495248       1 node_controller.go:549] Adding node label from cloud provider: topology.kubernetes.io/zone=eu-central-1a
I1003 09:58:07.495254       1 node_controller.go:559] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=eu-central-1
I1003 09:58:07.495260       1 node_controller.go:560] Adding node label from cloud provider: topology.kubernetes.io/region=eu-central-1
I1003 09:58:07.530046       1 node_controller.go:496] Successfully initialized node i-0aca4494e10347414 with cloud provider
I1003 09:58:07.530203       1 event.go:307] "Event occurred" object="i-0aca4494e10347414" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
I1003 09:58:38.074505       1 controller.go:388] Ensuring load balancer for service default/nginx
I1003 09:58:38.074562       1 controller.go:885] Adding finalizer to service default/nginx
I1003 09:58:38.076689       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1003 09:58:38.116839       1 aws.go:4006] EnsureLoadBalancer(arku.k8s.local, default, nginx, eu-central-1, , [{ TCP <nil> 80 {0 80 } 31689}], map[])
I1003 09:58:38.117654       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
I1003 09:58:39.029978       1 aws.go:3179] Existing security group ingress: sg-033f1c288ceaa74be []
I1003 09:58:39.030269       1 aws.go:3210] Adding security group ingress: sg-033f1c288ceaa74be [{
  FromPort: 80,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 80
} {
  FromPort: 3,
  IpProtocol: "icmp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 4
}]
I1003 09:58:39.350428       1 aws_loadbalancer.go:1017] Creating load balancer for default/nginx with name: a117a0567880b48d8b3a1b8de701020c
I1003 09:58:40.145039       1 aws_loadbalancer.go:1220] Updating load-balancer attributes for "a117a0567880b48d8b3a1b8de701020c"
E1003 09:58:40.153058       1 controller.go:291] error processing service default/nginx (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-0aca4494e10347414 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/a117a0567880b48d8b3a1b8de701020c because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\n\tstatus code: 403, request id: 6f894a37-40c5-4139-92b8-05db37f51ce0"
I1003 09:58:40.154518       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-0aca4494e10347414 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/a117a0567880b48d8b3a1b8de701020c because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\\n\\tstatus code: 403, request id: 6f894a37-40c5-4139-92b8-05db37f51ce0\""
I1003 09:58:40.789258       1 controller.go:695] Syncing backends for all LB services.
I1003 09:58:40.789362       1 controller.go:774] Updating backends for load balancer default/nginx with node set: map[i-0808ef29413d76123:{}]
W1003 09:58:40.789451       1 instances.go:110] node "i-0808ef29413d76123" did not have ProviderID set
I1003 09:58:40.789797       1 node_controller.go:427] Initializing node i-0808ef29413d76123 with cloud provider
I1003 09:58:41.072332       1 controller.go:699] Successfully updated 1 out of 1 load balancers to direct traffic to the updated set of nodes
I1003 09:58:41.072421       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="UpdatedLoadBalancer" message="Updated load balancer with new hosts"
I1003 09:58:41.081429       1 controller.go:695] Syncing backends for all LB services.
I1003 09:58:41.084901       1 controller.go:774] Updating backends for load balancer default/nginx with node set: map[i-0808ef29413d76123:{} i-08d827fa9495e54b6:{}]
W1003 09:58:41.085144       1 instances.go:110] node "i-0808ef29413d76123" did not have ProviderID set
W1003 09:58:41.085403       1 instances.go:110] node "i-08d827fa9495e54b6" did not have ProviderID set
I1003 09:58:41.107493       1 node_controller.go:536] Adding node label from cloud provider: beta.kubernetes.io/instance-type=r5.large
I1003 09:58:41.107520       1 node_controller.go:537] Adding node label from cloud provider: node.kubernetes.io/instance-type=r5.large
I1003 09:58:41.107526       1 node_controller.go:548] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=eu-central-1c
I1003 09:58:41.107531       1 node_controller.go:549] Adding node label from cloud provider: topology.kubernetes.io/zone=eu-central-1c
I1003 09:58:41.107536       1 node_controller.go:559] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=eu-central-1
I1003 09:58:41.107542       1 node_controller.go:560] Adding node label from cloud provider: topology.kubernetes.io/region=eu-central-1
I1003 09:58:41.206425       1 node_controller.go:496] Successfully initialized node i-0808ef29413d76123 with cloud provider
I1003 09:58:41.206495       1 node_controller.go:427] Initializing node i-08d827fa9495e54b6 with cloud provider
I1003 09:58:41.207383       1 event.go:307] "Event occurred" object="i-0808ef29413d76123" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
I1003 09:58:41.305777       1 controller.go:699] Successfully updated 1 out of 1 load balancers to direct traffic to the updated set of nodes
I1003 09:58:41.308684       1 controller.go:695] Syncing backends for all LB services.
I1003 09:58:41.309003       1 controller.go:774] Updating backends for load balancer default/nginx with node set: map[i-0808ef29413d76123:{} i-08d827fa9495e54b6:{} i-09308fd07644ad49b:{}]
W1003 09:58:41.309172       1 instances.go:110] node "i-08d827fa9495e54b6" did not have ProviderID set
W1003 09:58:41.309296       1 instances.go:110] node "i-09308fd07644ad49b" did not have ProviderID set
I1003 09:58:41.309440       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="UpdatedLoadBalancer" message="Updated load balancer with new hosts"
I1003 09:58:41.379829       1 controller.go:699] Successfully updated 1 out of 1 load balancers to direct traffic to the updated set of nodes
I1003 09:58:41.380093       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="UpdatedLoadBalancer" message="Updated load balancer with new hosts"
I1003 09:58:41.543599       1 node_controller.go:536] Adding node label from cloud provider: beta.kubernetes.io/instance-type=r5.large
I1003 09:58:41.543780       1 node_controller.go:537] Adding node label from cloud provider: node.kubernetes.io/instance-type=r5.large
I1003 09:58:41.544100       1 node_controller.go:548] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=eu-central-1a
I1003 09:58:41.544196       1 node_controller.go:549] Adding node label from cloud provider: topology.kubernetes.io/zone=eu-central-1a
I1003 09:58:41.544451       1 node_controller.go:559] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=eu-central-1
I1003 09:58:41.544489       1 node_controller.go:560] Adding node label from cloud provider: topology.kubernetes.io/region=eu-central-1
I1003 09:58:41.593338       1 node_controller.go:496] Successfully initialized node i-08d827fa9495e54b6 with cloud provider
I1003 09:58:41.593465       1 node_controller.go:427] Initializing node i-09308fd07644ad49b with cloud provider
I1003 09:58:41.594101       1 event.go:307] "Event occurred" object="i-08d827fa9495e54b6" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
I1003 09:58:41.893274       1 node_controller.go:536] Adding node label from cloud provider: beta.kubernetes.io/instance-type=r5.large
I1003 09:58:41.894019       1 node_controller.go:537] Adding node label from cloud provider: node.kubernetes.io/instance-type=r5.large
I1003 09:58:41.894107       1 node_controller.go:548] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=eu-central-1b
I1003 09:58:41.894585       1 node_controller.go:549] Adding node label from cloud provider: topology.kubernetes.io/zone=eu-central-1b
I1003 09:58:41.894671       1 node_controller.go:559] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=eu-central-1
I1003 09:58:41.894738       1 node_controller.go:560] Adding node label from cloud provider: topology.kubernetes.io/region=eu-central-1
I1003 09:58:41.925067       1 node_controller.go:496] Successfully initialized node i-09308fd07644ad49b with cloud provider
I1003 09:58:41.927473       1 event.go:307] "Event occurred" object="i-09308fd07644ad49b" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="Synced" message="Node synced successfully"
I1003 09:58:45.154049       1 controller.go:388] Ensuring load balancer for service default/nginx
I1003 09:58:45.154351       1 aws.go:4006] EnsureLoadBalancer(arku.k8s.local, default, nginx, eu-central-1, , [{ TCP <nil> 80 {0 80 } 31689}], map[])
I1003 09:58:45.155259       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1003 09:58:45.453779       1 aws.go:3179] Existing security group ingress: sg-033f1c288ceaa74be [{
  FromPort: 80,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 80
} {
  FromPort: 3,
  IpProtocol: "icmp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 4
}]
I1003 09:58:45.517178       1 aws_loadbalancer.go:1193] Creating additional load balancer tags for a117a0567880b48d8b3a1b8de701020c
I1003 09:58:45.562404       1 aws_loadbalancer.go:1220] Updating load-balancer attributes for "a117a0567880b48d8b3a1b8de701020c"
E1003 09:58:45.570273       1 controller.go:291] error processing service default/nginx (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-0aca4494e10347414 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/a117a0567880b48d8b3a1b8de701020c because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\n\tstatus code: 403, request id: e13eb917-ea5d-4be3-932f-5e7793e78525"
I1003 09:58:45.570936       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-0aca4494e10347414 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/a117a0567880b48d8b3a1b8de701020c because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\\n\\tstatus code: 403, request id: e13eb917-ea5d-4be3-932f-5e7793e78525\""
I1003 09:58:55.571595       1 controller.go:388] Ensuring load balancer for service default/nginx
I1003 09:58:55.571670       1 aws.go:4006] EnsureLoadBalancer(arku.k8s.local, default, nginx, eu-central-1, , [{ TCP <nil> 80 {0 80 } 31689}], map[])
I1003 09:58:55.572523       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1003 09:58:55.955897       1 aws.go:3179] Existing security group ingress: sg-033f1c288ceaa74be [{
  FromPort: 80,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 80
} {
  FromPort: 3,
  IpProtocol: "icmp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 4
}]
I1003 09:58:56.008721       1 aws_loadbalancer.go:1193] Creating additional load balancer tags for a117a0567880b48d8b3a1b8de701020c
I1003 09:58:56.073408       1 aws_loadbalancer.go:1220] Updating load-balancer attributes for "a117a0567880b48d8b3a1b8de701020c"
E1003 09:58:56.081153       1 controller.go:291] error processing service default/nginx (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-0aca4494e10347414 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/a117a0567880b48d8b3a1b8de701020c because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\n\tstatus code: 403, request id: 0ebc3941-096c-4b3e-9674-d7704e1980df"
I1003 09:58:56.082034       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-0aca4494e10347414 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/a117a0567880b48d8b3a1b8de701020c because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\\n\\tstatus code: 403, request id: 0ebc3941-096c-4b3e-9674-d7704e1980df\""
I1003 09:59:16.082154       1 controller.go:388] Ensuring load balancer for service default/nginx
I1003 09:59:16.082724       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1003 09:59:16.082812       1 aws.go:4006] EnsureLoadBalancer(arku.k8s.local, default, nginx, eu-central-1, , [{ TCP <nil> 80 {0 80 } 31689}], map[])
I1003 09:59:16.253923       1 aws.go:3179] Existing security group ingress: sg-033f1c288ceaa74be [{
  FromPort: 80,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 80
} {
  FromPort: 3,
  IpProtocol: "icmp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 4
}]
I1003 09:59:16.319077       1 aws_loadbalancer.go:1193] Creating additional load balancer tags for a117a0567880b48d8b3a1b8de701020c
I1003 09:59:16.355719       1 aws_loadbalancer.go:1220] Updating load-balancer attributes for "a117a0567880b48d8b3a1b8de701020c"
E1003 09:59:16.361645       1 controller.go:291] error processing service default/nginx (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-0aca4494e10347414 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/a117a0567880b48d8b3a1b8de701020c because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\n\tstatus code: 403, request id: 4cf21285-6552-4c0f-b6c4-fc514d4589a6"
I1003 09:59:16.361956       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-0aca4494e10347414 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/a117a0567880b48d8b3a1b8de701020c because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\\n\\tstatus code: 403, request id: 4cf21285-6552-4c0f-b6c4-fc514d4589a6\""
I1003 09:59:56.362386       1 controller.go:388] Ensuring load balancer for service default/nginx
I1003 09:59:56.362656       1 aws.go:4006] EnsureLoadBalancer(arku.k8s.local, default, nginx, eu-central-1, , [{ TCP <nil> 80 {0 80 } 31689}], map[])
I1003 09:59:56.363031       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1003 09:59:56.760078       1 aws.go:3179] Existing security group ingress: sg-033f1c288ceaa74be [{
  FromPort: 80,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 80
} {
  FromPort: 3,
  IpProtocol: "icmp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 4
}]
I1003 09:59:56.839172       1 aws_loadbalancer.go:1193] Creating additional load balancer tags for a117a0567880b48d8b3a1b8de701020c
I1003 09:59:56.884799       1 aws_loadbalancer.go:1220] Updating load-balancer attributes for "a117a0567880b48d8b3a1b8de701020c"
E1003 09:59:56.893244       1 controller.go:291] error processing service default/nginx (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-0aca4494e10347414 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/a117a0567880b48d8b3a1b8de701020c because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\n\tstatus code: 403, request id: 6b500927-5cb0-41c8-a041-77471dd12bf3"
I1003 09:59:56.893486       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-0aca4494e10347414 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/a117a0567880b48d8b3a1b8de701020c because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\\n\\tstatus code: 403, request id: 6b500927-5cb0-41c8-a041-77471dd12bf3\""
I1003 10:01:16.893891       1 controller.go:388] Ensuring load balancer for service default/nginx
I1003 10:01:16.894170       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1003 10:01:16.894079       1 aws.go:4006] EnsureLoadBalancer(arku.k8s.local, default, nginx, eu-central-1, , [{ TCP <nil> 80 {0 80 } 31689}], map[])
I1003 10:01:17.143934       1 aws.go:3179] Existing security group ingress: sg-033f1c288ceaa74be [{
  FromPort: 80,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 80
} {
  FromPort: 3,
  IpProtocol: "icmp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 4
}]
I1003 10:01:17.210700       1 aws_loadbalancer.go:1193] Creating additional load balancer tags for a117a0567880b48d8b3a1b8de701020c
I1003 10:01:17.254483       1 aws_loadbalancer.go:1220] Updating load-balancer attributes for "a117a0567880b48d8b3a1b8de701020c"
E1003 10:01:17.260754       1 controller.go:291] error processing service default/nginx (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-0aca4494e10347414 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/a117a0567880b48d8b3a1b8de701020c because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\n\tstatus code: 403, request id: 26ddaca4-0736-42c5-9e0d-39459b6edfe0"
I1003 10:01:17.261087       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::435909871689:assumed-role/masters.arku.k8s.local/i-0aca4494e10347414 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:435909871689:loadbalancer/a117a0567880b48d8b3a1b8de701020c because no identity-based policy allows the elasticloadbalancing:ModifyLoadBalancerAttributes action\\n\\tstatus code: 403, request id: 26ddaca4-0736-42c5-9e0d-39459b6edfe0\""
I1003 10:03:07.662626       1 node_controller.go:263] Update 4 nodes status took 458.639193ms.
I1003 10:03:57.261924       1 controller.go:388] Ensuring load balancer for service default/nginx
I1003 10:03:57.262021       1 aws.go:4006] EnsureLoadBalancer(arku.k8s.local, default, nginx, eu-central-1, , [{ TCP <nil> 80 {0 80 } 31689}], map[])
I1003 10:03:57.262392       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1003 10:03:57.685586       1 aws.go:3179] Existing security group ingress: sg-033f1c288ceaa74be [{
  FromPort: 80,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 80
} {
  FromPort: 3,
  IpProtocol: "icmp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 4
}]
I1003 10:03:57.764800       1 aws_loadbalancer.go:1193] Creating additional load balancer tags for a117a0567880b48d8b3a1b8de701020c
I1003 10:03:57.803645       1 aws_loadbalancer.go:1220] Updating load-balancer attributes for "a117a0567880b48d8b3a1b8de701020c"
I1003 10:03:58.208209       1 aws.go:4628] Adding rule for traffic from the load balancer (sg-033f1c288ceaa74be) to instances (sg-097d0d9611f12889e)
I1003 10:03:58.265546       1 aws.go:3254] Existing security group ingress: sg-097d0d9611f12889e [{
  IpProtocol: "-1",
  UserIdGroupPairs: [{
      GroupId: "sg-00ad0c587f7116ba8",
      UserId: "435909871689"
    },{
      GroupId: "sg-097d0d9611f12889e",
      UserId: "435909871689"
    }]
} {
  FromPort: 22,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  Ipv6Ranges: [{
      CidrIpv6: "::/0"
    }],
  ToPort: 22
}]
I1003 10:03:58.265611       1 aws.go:3151] Comparing sg-033f1c288ceaa74be to sg-00ad0c587f7116ba8
I1003 10:03:58.265617       1 aws.go:3151] Comparing sg-033f1c288ceaa74be to sg-097d0d9611f12889e
I1003 10:03:58.265622       1 aws.go:3282] Adding security group ingress: sg-097d0d9611f12889e [{
  IpProtocol: "-1",
  UserIdGroupPairs: [{
      GroupId: "sg-033f1c288ceaa74be"
    }]
}]
I1003 10:03:58.739478       1 aws_loadbalancer.go:1469] Instances added to load-balancer a117a0567880b48d8b3a1b8de701020c
I1003 10:03:58.739527       1 aws.go:4394] Loadbalancer a117a0567880b48d8b3a1b8de701020c (default/nginx) has DNS name a117a0567880b48d8b3a1b8de701020c-11631799.eu-central-1.elb.amazonaws.com
I1003 10:03:58.739572       1 controller.go:926] Patching status for service default/nginx
I1003 10:03:58.740117       1 event.go:307] "Event occurred" object="default/nginx" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"

This time I was a bit quick in creating the service after provisioning a new cluster (before the worker nodes were up), so disregard any "no available nodes" messages.

gekart avatar Oct 03 '23 10:10 gekart

I should also add that the ELB is created immediately in AWS, but the Service is kept in Pending in Kubernetes.
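
One way to observe this split state while it lasts (load balancer name taken from the controller logs above; the exact commands are illustrative):

kubectl get service nginx                    # EXTERNAL-IP stays <pending>
aws elb describe-load-balancers \
  --load-balancer-names a117a0567880b48d8b3a1b8de701020c \
  --region eu-central-1                      # the classic ELB already exists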

gekart avatar Oct 03 '23 10:10 gekart

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 29 '24 14:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Feb 28 '24 15:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Mar 29 '24 15:03 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Mar 29 '24 15:03 k8s-ci-robot