vsphere-csi-driver
CSI fails with "The object or item referred to could not be found."
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened: while attaching, detaching, or deleting a CNS volume, the vSphere CSI driver is unable to do so and fails with:
2021-08-05T19:06:21.121Z INFO vanilla/controller.go:857 ControllerUnpublishVolume: called with args {VolumeId:fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5 NodeId:smops-1-md-0-7f4dd9d98f-kmglw.reftmdc.bn.schiff.telekom.de Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} {"TraceId": "5480013c-6351-4282-8511-f4a0eb31b96f"}
2021-08-05T19:06:21.265Z DEBUG node/manager.go:180 Renewing virtual machine VirtualMachine:vm-86435 [VirtualCenterHost: m4fxvmvsvvt1.oam-sa.mn.de.tmo, UUID: 42028e7e-0b17-9f67-84e4-db052edbad68, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-17499, VirtualCenterHost: m4fxvmvsvvt1.oam-sa.mn.de.tmo]] with nodeUUID "42028e7e-0b17-9f67-84e4-db052edbad68" {"TraceId": "5480013c-6351-4282-8511-f4a0eb31b96f"}
2021-08-05T19:06:21.274Z DEBUG node/manager.go:187 VM VirtualMachine:vm-86435 [VirtualCenterHost: m4fxvmvsvvt1.oam-sa.mn.de.tmo, UUID: 42028e7e-0b17-9f67-84e4-db052edbad68, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-17499, VirtualCenterHost: m4fxvmvsvvt1.oam-sa.mn.de.tmo]] was successfully renewed with nodeUUID "42028e7e-0b17-9f67-84e4-db052edbad68" {"TraceId": "5480013c-6351-4282-8511-f4a0eb31b96f"}
2021-08-05T19:06:21.274Z DEBUG common/vsphereutil.go:449 vSphere CSI driver is detaching volume: fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5 from node vm: {"TraceId": "5480013c-6351-4282-8511-f4a0eb31b96f"}
2021-08-05T19:06:21.480Z INFO volume/manager.go:438 VolumeID: "fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5", not found. Checking whether the volume is already detached {"TraceId": "5480013c-6351-4282-8511-f4a0eb31b96f"}
2021-08-05T19:06:21.489Z INFO volume/util.go:58 Found diskUUID 6000C29d-18eb-3865-5aec-efb8d2709dd7 for volume fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5 on vm VirtualMachine:vm-86435 [VirtualCenterHost: m4fxvmvsvvt1.oam-sa.mn.de.tmo, UUID: 42028e7e-0b17-9f67-84e4-db052edbad68, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-17499, VirtualCenterHost: m4fxvmvsvvt1.oam-sa.mn.de.tmo]] {"TraceId": "5480013c-6351-4282-8511-f4a0eb31b96f"}
2021-08-05T19:06:21.489Z ERROR volume/manager.go:451 failed to detach cns volume:"fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5" from node vm: VirtualMachine:vm-86435 [VirtualCenterHost: m4fxvmvsvvt1.oam-sa.mn.de.tmo, UUID: 42028e7e-0b17-9f67-84e4-db052edbad68, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-17499, VirtualCenterHost: m4fxvmvsvvt1.oam-sa.mn.de.tmo]]. err: ServerFaultCode: Received SOAP response fault from [<cs p:00007f71f8008570, TCP:localhost:443>]: retrieveVStorageObject
The object or item referred to could not be found. {"TraceId": "5480013c-6351-4282-8511-f4a0eb31b96f"}
sigs.k8s.io/vsphere-csi-driver/pkg/common/cns-lib/volume.(*defaultManager).DetachVolume.func1
/build/pkg/common/cns-lib/volume/manager.go:451
sigs.k8s.io/vsphere-csi-driver/pkg/common/cns-lib/volume.(*defaultManager).DetachVolume
/build/pkg/common/cns-lib/volume/manager.go:496
sigs.k8s.io/vsphere-csi-driver/pkg/csi/service/common.DetachVolumeUtil
/build/pkg/csi/service/common/vsphereutil.go:450
sigs.k8s.io/vsphere-csi-driver/pkg/csi/service/vanilla.(*controller).ControllerUnpublishVolume.func1
/build/pkg/csi/service/vanilla/controller.go:928
sigs.k8s.io/vsphere-csi-driver/pkg/csi/service/vanilla.(*controller).ControllerUnpublishVolume
/build/pkg/csi/service/vanilla/controller.go:937
github.com/container-storage-interface/spec/lib/go/csi._Controller_ControllerUnpublishVolume_Handler.func1
/go/pkg/mod/github.com/container-storage-interface/[email protected]/lib/go/csi/csi.pb.go:5200
github.com/rexray/gocsi/middleware/serialvolume.(*interceptor).controllerUnpublishVolume
/go/pkg/mod/github.com/rexray/[email protected]/middleware/serialvolume/serial_volume_locker.go:141
github.com/rexray/gocsi/middleware/serialvolume.(*interceptor).handle
/go/pkg/mod/github.com/rexray/[email protected]/middleware/serialvolume/serial_volume_locker.go:88
github.com/rexray/gocsi/utils.ChainUnaryServer.func2.1.1
/go/pkg/mod/github.com/rexray/[email protected]/utils/utils_middleware.go:99
github.com/rexray/gocsi/middleware/specvalidator.(*interceptor).handleServer.func1
/go/pkg/mod/github.com/rexray/[email protected]/middleware/specvalidator/spec_validator.go:178
github.com/rexray/gocsi/middleware/specvalidator.(*interceptor).handle
/go/pkg/mod/github.com/rexray/[email protected]/middleware/specvalidator/spec_validator.go:218
github.com/rexray/gocsi/middleware/specvalidator.(*interceptor).handleServer
/go/pkg/mod/github.com/rexray/[email protected]/middleware/specvalidator/spec_validator.go:177
github.com/rexray/gocsi/utils.ChainUnaryServer.func2.1.1
/go/pkg/mod/github.com/rexray/[email protected]/utils/utils_middleware.go:99
github.com/rexray/gocsi.(*StoragePlugin).injectContext
/go/pkg/mod/github.com/rexray/[email protected]/middleware.go:231
github.com/rexray/gocsi/utils.ChainUnaryServer.func2.1.1
/go/pkg/mod/github.com/rexray/[email protected]/utils/utils_middleware.go:99
github.com/rexray/gocsi/utils.ChainUnaryServer.func2
/go/pkg/mod/github.com/rexray/[email protected]/utils/utils_middleware.go:106
github.com/container-storage-interface/spec/lib/go/csi._Controller_ControllerUnpublishVolume_Handler
/go/pkg/mod/github.com/container-storage-interface/[email protected]/lib/go/csi/csi.pb.go:5202
google.golang.org/grpc.(*Server).processUnaryRPC
/go/pkg/mod/google.golang.org/[email protected]/server.go:1024
google.golang.org/grpc.(*Server).handleStream
/go/pkg/mod/google.golang.org/[email protected]/server.go:1313
google.golang.org/grpc.(*Server).serveStreams.func1.1
/go/pkg/mod/google.golang.org/[email protected]/server.go:722
2021-08-05T19:06:21.490Z ERROR common/vsphereutil.go:452 failed to detach disk fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5 with err failed to detach cns volume:"fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5" from node vm: VirtualMachine:vm-86435 [VirtualCenterHost: m4fxvmvsvvt1.oam-sa.mn.de.tmo, UUID: 42028e7e-0b17-9f67-84e4-db052edbad68, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-17499, VirtualCenterHost: m4fxvmvsvvt1.oam-sa.mn.de.tmo]]. err: ServerFaultCode: Received SOAP response fault from [<cs p:00007f71f8008570, TCP:localhost:443>]: retrieveVStorageObject
The object or item referred to could not be found. {"TraceId": "5480013c-6351-4282-8511-f4a0eb31b96f"}
sigs.k8s.io/vsphere-csi-driver/pkg/csi/service/common.DetachVolumeUtil
/build/pkg/csi/service/common/vsphereutil.go:452
sigs.k8s.io/vsphere-csi-driver/pkg/csi/service/vanilla.(*controller).ControllerUnpublishVolume.func1
/build/pkg/csi/service/vanilla/controller.go:928
sigs.k8s.io/vsphere-csi-driver/pkg/csi/service/vanilla.(*controller).ControllerUnpublishVolume
/build/pkg/csi/service/vanilla/controller.go:937
github.com/container-storage-interface/spec/lib/go/csi._Controller_ControllerUnpublishVolume_Handler.func1
/go/pkg/mod/github.com/container-storage-interface/[email protected]/lib/go/csi/csi.pb.go:5200
github.com/rexray/gocsi/middleware/serialvolume.(*interceptor).controllerUnpublishVolume
/go/pkg/mod/github.com/rexray/[email protected]/middleware/serialvolume/serial_volume_locker.go:141
github.com/rexray/gocsi/middleware/serialvolume.(*interceptor).handle
/go/pkg/mod/github.com/rexray/[email protected]/middleware/serialvolume/serial_volume_locker.go:88
github.com/rexray/gocsi/utils.ChainUnaryServer.func2.1.1
/go/pkg/mod/github.com/rexray/[email protected]/utils/utils_middleware.go:99
github.com/rexray/gocsi/middleware/specvalidator.(*interceptor).handleServer.func1
/go/pkg/mod/github.com/rexray/[email protected]/middleware/specvalidator/spec_validator.go:178
github.com/rexray/gocsi/middleware/specvalidator.(*interceptor).handle
/go/pkg/mod/github.com/rexray/[email protected]/middleware/specvalidator/spec_validator.go:218
github.com/rexray/gocsi/middleware/specvalidator.(*interceptor).handleServer
/go/pkg/mod/github.com/rexray/[email protected]/middleware/specvalidator/spec_validator.go:177
github.com/rexray/gocsi/utils.ChainUnaryServer.func2.1.1
/go/pkg/mod/github.com/rexray/[email protected]/utils/utils_middleware.go:99
github.com/rexray/gocsi.(*StoragePlugin).injectContext
/go/pkg/mod/github.com/rexray/[email protected]/middleware.go:231
github.com/rexray/gocsi/utils.ChainUnaryServer.func2.1.1
/go/pkg/mod/github.com/rexray/[email protected]/utils/utils_middleware.go:99
github.com/rexray/gocsi/utils.ChainUnaryServer.func2
/go/pkg/mod/github.com/rexray/[email protected]/utils/utils_middleware.go:106
github.com/container-storage-interface/spec/lib/go/csi._Controller_ControllerUnpublishVolume_Handler
/go/pkg/mod/github.com/container-storage-interface/[email protected]/lib/go/csi/csi.pb.go:5202
google.golang.org/grpc.(*Server).processUnaryRPC
/go/pkg/mod/google.golang.org/[email protected]/server.go:1024
google.golang.org/grpc.(*Server).handleStream
/go/pkg/mod/google.golang.org/[email protected]/server.go:1313
google.golang.org/grpc.(*Server).serveStreams.func1.1
/go/pkg/mod/google.golang.org/[email protected]/server.go:722
2021-08-05T19:06:21.490Z ERROR vanilla/controller.go:931 failed to detach disk: "fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5" from node: "smops-1-md-0-7f4dd9d98f-kmglw.reftmdc.bn.schiff.telekom.de" err failed to detach cns volume:"fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5" from node vm: VirtualMachine:vm-86435 [VirtualCenterHost: m4fxvmvsvvt1.oam-sa.mn.de.tmo, UUID: 42028e7e-0b17-9f67-84e4-db052edbad68, Datacenter: Datacenter [Datacenter: Datacenter:datacenter-17499, VirtualCenterHost: m4fxvmvsvvt1.oam-sa.mn.de.tmo]]. err: ServerFaultCode: Received SOAP response fault from [<cs p:00007f71f8008570, TCP:localhost:443>]: retrieveVStorageObject
The object or item referred to could not be found. {"TraceId": "5480013c-6351-4282-8511-f4a0eb31b96f"}
sigs.k8s.io/vsphere-csi-driver/pkg/csi/service/vanilla.(*controller).ControllerUnpublishVolume.func1
/build/pkg/csi/service/vanilla/controller.go:931
sigs.k8s.io/vsphere-csi-driver/pkg/csi/service/vanilla.(*controller).ControllerUnpublishVolume
/build/pkg/csi/service/vanilla/controller.go:937
github.com/container-storage-interface/spec/lib/go/csi._Controller_ControllerUnpublishVolume_Handler.func1
/go/pkg/mod/github.com/container-storage-interface/[email protected]/lib/go/csi/csi.pb.go:5200
github.com/rexray/gocsi/middleware/serialvolume.(*interceptor).controllerUnpublishVolume
/go/pkg/mod/github.com/rexray/[email protected]/middleware/serialvolume/serial_volume_locker.go:141
github.com/rexray/gocsi/middleware/serialvolume.(*interceptor).handle
/go/pkg/mod/github.com/rexray/[email protected]/middleware/serialvolume/serial_volume_locker.go:88
github.com/rexray/gocsi/utils.ChainUnaryServer.func2.1.1
/go/pkg/mod/github.com/rexray/[email protected]/utils/utils_middleware.go:99
github.com/rexray/gocsi/middleware/specvalidator.(*interceptor).handleServer.func1
/go/pkg/mod/github.com/rexray/[email protected]/middleware/specvalidator/spec_validator.go:178
github.com/rexray/gocsi/middleware/specvalidator.(*interceptor).handle
/go/pkg/mod/github.com/rexray/[email protected]/middleware/specvalidator/spec_validator.go:218
github.com/rexray/gocsi/middleware/specvalidator.(*interceptor).handleServer
/go/pkg/mod/github.com/rexray/[email protected]/middleware/specvalidator/spec_validator.go:177
github.com/rexray/gocsi/utils.ChainUnaryServer.func2.1.1
/go/pkg/mod/github.com/rexray/[email protected]/utils/utils_middleware.go:99
github.com/rexray/gocsi.(*StoragePlugin).injectContext
/go/pkg/mod/github.com/rexray/[email protected]/middleware.go:231
github.com/rexray/gocsi/utils.ChainUnaryServer.func2.1.1
/go/pkg/mod/github.com/rexray/[email protected]/utils/utils_middleware.go:99
github.com/rexray/gocsi/utils.ChainUnaryServer.func2
/go/pkg/mod/github.com/rexray/[email protected]/utils/utils_middleware.go:106
github.com/container-storage-interface/spec/lib/go/csi._Controller_ControllerUnpublishVolume_Handler
/go/pkg/mod/github.com/container-storage-interface/[email protected]/lib/go/csi/csi.pb.go:5202
google.golang.org/grpc.(*Server).processUnaryRPC
/go/pkg/mod/google.golang.org/[email protected]/server.go:1024
google.golang.org/grpc.(*Server).handleStream
/go/pkg/mod/google.golang.org/[email protected]/server.go:1313
google.golang.org/grpc.(*Server).serveStreams.func1.1
/go/pkg/mod/google.golang.org/[email protected]/server.go:722
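For reference, the VolumeId in these logs is the CNS volume ID, which (in the vanilla driver, as far as I know) is stored as the PV's spec.csi.volumeHandle. A sketch of mapping the failing ID back to its PV; the output line matches the govc volume.ls mapping shown further down:

❯ kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.csi.volumeHandle}{"\n"}{end}' | grep fb14cf3d
pvc-e5af5ae4-0dfe-49e5-b5a9-00baa1d68432	fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5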
If I look in vCenter, the CNS volume is shown in the list of CNS volumes, with the correct size and on an accessible datastore.
What you expected to happen: attach/detach/delete operations work properly.
How to reproduce it (as minimally and precisely as possible): Unsure; we only see it happening in one environment, which uses datastore clusters (with Storage DRS disabled, though).
Anything else we need to know?:
Environment:
- csi-vsphere version: 2.2.1
- vsphere-cloud-controller-manager version: 1.20.0
- Kubernetes version: 1.20.9
- vSphere version: 6.7.0.48000
- OS (e.g. from /etc/os-release): Ubuntu
Addendum: if I manually try to remove that volume from vCenter via govc (using an admin account on that vCenter to make sure it's not a permission issue), I get the exact same error, so I would say it's a vCenter bug:
❯ govc volume.ls fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5
fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5 pvc-e5af5ae4-0dfe-49e5-b5a9-00baa1d68432
❯ govc volume.rm fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5
govc: ServerFaultCode: Received SOAP response fault from [<cs p:00007f71f8008570, TCP:localhost:443>]: retrieveVStorageObject
The object or item referred to could not be found.
And I don't think it's the known cloud-native storage issue from the vSAN 6.7 U3 release notes (https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vmware-vsan-67u3-release-notes.html?hWord=N4IghgNiBcIMIDkDKACA9gIwFYFMDGALigHZpEBmaArsQCYgC+QA#cloud-native-storage-issues-known), as the behaviour has now persisted for roughly a day.
Triggering a resync of datastores via govc disk.ls -R for all datastores did not help either.
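Since the fault is retrieveVStorageObject, it looks like the CNS metadata still exists while the backing First Class Disk (FCD) is gone. A hedged way to cross-check is to query the FCD directly on the datastore (the datastore name below is a placeholder for whatever backs the PV):

# list all FCDs on the datastore and look for the volume ID
❯ govc disk.ls -ds <datastore-name>
# or query the single FCD by ID; a "not found" here would confirm the backing object is missing
❯ govc disk.ls -ds <datastore-name> fb14cf3d-d549-4bbc-9c51-d20d5bd9e0a5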
@MaxRink We have the same problem. Were you able to solve it in the meantime?
Sadly no; it looks like it's a vCenter bug.
@MaxRink I am running into a similar issue, and I am curious: how do you get the vsphere-csi version information? All I see in the CSI driver pods is vSphere version information.
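One way that should work is to read the controller's container image tags. This is a sketch: the namespace and Deployment name below assume the standard vanilla-driver manifests (vmware-system-csi; older installs deployed into kube-system), so adjust for your install:

# the tag on the driver image (e.g. ...:v2.2.1) is the CSI driver version
❯ kubectl -n vmware-system-csi get deployment vsphere-csi-controller \
    -o jsonpath='{range .spec.template.spec.containers[*]}{.image}{"\n"}{end}'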
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Maybe related to https://github.com/kubernetes-sigs/vsphere-csi-driver/issues/1416 ?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/assign @lipingxue
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@Cellebyte: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
/reopen
This is still happening with 2.7.0.
@MaxRink: Reopened this issue.
In response to this:
/reopen This is still happening with 2.7.0
We also have this issue, running the CSI driver in version 2.7.0 on Kubernetes 1.24.6. The error occurs while detaching the disk after a cronjob has completed successfully in Kubernetes. The disk itself ends up in the detached state but is not removed.
In our case, after a few days we got the message "Virtual machine Consolidation needed status" in vCenter for the affected servers where the cronjob ran. After a successful snapshot consolidation of the affected server, the message is gone for a few days, until the "Detach a virtual disk" task runs into the error "The object or item referred to could not be found."
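A hedged way to watch for that consolidation state from the CLI, using govc (the inventory path below is a placeholder for the affected VM):

# prints true while vCenter is flagging the VM for disk consolidation
❯ govc object.collect -s /<datacenter>/vm/<vm-name> runtime.consolidationNeeded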
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity,
lifecycle/stale
is applied- After 30d of inactivity since
lifecycle/stale
was applied,lifecycle/rotten
is applied- After 30d of inactivity since
lifecycle/rotten
was applied, the issue is closedYou can:
- Reopen this issue with
/reopen
- Mark this issue as fresh with
/remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned