csi-driver-smb
Mount fails silently when subDir contains pvc metadata parameters
CIFS mount fails silently when using a templated subDir parameter.
What happened: Upgraded from v1.5.0 to v1.8.0, configurations stayed the same. Given:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-data-auto
provisioner: smb.csi.k8s.io
parameters:
  source: "//test.domain.com/test/pvc"
  subDir: "${pvc.metadata.namespace}-${pvc.metadata.name}"
  csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
  csi.storage.k8s.io/node-stage-secret-namespace: "storage"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=077
  - vers=2.0
What you expected to happen:
- Given PVC name `my-pvc` and PVC namespace `test`, I expect to see a PV matching this storage class that is mounted to the SMB share `//test.domain.com/test/pvc/test-my-pvc`.
- In the event of a mount failure, an error in the PVC or pod that indicates the mount failure (currently the PVC just says "Pending").
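As a stopgap for the second point, provisioning failures are usually recorded as events on the claim even while `kubectl get pvc` shows only "Pending"; assuming the hypothetical names above:

```console
kubectl -n test describe pvc my-pvc
# the Events section at the bottom shows provisioning errors, if any were emitted
```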
How to reproduce it: See above example.
Anything else we need to know?:
Relevant logs from csi-smb-controller:
E0719 22:50:02.551177 1 utils.go:81] GRPC error: rpc error: code = Internal desc = failed to mount smb server: rpc error: code = Internal desc = volume(test.domain.com/test/pvc#storage-ubuntu-deployment-pvc#pvc-9249608c-3d32-42c3-a9f3-ba71b59fc438) mount "//test.domain.com/test/pvc" on "/tmp/pvc-9249608c-3d32-42c3-a9f3-ba71b59fc438" failed with mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=077,vers=2.0,<masked> //test.domain.com/test/pvc /tmp/pvc-9249608c-3d32-42c3-a9f3-ba71b59fc438
Output: username specified with no parameter
... Repeats many times
Environment:
- CSI Driver version: v1.8.0
- Kubernetes version (use `kubectl version`): v1.23.0
- OS (e.g. from /etc/os-release): linux
- Kernel (e.g. `uname -a`): 5.4.0-113-generic
The error is `username specified with no parameter`. Have you set `username` in the `smbcreds` secret? Like the following:
kubectl create secret generic smbcreds --from-literal username=USERNAME --from-literal password="PASSWORD" --from-literal mountOptions="dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks"
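Note that the StorageClass above points `node-stage-secret-namespace` at `storage`, so the secret would need to be created in that namespace; a sketch with the namespace flag added:

```console
kubectl create secret generic smbcreds -n storage \
  --from-literal username=USERNAME \
  --from-literal password="PASSWORD"
```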
Yes, and other mounts without the `subDir` work properly. Is `mountOptions` in the secret strictly necessary? That seems to be a new requirement I'm not familiar with.
@MiddleMan5 mountOptions in the secret is not necessary.
Is there any more information I can collect? I'm a little hazy on the theory of operation, so I wasn't sure what information was relevant.
Is there a way to disable the sanitization of the logs, or increase the verbosity?
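On verbosity: assuming the deployment follows the standard manifests (where the `smb` container in the `csi-smb-controller` Deployment in `kube-system` accepts klog flags), raising the verbosity argument is one option; a sketch:

```console
kubectl -n kube-system edit deployment csi-smb-controller
# in the smb container's args, raise the klog verbosity, e.g.:
#   - "--v=5"
```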
What are the error logs in the CSI driver controller now?
I just ran another test and found that even a non-templated `subDir` fails.
Relevant Manifests
---
apiVersion: v1
kind: Secret
metadata:
  name: smbcreds
  namespace: storage
type: Opaque
data:
  username: cipkYWN0ZWQkJCQ=
  password: cipkYWN0ZWQkJCQ=
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-data-fixed
provisioner: smb.csi.k8s.io
parameters:
  source: "//test.domain.com/test/pvc"
  subDir: "subdir"
  csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
  csi.storage.k8s.io/node-stage-secret-namespace: "storage"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=077
  - vers=2.0
  - noperm
  - cache=strict
  - noserverino # required to prevent data corruption
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-deployment-pvc
  namespace: storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: test-data-fixed
Logs
Output captured with kubectl logs -n storage --selector=app=csi-smb-controller -c smb -f
I0720 14:29:34.216492 1 utils.go:76] GRPC call: /csi.v1.Controller/CreateVolume
I0720 14:29:34.216533 1 utils.go:77] GRPC request: {"capacity_range":{"required_bytes":10737418240},"name":"pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f","parameters":{"csi.storage.k8s.io/pv/name":"pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f","csi.storage.k8s.io/pvc/name":"ubuntu-deployment-pvc","csi.storage.k8s.io/pvc/namespace":"storage","source":"//test.domain.com/test/pvc","subDir":"subdir"},"volume_capabilities":[{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","file_mode=077","vers=2.0","noperm","cache=strict","noserverino"]}},"access_mode":{"mode":5}}]}
I0720 14:29:34.217088 1 controllerserver.go:84] create subdirectory(subdir) if not exists
I0720 14:29:34.217105 1 controllerserver.go:255] internally mounting //test.domain.com/test/pvc at /tmp/pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f
I0720 14:29:34.217371 1 nodeserver.go:201] NodeStageVolume: targetPath(/tmp/pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f) volumeID(test.domain.com/test/pvc#subdir#pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f) context(map[source://test.domain.com/test/pvc]) mountflags([dir_mode=0777 file_mode=077 vers=2.0 noperm cache=strict noserverino]) mountOptions([dir_mode=0777 file_mode=077 vers=2.0 noperm cache=strict noserverino])
I0720 14:29:34.218371 1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=077,vers=2.0,noperm,cache=strict,noserverino,<masked> //test.domain.com/test/pvc /tmp/pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f)
E0720 14:29:34.228726 1 mount_linux.go:195] Mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=077,vers=2.0,noperm,cache=strict,noserverino,<masked> //test.domain.com/test/pvc /tmp/pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f
Output: username specified with no parameter
E0720 14:29:34.228837 1 utils.go:81] GRPC error: rpc error: code = Internal desc = failed to mount smb server: rpc error: code = Internal desc = volume(test.domain.com/test/pvc#subdir#pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f) mount "//test.domain.com/test/pvc" on "/tmp/pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f" failed with mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=077,vers=2.0,noperm,cache=strict,noserverino,<masked> //test.domain.com/test/pvc /tmp/pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f
Output: username specified with no parameter
(The CreateVolume call is retried with backoff; the subsequent attempts at 14:29:35, 14:29:37, 14:29:41, and 14:29:49 produced output identical to the above except for timestamps, each ending in the same "username specified with no parameter" error.)
Notes
- The secret value `cipkYWN0ZWQkJCQ=` decodes to `r*dacted$$$` (potentially unescaped password handling?)
- This failure happens whether `subDir` is templated or not
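For reference, the redacted value can be checked directly; the decoded password does contain characters (`*`, `$`) that would be risky if the credentials string were built without escaping:

```console
$ echo 'cipkYWN0ZWQkJCQ=' | base64 -d
r*dacted$$$
```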
You may change the password and retry.
Same result; both also work without `subDir`, so it's not a bad-password-handling issue.
Okay, the issue seems to be the lack of the `csi.storage.k8s.io/provisioner-secret-name` and `csi.storage.k8s.io/provisioner-secret-namespace` parameters. After adding these, the mounts work again.
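For anyone landing here, a sketch of the working StorageClass (the two `provisioner-secret` parameters are the addition; everything else matches the manifests above, except `file_mode=077` is corrected to `0777`, which looks like a typo in the originals):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-data-fixed
provisioner: smb.csi.k8s.io
parameters:
  source: "//test.domain.com/test/pvc"
  subDir: "subdir"
  # the fix: credentials the controller uses when it mounts the share to create subDir
  csi.storage.k8s.io/provisioner-secret-name: "smbcreds"
  csi.storage.k8s.io/provisioner-secret-namespace: "storage"
  csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
  csi.storage.k8s.io/node-stage-secret-namespace: "storage"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=0777  # 077 in the original manifests appears to be a typo
  - vers=2.0
```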
This behavior isn't entirely clear, and I have some questions:
- Why does the subdirectory need to be created by the csi-controller and not the csi node itself? (The controller-side flow is sketched after the next paragraph.)
- Why not default to the `node-stage-secret-name` creds if a provisioner secret is not provided?
- Why does adding the `node-stage-secret-name` parameters create a subDir without the `subDir` parameter; what is the use case for this behavior?
This case should be handled explicitly and an error should be returned if credentials are not present. Additionally, documentation should be added to explain the role of the csi-controller in creating the subDir, and that the provisioner secrets are mandatory when a subDir is being created.
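For context, here is the controller-side flow implied by the logs above: a rough sketch of what CreateVolume appears to do when `subDir` is set, inferred from the quoted controllerserver.go log lines, not the literal implementation:

```console
# 1. "internally mounting //server/share at /tmp/<pv-name>" -- needs the provisioner secret
mount -t cifs -o <mountOptions>,username=...,password=... \
    //test.domain.com/test/pvc /tmp/<pv-name>
# 2. "create subdirectory(<subDir>) if not exists"
mkdir -p /tmp/<pv-name>/<subDir>
# 3. presumably unmounted once the subdirectory exists
umount /tmp/<pv-name>
```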
I had to recompile the driver to add logs in order to track this issue down. Additionally, I found that if the username or password parameters are not provided, the cifs-utils mount helper raises the following error (I didn't see a reference to this error anywhere, so I'm including it here):
Output: Failed to execute systemd-ask-password: No such file or directory
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to the `/close not-planned` comment quoted above.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.