
Mount fails silently when subDir contains pvc metadata parameters

MiddleMan5 opened this issue 3 years ago • 9 comments

CIFS mount fails silently when using a templated subDir parameter.

What happened: Upgraded from v1.5.0 to v1.8.0; the configuration stayed the same. Given:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-data-auto
provisioner: smb.csi.k8s.io
parameters:
  source: "//test.domain.com/test/pvc"
  subDir: "${pvc.metadata.namespace}-${pvc.metadata.name}"
  csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
  csi.storage.k8s.io/node-stage-secret-namespace: "storage"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=077
  - vers=2.0

What you expected to happen:

  1. Given PVC name my-pvc and PVC namespace test, I expect to see a PV matching this storage class, mounted to the SMB share //test.domain.com/test/pvc/test-my-pvc

  2. In the event of a mount failure, an error on the PVC or pod that indicates the failure (currently the PVC just says "Pending")

How to reproduce it: See above example.
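
For completeness, a minimal PVC that exercises this storage class; the name and namespace match the expectation above and are otherwise arbitrary:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc        # names taken from the expectation above
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: test-data-auto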

Anything else we need to know?:

Relevant logs from csi-smb-controller:

E0719 22:50:02.551177       1 utils.go:81] GRPC error: rpc error: code = Internal desc = failed to mount smb server: rpc error: code = Internal desc = volume(test.domain.com/test/pvc#storage-ubuntu-deployment-pvc#pvc-9249608c-3d32-42c3-a9f3-ba71b59fc438) mount "//test.domain.com/test/pvc" on "/tmp/pvc-9249608c-3d32-42c3-a9f3-ba71b59fc438" failed with mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=077,vers=2.0,<masked> //test.domain.com/test/pvc /tmp/pvc-9249608c-3d32-42c3-a9f3-ba71b59fc438
Output: username specified with no parameter

... Repeats many times

Environment:

  • CSI Driver version: v1.8.0
  • Kubernetes version (use kubectl version): v1.23.0
  • OS (e.g. from /etc/os-release): linux
  • Kernel (e.g. uname -a): 5.4.0-113-generic

MiddleMan5 avatar Jul 19 '22 23:07 MiddleMan5

The error is "username specified with no parameter". Have you set username in smbcreds? Like the following:

kubectl create secret generic smbcreds --from-literal username=USERNAME --from-literal password="PASSWORD" --from-literal mountOptions="dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks"
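
Note that the StorageClass in the report points csi.storage.k8s.io/node-stage-secret-namespace at storage, so the secret would need to be created in that namespace, e.g.:

kubectl create secret generic smbcreds -n storage --from-literal username=USERNAME --from-literal password="PASSWORD"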

andyzhangx avatar Jul 20 '22 06:07 andyzhangx

Yes, and other mounts without the subDir work properly. Is mountOptions in the secret strictly necessary? That seems to be a new requirement I'm not familiar with

MiddleMan5 avatar Jul 20 '22 12:07 MiddleMan5

> Yes, and other mounts without the subDir work properly. Is mountOptions in the secret strictly necessary? That seems to be a new requirement I'm not familiar with

@MiddleMan5 mountOptions in the secret is not necessary.

andyzhangx avatar Jul 20 '22 12:07 andyzhangx

Is there any more information I can collect? I'm a little hazy on the theory of operation, so I wasn't sure what information was relevant.

Is there a way to disable the sanitization of the logs, or increase the verbosity?

MiddleMan5 avatar Jul 20 '22 13:07 MiddleMan5

What are the error logs in the CSI driver controller now?

andyzhangx avatar Jul 20 '22 13:07 andyzhangx

I just ran another test and found that even a non-templated subDir fails.

Relevant Manifests

---
apiVersion: v1
kind: Secret
metadata:
  name: smbcreds
  namespace: storage
type: Opaque
data:
  username: cipkYWN0ZWQkJCQ=
  password: cipkYWN0ZWQkJCQ=
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-data-fixed
provisioner: smb.csi.k8s.io
parameters:
  source: "//test.domain.com/test/pvc"
  subDir: "subdir"
  csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
  csi.storage.k8s.io/node-stage-secret-namespace: "storage"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=077
  - vers=2.0
  - noperm
  - cache=strict
  - noserverino  # required to prevent data corruption
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-deployment-pvc
  namespace: storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: test-data-fixed

Logs

Output captured with kubectl logs -n storage --selector=app=csi-smb-controller -c smb -f

I0720 14:29:34.216492       1 utils.go:76] GRPC call: /csi.v1.Controller/CreateVolume
I0720 14:29:34.216533       1 utils.go:77] GRPC request: {"capacity_range":{"required_bytes":10737418240},"name":"pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f","parameters":{"csi.storage.k8s.io/pv/name":"pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f","csi.storage.k8s.io/pvc/name":"ubuntu-deployment-pvc","csi.storage.k8s.io/pvc/namespace":"storage","source":"//test.domain.com/test/pvc","subDir":"subdir"},"volume_capabilities":[{"AccessType":{"Mount":{"mount_flags":["dir_mode=0777","file_mode=077","vers=2.0","noperm","cache=strict","noserverino"]}},"access_mode":{"mode":5}}]}
I0720 14:29:34.217088       1 controllerserver.go:84] create subdirectory(subdir) if not exists
I0720 14:29:34.217105       1 controllerserver.go:255] internally mounting //test.domain.com/test/pvc at /tmp/pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f
I0720 14:29:34.217371       1 nodeserver.go:201] NodeStageVolume: targetPath(/tmp/pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f) volumeID(test.domain.com/test/pvc#subdir#pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f) context(map[source://test.domain.com/test/pvc]) mountflags([dir_mode=0777 file_mode=077 vers=2.0 noperm cache=strict noserverino]) mountOptions([dir_mode=0777 file_mode=077 vers=2.0 noperm cache=strict noserverino])
I0720 14:29:34.218371       1 mount_linux.go:183] Mounting cmd (mount) with arguments (-t cifs -o dir_mode=0777,file_mode=077,vers=2.0,noperm,cache=strict,noserverino,<masked> //test.domain.com/test/pvc /tmp/pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f)
E0720 14:29:34.228726       1 mount_linux.go:195] Mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=077,vers=2.0,noperm,cache=strict,noserverino,<masked> //test.domain.com/test/pvc /tmp/pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f
Output: username specified with no parameter

E0720 14:29:34.228837       1 utils.go:81] GRPC error: rpc error: code = Internal desc = failed to mount smb server: rpc error: code = Internal desc = volume(test.domain.com/test/pvc#subdir#pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f) mount "//test.domain.com/test/pvc" on "/tmp/pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f" failed with mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=077,vers=2.0,noperm,cache=strict,noserverino,<masked> //test.domain.com/test/pvc /tmp/pvc-1f73c0d7-ceeb-4781-bb9d-1df57a27ea2f
Output: username specified with no parameter
... The same CreateVolume call and mount failure repeat verbatim at 14:29:35, 14:29:37, 14:29:41, and 14:29:49 as the provisioner backs off and retries.

Notes

  • The secret value cipkYWN0ZWQkJCQ= decodes to r*dacted$$$ (potentially unescaped password handling?); see the decode check below
  • This failure happens whether subDir is templated or not
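
For reference, the decode in the first note can be verified with base64:

echo cipkYWN0ZWQkJCQ= | base64 -d   # prints: r*dacted$$$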

MiddleMan5 avatar Jul 20 '22 14:07 MiddleMan5

You may want to change the password and retry.

andyzhangx avatar Jul 20 '22 14:07 andyzhangx

Same result; both passwords also work without subDir, so it's not a password-handling issue.

MiddleMan5 avatar Jul 20 '22 15:07 MiddleMan5

Okay, the issue seems to be the lack of the csi.storage.k8s.io/provisioner-secret-name and csi.storage.k8s.io/provisioner-secret-namespace parameters. After adding these, the mounts work again.
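
For anyone else hitting this, the working StorageClass looks roughly like the earlier test-data-fixed example with the two provisioner secret parameters added, reusing the same smbcreds secret for both roles:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-data-fixed
provisioner: smb.csi.k8s.io
parameters:
  source: "//test.domain.com/test/pvc"
  subDir: "subdir"
  csi.storage.k8s.io/provisioner-secret-name: "smbcreds"       # added: lets the controller mount
  csi.storage.k8s.io/provisioner-secret-namespace: "storage"   # the share to create the subDir
  csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
  csi.storage.k8s.io/node-stage-secret-namespace: "storage"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=077
  - vers=2.0
  - noperm
  - cache=strict
  - noserverino  # required to prevent data corruption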

This behavior isn't entirely clear, and I have some questions:

  • Why does the subdirectory need to be created by the csi-controller and not by the CSI node itself?
  • Why not default to the node-stage-secret-name creds if the provisioner secrets are not provided?
  • Why does adding the node-stage-secret-name parameters create a subDir even without the subDir parameter? What is the use case for this behavior?

This case should be handled explicitly and an error should be returned if credentials are not present. Additionally, documentation should be added to explain the role of the csi-controller in creating the subDir, and that the provisioner secrets are mandatory when a subDir is being created.

I had to recompile the driver with extra logging to track this issue down. Additionally, I found that if the username or password parameters are not provided, cifs-utils raises the following error (I didn't see a reference to this error anywhere, so I'm including it here):

Output: Failed to execute systemd-ask-password: No such file or directory
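
In hindsight, a less invasive option than recompiling, assuming the deployed controller passes klog's --v flag as the upstream manifests do, would have been to bump verbosity on the smb container:

kubectl -n storage edit deployment csi-smb-controller
# then raise the smb container's --v argument, e.g. from --v=5 to --v=10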

MiddleMan5 avatar Jul 20 '22 18:07 MiddleMan5

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 18 '22 19:10 k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 07 '23 06:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Mar 09 '23 06:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Apr 08 '23 07:04 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 08 '23 07:04 k8s-ci-robot