azuredisk-csi-driver
[V2] detachment errors never returned to AzVolume.Status.Error
What happened:
The verification function verifyObjectFailedOrDeleted, which is used in conditionwaiter, should use a pointer as its case type. Because of this bug, the function never returns an error when the object fails to be deleted, so it behaves exactly the same as verifyObjectDeleted.
https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/ec9a74cdd929f6c24e1a5b9f82bb5f34578683f3/pkg/controller/common.go#L1120
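To make the bug concrete, here is a minimal, self-contained sketch of the pattern (the types and function signature are hypothetical stand-ins, not the driver's real API). The condition waiter always hands the callback a pointer to the object, so a type-switch case written against the value type never matches and the error branch is silently skipped:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the driver's CRD types (illustration only).
type AzVolumeAttachmentStatus struct{ Error error }
type AzVolumeAttachment struct{ Status AzVolumeAttachmentStatus }

// Buggy shape: the case uses the value type, but the waiter passes a pointer,
// so the error branch never runs and the function degenerates into a plain
// "is it deleted yet?" check.
func verifyFailedOrDeletedBuggy(obj interface{}, objectDeleted bool) (bool, error) {
	if obj == nil || objectDeleted {
		return true, nil
	}
	switch target := obj.(type) {
	case AzVolumeAttachment: // never matches a *AzVolumeAttachment
		if target.Status.Error != nil {
			return false, target.Status.Error
		}
	}
	return false, nil
}

// Fixed shape: the case uses the pointer type, so a failed detachment is
// surfaced to the waiter.
func verifyFailedOrDeletedFixed(obj interface{}, objectDeleted bool) (bool, error) {
	if obj == nil || objectDeleted {
		return true, nil
	}
	switch target := obj.(type) {
	case *AzVolumeAttachment:
		if target.Status.Error != nil {
			return false, target.Status.Error
		}
	}
	return false, nil
}

func main() {
	att := &AzVolumeAttachment{Status: AzVolumeAttachmentStatus{Error: errors.New("detach failed")}}
	fmt.Println(verifyFailedOrDeletedBuggy(att, false)) // false <nil> -- error swallowed
	fmt.Println(verifyFailedOrDeletedFixed(att, false)) // false detach failed
}
```

With the pointer case type, the failed-detach error recorded on the attachment is returned to the waiter instead of being swallowed.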
Once we fix the bug, a performance regression could occur. When garbage collection of replica AzVolumeAttachments is triggered, cleanUpAzVolumeAttachmentByVolume is added to the operation queue; if it returns an error, it gets requeued. That requeueing is unnecessary, since the replica controller can take care of failed detachments on its own, so we could instead replace the condition with verifyObjectDeleted.
https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/858e85d80476491b841032b796e8af6bfc01dd1a/pkg/controller/shared_state.go#L1206
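For illustration, here is a hedged sketch of the requeue-on-error behavior described above, written against the generic client-go workqueue rather than the driver's actual operation queue; the point is only that returning the detach error from the queued cleanup makes the queue retry work that the replica controller already recovers:

```go
package controller

import "k8s.io/client-go/util/workqueue"

// Hedged sketch, not the driver's real operation queue: a standard
// requeue-on-error worker loop. If cleanUp returns the detach error,
// the item is retried by this queue even though the replica controller
// would recover the failed detachment anyway.
func processNextItem(queue workqueue.RateLimitingInterface, cleanUp func(item interface{}) error) bool {
	item, shutdown := queue.Get()
	if shutdown {
		return false
	}
	defer queue.Done(item)

	if err := cleanUp(item); err != nil {
		// Returning the error here causes the redundant retry the issue
		// wants to avoid once the bug in verifyObjectFailedOrDeleted is fixed.
		queue.AddRateLimited(item)
		return true
	}
	queue.Forget(item)
	return true
}
```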
If we replace the condition with verifyObjectDeleted, however, cleanUpAzVolumeAttachmentByVolume never returns errors to the AzVolume, and controllerserver.DeleteVolume won't receive a gRPC result from the AzVolume status either (which a user would otherwise see via kubectl describe pv).
https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/944b31cccffa459b5a6707783a14f26ddc40850a/pkg/provisioner/crdprovisioner.go#L447-L465
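For context, here is a rough sketch of the propagation path the issue wants to keep working (field and function names are illustrative assumptions, not the real crdprovisioner code): an error recorded on the AzVolume status is translated into a gRPC status error, which DeleteVolume returns and which then becomes visible on the PV:

```go
package provisioner

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Hypothetical, simplified stand-in for the AzVolume error status
// (the real CRD field is richer than this).
type AzVolumeStatusError struct {
	Code    codes.Code
	Message string
}

// If the detach error reaches AzVolume.Status.Error, the CRD provisioner can
// turn it into a gRPC status error that controllerserver.DeleteVolume returns
// to the container orchestrator, which in turn surfaces it in PV events.
func grpcErrorFromAzVolumeStatus(statusErr *AzVolumeStatusError) error {
	if statusErr == nil {
		return nil // nothing recorded on the AzVolume; the volume operation succeeded
	}
	return status.Error(statusErr.Code, statusErr.Message)
}
```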
Thanks to @sunpa93 for the input on this issue.
What you expected to happen: Detachment errors should be surfaced to AzVolume.Status.Error without introducing a performance regression.
How to reproduce it:
Anything else we need to know?:
Environment:
- CSI Driver version:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a):
- Install tools:
- Others:
/kind bug
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.