azuredisk-csi-driver
[V2] "Should test an increase in replicas when scaling up" test case fails with ConflictingUserInput
What happened:
The test case creates a disk with two replicas and fails with the following error:
I0613 19:02:46.392680 1 azure_armclient.go:153] Send.sendRequest original response: {
"error": {
"code": "ConflictingUserInput",
"message": "Cannot attach the disk pvc-2d229e3e-4fbf-410e-bf66-e1d63828cf95 to VM /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-aovny6oi/providers/Microsoft.Compute/virtualMachines/k8s-agentpool1-14798347-1 because it is already attached to VM /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-aovny6oi/providers/Microsoft.Compute/virtualMachines/k8s-agentpool1-14798347-2. A disk can be attached to only one VM at a time.",
"target": "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-aovny6oi/providers/Microsoft.Compute/disks/pvc-2d229e3e-4fbf-410e-bf66-e1d63828cf95"
}
}
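The error text ("A disk can be attached to only one VM at a time") reads like the response Azure gives for a disk without shared-disk support (maxShares of 1), so the first thing to rule out is the disk's own configuration. Below is a minimal sketch, not part of the driver or the test, of reading maxShares back from the ARM API; it assumes the azure-sdk-for-go armcompute and azidentity packages, with the subscription, resource group, and disk name copied from the error above:

```go
// Hypothetical check, not driver code: read the disk's maxShares back from ARM.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute"
)

func main() {
	// Values copied from the error message above.
	subscriptionID := "0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"
	resourceGroup := "kubetest-aovny6oi"
	diskName := "pvc-2d229e3e-4fbf-410e-bf66-e1d63828cf95"

	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatalf("credential: %v", err)
	}

	client, err := armcompute.NewDisksClient(subscriptionID, cred, nil)
	if err != nil {
		log.Fatalf("disks client: %v", err)
	}

	resp, err := client.Get(context.Background(), resourceGroup, diskName, nil)
	if err != nil {
		log.Fatalf("get disk: %v", err)
	}

	// maxShares > 1 means the disk can be attached to multiple VMs at once.
	if resp.Properties != nil && resp.Properties.MaxShares != nil {
		fmt.Printf("maxShares for %s: %d\n", diskName, *resp.Properties.MaxShares)
	} else {
		fmt.Printf("maxShares not set for %s (defaults to 1)\n", diskName)
	}
}
```

As noted below, the disk in this run was created with maxShares of 3, so the configuration by itself does not explain the response.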
The error message seems to imply that this disk couldn't be attached to multiple VMs. This seems odd because other replica test cases passed and I could confirm that the disk was created with maxShares of 3. What I did notice, however, was that the replica attachments occurred within ~200ms of each other:
I0613 19:02:45.938564 1 azure_controller_standard.go:97] azureDisk - update(kubetest-aovny6oi): vm(k8s-agentpool1-14798347-1) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-aovny6oi/providers/Microsoft.Compute/disks/pvc-2d229e3e-4fbf-410e-bf66-e1d63828cf95:AttachDiskOptions{diskName: "pvc-2d229e3e-4fbf-410e-bf66-e1d63828cf95", lun: 0}])
…
I0613 19:02:46.121658 1 azure_controller_standard.go:97] azureDisk - update(kubetest-aovny6oi): vm(k8s-agentpool1-14798347-2) - attach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-aovny6oi/providers/Microsoft.Compute/disks/pvc-2d229e3e-4fbf-410e-bf66-e1d63828cf95:AttachDiskOptions{diskName: "pvc-2d229e3e-4fbf-410e-bf66-e1d63828cf95", lun: 0}])
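Given that gap, one plausible reading is that the second attach request was issued while Azure was still processing the first, and the service transiently rejected it as if the disk were not shareable. Purely as an illustration of a possible mitigation, and not the driver's actual code, the sketch below serializes attach calls per disk URI so that near-simultaneous replica attachments are sent one at a time; diskAttachSerializer and attachToVM are hypothetical names:

```go
// Hypothetical sketch: serialize attach calls per disk URI so that two replica
// attachments for the same shared disk are never sent to Azure at the same instant.
package main

import (
	"context"
	"fmt"
	"sync"
)

type diskAttachSerializer struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex // one lock per disk URI
}

func newDiskAttachSerializer() *diskAttachSerializer {
	return &diskAttachSerializer{locks: map[string]*sync.Mutex{}}
}

// lockFor returns the mutex for a disk URI, creating it on first use.
func (s *diskAttachSerializer) lockFor(diskURI string) *sync.Mutex {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.locks[diskURI] == nil {
		s.locks[diskURI] = &sync.Mutex{}
	}
	return s.locks[diskURI]
}

// AttachSerialized holds the per-disk lock while attachToVM runs, so a second
// replica attach waits until the first one has been accepted by Azure.
func (s *diskAttachSerializer) AttachSerialized(ctx context.Context, diskURI, vmName string,
	attachToVM func(ctx context.Context, diskURI, vmName string) error) error {
	l := s.lockFor(diskURI)
	l.Lock()
	defer l.Unlock()
	return attachToVM(ctx, diskURI, vmName)
}

func main() {
	s := newDiskAttachSerializer()
	// Stand-in for the real attach path; it only logs the call here.
	fakeAttach := func(ctx context.Context, diskURI, vmName string) error {
		fmt.Printf("attaching %s to %s\n", diskURI, vmName)
		return nil
	}

	diskURI := "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-aovny6oi/providers/Microsoft.Compute/disks/pvc-2d229e3e-4fbf-410e-bf66-e1d63828cf95"
	var wg sync.WaitGroup
	for _, vm := range []string{"k8s-agentpool1-14798347-1", "k8s-agentpool1-14798347-2"} {
		wg.Add(1)
		go func(vm string) {
			defer wg.Done()
			_ = s.AttachSerialized(context.Background(), diskURI, vm, fakeAttach)
		}(vm)
	}
	wg.Wait()
}
```

A retry with backoff on ConflictingUserInput would be an alternative that keeps the two attachments concurrent; either approach would need to be validated against the actual attach path in azure_controller_standard.go.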
What you expected to happen:
Test passes.
How to reproduce it:
Anything else we need to know?:
Environment:
- CSI Driver version:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a):
- Install tools:
- Others:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.