azuredisk-csi-driver
[V2] test: Pod failover with replica attachment test case fails on Windows
What happened:
The "Should test pod failover and check for correct number of replicas" test case fails on Windows because the pod doesn't get the expected file contents after failover:
May 21 19:31:20.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=azuredisk-4455 exec azuredisk-volume-tester-znz26-5486d6f796-j2bm8 -- cmd /c type C:\mnt\test-1\data.txt'
May 21 19:31:22.123: INFO: stderr: ""
May 21 19:31:22.123: INFO: stdout: "hello world\r\n"
May 21 19:31:22.123: INFO: The stdout did not contain output "hello world\r\nhello world\r\n" in pod "azuredisk-volume-tester-znz26-5486d6f796-j2bm8", found: "hello world\r\n".
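For context, here is a minimal sketch (not the driver's actual e2e code) of the kind of check the test performs: the tester appends "hello world" to the mounted disk before and after failover, then reads the file back and expects the line to appear twice with Windows CRLF line endings. The namespace, pod name, and file path below are placeholders taken from the log above.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// execInPod runs a command in the given pod via kubectl and returns stdout.
// This is an illustrative helper, not part of the driver's test suite.
func execInPod(namespace, pod string, args ...string) (string, error) {
	kubectlArgs := append([]string{"--namespace", namespace, "exec", pod, "--"}, args...)
	out, err := exec.Command("kubectl", kubectlArgs...).Output()
	return string(out), err
}

func main() {
	const (
		ns   = "azuredisk-4455"                                 // placeholder namespace from the log
		pod  = "azuredisk-volume-tester-znz26-5486d6f796-j2bm8" // placeholder pod name from the log
		file = `C:\mnt\test-1\data.txt`
	)

	out, err := execInPod(ns, pod, "cmd", "/c", "type", file)
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}

	// One write before failover plus one write after should yield the line twice.
	want := "hello world\r\nhello world\r\n"
	if strings.Contains(out, want) {
		fmt.Println("found expected content after failover")
	} else {
		fmt.Printf("unexpected content: %q\n", out)
	}
}
```

In the failing run above, only a single "hello world\r\n" line is present, i.e. the write made after failover is missing from the file read back by the replacement pod.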
What you expected to happen:
Test case passes.
How to reproduce it:
Anything else we need to know?:
Environment:
- CSI Driver version:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release): Windows
- Kernel (e.g. uname -a):
- Install tools:
- Others:
Test case is still failing periodically.
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten