gcp-compute-persistent-disk-csi-driver
[wip] test: update kubetest2 version
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
WIP because:
- using this to exercise CI
This brings kubetest2 back up to date and updates our usage to follow the breaking change that caused us to pin to an older version.
Note that this will prevent test binaries from being uploaded to GCS as part of CI results, but I suspect that wasn't intended to begin with.
Which issue(s) this PR fixes:
Special notes for your reviewer:
This picks up the following kubetest2 changes: https://github.com/kubernetes-sigs/kubetest2/compare/0e09086b...26f2492dc
The breaking change mentioned: https://github.com/kubernetes-sigs/kubetest2/pull/183
Which we pinned to avoid in: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/pull/1024
I'll open a PR in the other affected repo if I see CI pass here
Does this PR introduce a user-facing change?:
NONE
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: spiffxp
Once this PR has been reviewed and has the lgtm label, please assign saad-ali for approval by writing /assign @saad-ali in a comment. For more information see: The Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
Looks like something is more convoluted than I thought:
copying /tmp/gcp-pd-driver-tmp1442172395/kubernetes/_output/dockerized/bin/linux/amd64/kubectl to /go/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/_rundir/2385803a-2b05-11ed-92a0-caaedb0733ea/kubectl
vs.
failed to run ginkgo tester: failed to validate pre-built binary kubectl (checked at "/go/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/deploy/kubernetes/overlays/stable-master/_rundir/2385803a-2b05-11ed-92a0-caaedb0733ea/kubectl"): stat _rundir/2385803a-2b05-11ed-92a0-caaedb0733ea/kubectl: no such file or directory
Not sure where deploy/kubernetes/overlays/stable-master is coming from.
EDIT: turns out it comes from https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/162f146e698588ce154ed49f3117651541224165/test/k8s-integration/driver.go#L30
Since I'm not sure where the appropriate place to undo that chdir would be, I'll just point kubetest2's --rundir at an absolute dir.
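To make the failure mode above concrete, here is a minimal, self-contained Go sketch (illustrative only, not the driver's actual code; the run ID and paths are copied from the log above). A relative rundir is resolved against whatever the current working directory happens to be, so a chdir into the overlay directory silently changes where the pre-built kubectl is looked up, while an absolute path is unaffected:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Relative rundir, as kubetest2 was being given before this change.
	rundir := "_rundir/2385803a-2b05-11ed-92a0-caaedb0733ea"

	// Resolving up front pins the path to the original working directory.
	absRundir, err := filepath.Abs(rundir)
	if err != nil {
		panic(err)
	}

	// Elsewhere the test harness chdirs into the kustomize overlay
	// (see the driver.go line linked above); error ignored for brevity.
	_ = os.Chdir("deploy/kubernetes/overlays/stable-master")

	// After the chdir, the relative lookup stats a path under the overlay
	// directory and fails, while the absolute lookup still points at the
	// original rundir.
	_, relErr := os.Stat(filepath.Join(rundir, "kubectl"))
	_, absErr := os.Stat(filepath.Join(absRundir, "kubectl"))
	fmt.Println("relative lookup:", relErr)
	fmt.Println("absolute lookup:", absErr)
}
```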
@spiffxp: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Test name | Commit | Details | Required | Rerun command
---|---|---|---|---
pull-gcp-compute-persistent-disk-csi-driver-kubernetes-integration | fda03cd259e3dfd37ee91c9aef560d429cac2c70 | link | true | /test pull-gcp-compute-persistent-disk-csi-driver-kubernetes-integration
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/hold
There are a few files in #1046 that also need to be updated after #1046 is merged.
> EDIT: turns out it comes from https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/162f146e698588ce154ed49f3117651541224165/test/k8s-integration/driver.go#L30
> Since I'm not sure where the appropriate place to undo that chdir would be, I'll just point kubetest2's --rundir at an absolute dir.

Hmm, I suspect we should be able to run kustomize from the package root (?). Anyway, an absolute dir sgtm.
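For what it's worth, a hedged sketch of that alternative (hypothetical code, not the driver's actual implementation; it assumes kustomize is on PATH): `kustomize build` takes the target directory as an argument, so the overlay can be rendered from the package root without any os.Chdir, leaving relative paths such as the rundir untouched.

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Path relative to the package root; no chdir into the overlay needed.
	overlay := "deploy/kubernetes/overlays/stable-master"

	// `kustomize build <dir>` renders the overlay from wherever we are,
	// so the process's working directory (and any relative rundir) is
	// left alone. Illustrative only; assumes kustomize is installed.
	cmd := exec.Command("kustomize", "build", overlay)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```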
Hey there! Just letting you know that I have just fixed and re-enabled the sanity tests for PD in go/pdcsi-oss-driver/issues/990. If your sanity test PR gate build fails because this branch predates the fix, merging the latest commits from master into this branch should make them pass. Let me know if you have any questions!
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages PRs according to the following rules:
> - After 90d of inactivity, lifecycle/stale is applied
> - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
> - After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
>
> You can:
> - Reopen this PR with /reopen
> - Mark this PR as fresh with /remove-lifecycle rotten
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.