vsphere-csi-driver
[WIP] Temporary change PR
What this PR does / why we need it:
Which issue this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format; will close that issue when the PR gets merged): fixes #
Testing done: A PR must be marked "[WIP]" if no test result is provided. A WIP PR will be neither reviewed nor merged. The requester can determine a sufficient test, e.g. a build for a cosmetic change, or an E2E test in a predeployed setup. For new features, new tests should be added in addition to regression tests. If jtest is used to trigger pre-check-in tests, paste the result after jtest completes and remove "[WIP]" from the PR subject. The review cycle starts only after "[WIP]" is removed from the PR subject.
Special notes for your reviewer:
Release note:
The committers listed above are authorized under a signed CLA.
- :white_check_mark: login: inamdarm (a3fde72e202d7b821c5c85068ca23a6d84c9fb19, efe9e23d7732c770607db9d061a061e715989adf, 326fa4c400449d1e47b93016105a06e99f483384, 74b9b1a19fadb5a447c746caa1216b536dc415a8)
Hi @inamdarm. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Started vanilla Block pipeline... Build Number: 1
Started vanilla Block pipeline... Build Number: 2
Block vanilla build status: FAILURE
Stage before exit: testbed-deployment
Started vanilla Block pipeline... Build Number: 3
Block vanilla build status: FAILURE
Stage before exit: e2e-tests
Block vanilla build status: FAILURE
Stage before exit: testbed-deployment
Started vanilla Block pipeline... Build Number: 4
Started vanilla Block pipeline... Build Number: 5
Block vanilla build status: FAILURE
Stage before exit: e2e-tests
Jenkins E2E Test Results:
JUnit report was created: /home/worker/workspace/inamdarm-Block-Vanilla-E2e/Results/4/vsphere-csi-driver/tests/e2e/junit.xml
Ran 1 of 316 Specs in 440.388 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 315 Skipped
PASS
Ginkgo ran 1 suite in 8m42.369030361s
Test Suite Passed
--
Ran 14 of 316 Specs in 6389.246 seconds
FAIL! -- 13 Passed | 1 Failed | 0 Pending | 302 Skipped
--- FAIL: TestE2E (6389.38s)
FAIL
Ginkgo ran 1 suite in 1h46m45.956184762s
Test Suite Failed
Block vanilla build status: FAILURE
Stage before exit: e2e-tests
Jenkins E2E Test Results:
JUnit report was created: /home/worker/workspace/inamdarm-Block-Vanilla-E2e@2/Results/5/vsphere-csi-driver/tests/e2e/junit.xml
Ran 1 of 316 Specs in 429.258 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 315 Skipped
PASS
Ginkgo ran 1 suite in 8m23.610170823s
Test Suite Passed
--
Ran 14 of 316 Specs in 6006.523 seconds
FAIL! -- 13 Passed | 1 Failed | 0 Pending | 302 Skipped
--- FAIL: TestE2E (6006.68s)
FAIL
Ginkgo ran 1 suite in 1h40m22.450336538s
Test Suite Failed
Started vanilla Block pipeline... Build Number: 6
Block vanilla build status: FAILURE
Stage before exit: e2e-tests
Jenkins E2E Test Results:
JUnit report was created: /home/worker/workspace/inamdarm-Block-Vanilla-E2e/Results/6/vsphere-csi-driver/tests/e2e/junit.xml
Ran 1 of 316 Specs in 440.061 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 315 Skipped
PASS
Ginkgo ran 1 suite in 8m46.080549418s
Test Suite Passed
--
Ran 14 of 316 Specs in 6430.234 seconds
FAIL! -- 13 Passed | 1 Failed | 0 Pending | 302 Skipped
--- FAIL: TestE2E (6430.38s)
FAIL
Ginkgo ran 1 suite in 1h47m24.426037626s
Test Suite Failed
Started vanilla Block pipeline... Build Number: 11
Block vanilla build status: FAILURE
Stage before exit: testbed-deployment
Started vanilla Block pipeline... Build Number: 12
Block vanilla build status: FAILURE
Stage before exit: testbed-deployment
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
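The triage rules above add up to 150 days from last activity to closure. A minimal sketch of that schedule, assuming activity stops entirely (the helper name is illustrative, not part of the bot):

```python
from datetime import date, timedelta

def lifecycle_schedule(last_activity: date) -> dict:
    """Project the triage-bot milestones from the last activity date:
    stale after 90d, rotten 30d later, closed 30d after that."""
    stale = last_activity + timedelta(days=90)
    rotten = stale + timedelta(days=30)
    closed = rotten + timedelta(days=30)
    return {"lifecycle/stale": stale, "lifecycle/rotten": rotten, "closed": closed}
```

Any /remove-lifecycle comment resets the relevant clock, so the projection only holds for a PR that stays untouched.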
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: inamdarm
Once this PR has been reviewed and has the lgtm label, please assign lipingxue for approval by writing /assign @lipingxue in a comment. For more information see: The Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.