cluster-api-provider-cloudstack
DiskOffering in CloudStackMachine CRD should be a pointer to CloudStackResourceDiskOffering
Recreated PR https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/pull/328 by @hrak with minor fixups
Issue #, if available:
#326
Description of changes:
This is a breaking change in the (unreleased) v1beta3 API: it changes the `DiskOffering` property of `CloudStackMachine` to a pointer to `CloudStackResourceDiskOffering`.
The reasoning is described in the issue linked above.
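For illustration only, here is a minimal sketch of the field change. The stub type and its single field are assumptions made to keep the sketch self-contained; the real types in the CAPC API package carry more fields.

```go
package v1beta3

// Stub stand-in for the real CloudStackResourceDiskOffering; the actual
// type in the CAPC API package has more fields.
type CloudStackResourceDiskOffering struct {
	MountPath string `json:"mountPath,omitempty"`
}

// Before this PR: a value field. encoding/json ignores omitempty on struct
// values, so an unset offering still serializes as a zero-value object and
// cannot be told apart from one that was deliberately left empty.
type CloudStackMachineSpecBefore struct {
	DiskOffering CloudStackResourceDiskOffering `json:"diskOffering,omitempty"`
}

// After this PR: a pointer field. nil unambiguously means "no disk offering
// was specified".
type CloudStackMachineSpecAfter struct {
	DiskOffering *CloudStackResourceDiskOffering `json:"diskOffering,omitempty"`
}
```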
Testing performed:
make test
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Deploy Preview for kubernetes-sigs-cluster-api-cloudstack ready!
| Name | Link |
|---|---|
| Latest commit | 932469e2f0fa128491de57014c3dd74e4d253c9c |
| Latest deploy log | https://app.netlify.com/sites/kubernetes-sigs-cluster-api-cloudstack/deploys/665a2d7aff90370008f73c87 |
| Deploy Preview | https://deploy-preview-359--kubernetes-sigs-cluster-api-cloudstack.netlify.app |
/ok-to-test
Test Results: (tid-434)
Environment: kvm Rocky8 (x3), Advanced Networking with Management Server Rocky8
Kubernetes Version: v1.27.2
Kubernetes Version upgrade from: v1.26.5
Kubernetes Version upgrade to: v1.27.2
CloudStack Version: 4.19
Template: ubuntu-2004-kube
E2E Test Run Logs: https://github.com/blueorangutan/capc-prs/releases/download/capc-pr-ci-cd/capc-e2e-artifacts-pr359-sl-434.zip
Summarizing 3 Failures:
[FAIL] When testing affinity group [It] Should have host affinity group when affinity is pro
/jenkins/workspace/capc-e2e-new/test/e2e/common.go:332
[FAIL] When testing resource cleanup [BeforeEach] Should create a new network when the specified network does not exist
/jenkins/workspace/capc-e2e-new/test/e2e/resource_cleanup.go:62
[FAIL] When testing app deployment to the workload cluster with slow network [ToxiProxy] [It] Should be able to download an HTML from the app deployed to the workload cluster
/jenkins/workspace/capc-e2e-new/test/e2e/deploy_app_toxi.go:135
Ran 28 of 29 Specs in 9415.561 seconds
FAIL! -- 25 Passed | 3 Failed | 0 Pending | 1 Skipped
--- FAIL: TestE2E (9415.56s)
FAIL
/approve /retest
/approve /run-e2e -c 4.19
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: rohityadavcloud, vishesh92, weizhouapache
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~OWNERS~~ [rohityadavcloud,vishesh92,weizhouapache]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@weizhouapache a Jenkins job has been kicked off to run tests with the following parameters:
- kubernetes version: 1.27.2
- CloudStack version: 4.19
- hypervisor: kvm
- template: ubuntu-2004-kube
- Kubernetes upgrade from: 1.26.5 to 1.27.2
Setting up environment failed
Test Results: (tid-445)
Environment: kvm Rocky8 (x3), Advanced Networking with Management Server Rocky8
Kubernetes Version: v1.27.2
Kubernetes Version upgrade from: v1.26.5
Kubernetes Version upgrade to: v1.27.2
CloudStack Version: 4.19
Template: ubuntu-2004-kube
E2E Test Run Logs: https://github.com/blueorangutan/capc-prs/releases/download/capc-pr-ci-cd/capc-e2e-artifacts-pr359-sl-445.zip
Summarizing 2 Failures:
[FAIL] When testing project [AfterEach] Should create a cluster in a project
/jenkins/workspace/capc-e2e-new/test/e2e/project.go:103
[FAIL] When testing affinity group [It] Should have host affinity group when affinity is pro
/jenkins/workspace/capc-e2e-new/test/e2e/common.go:348
Ran 29 of 30 Specs in 9232.045 seconds
FAIL! -- 27 Passed | 2 Failed | 0 Pending | 1 Skipped
--- FAIL: TestE2E (9232.05s)
FAIL
It's possible that I missed something, but isn't v1beta3 already released? If this is a breaking change, don't we need v1beta4?
As per my understanding, this will allow users to migrate to `v1beta3` without any loss of metadata when they are upgrading CAPC.
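To make the "no loss" point concrete, here is a standalone sketch (not code from this PR) showing why the pointer matters when objects are serialized and converted:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-in for the disk offering type.
type DiskOffering struct {
	MountPath string `json:"mountPath,omitempty"`
}

// Value field: encoding/json ignores omitempty on struct values.
type SpecValue struct {
	DiskOffering DiskOffering `json:"diskOffering,omitempty"`
}

// Pointer field: a nil pointer really is omitted.
type SpecPointer struct {
	DiskOffering *DiskOffering `json:"diskOffering,omitempty"`
}

func main() {
	v, _ := json.Marshal(SpecValue{})
	p, _ := json.Marshal(SpecPointer{})
	fmt.Println(string(v)) // {"diskOffering":{}} -- "unset" surfaces as an empty object
	fmt.Println(string(p)) // {}                  -- "unset" stays unset
}
```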
Test Results: (tid-466)
Environment: kvm Rocky8 (x3), Advanced Networking with Management Server Rocky8
Kubernetes Version: v1.27.2
Kubernetes Version upgrade from: v1.26.5
Kubernetes Version upgrade to: v1.27.2
CloudStack Version: 4.19
Template: ubuntu-2004-kube
E2E Test Run Logs: https://github.com/blueorangutan/capc-prs/releases/download/capc-pr-ci-cd/capc-e2e-artifacts-pr359-sl-466.zip
Summarizing 3 Failures:
[FAIL] When testing app deployment to the workload cluster with slow network [ToxiProxy] [It] Should be able to download an HTML from the app deployed to the workload cluster
/jenkins/workspace/capc-e2e-new/test/e2e/deploy_app_toxi.go:135
[FAIL] When testing affinity group [It] Should have host affinity group when affinity is pro
/jenkins/workspace/capc-e2e-new/test/e2e/common.go:348
[FAIL] When testing project [AfterEach] Should create a cluster in a project
/jenkins/workspace/capc-e2e-new/test/e2e/project.go:103
Ran 29 of 30 Specs in 10007.088 seconds
FAIL! -- 26 Passed | 3 Failed | 0 Pending | 1 Skipped
--- FAIL: TestE2E (10007.09s)
FAIL
> It's possible that I missed something, but isn't v1beta3 already released? If this is a breaking change, don't we need v1beta4?
>
> As per my understanding, this will allow users to migrate to `v1beta3` without any loss of metadata when they are upgrading CAPC.
Could you help me understand how this will work? Looking at the code, there are conversions to and from v1beta3 and the other versions. An existing CloudStackMachine on v1beta3 can't mutate to use a pointer to diskOffering. It's recommended to make only additive changes to released APIs; this seems like it might not be additive in nature, unless I'm missing something.
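For context on the conversion question, here is a hypothetical sketch of how a value-to-pointer conversion could behave; the type and function names are illustrative, not the generated conversion code in this PR:

```go
package conversion

// Illustrative stand-ins for an older (value-based) and a newer
// (pointer-based) API type; the real CAPC types have more fields.
type OldDiskOffering struct{ MountPath string }
type NewDiskOffering struct{ MountPath string }

// Up-conversion: map the old zero value to nil, i.e. treat "empty" as "unset".
func convertDiskOfferingUp(in OldDiskOffering) *NewDiskOffering {
	if in == (OldDiskOffering{}) {
		return nil
	}
	out := NewDiskOffering(in)
	return &out
}

// Down-conversion: nil collapses back to the zero value. This is the lossy
// direction the review is probing: after a round trip, an explicitly empty
// offering and an unset one become indistinguishable.
func convertDiskOfferingDown(in *NewDiskOffering) OldDiskOffering {
	if in == nil {
		return OldDiskOffering{}
	}
	return OldDiskOffering(*in)
}
```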
@hrak was the original author of the PR, so he would have more and better context on it. I recreated the PR to apply some minor fixes and resolve conflicts.
I just went through the PR again to understand the changes. This PR fixes #326, and PR https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/pull/332 fixed the metadata handling across the different versions.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@vignesh-goutham let me know if we can merge this?
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Codecov Report
Attention: Patch coverage is 18.09524% with 86 lines in your changes missing coverage. Please review.
Project coverage is 26.72%. Comparing base (`d597e80`) to head (`932469e`). Report is 110 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #359 +/- ##
==========================================
+ Coverage 25.66% 26.72% +1.05%
==========================================
Files 59 62 +3
Lines 5563 6006 +443
==========================================
+ Hits 1428 1605 +177
- Misses 3996 4251 +255
- Partials 139 150 +11
View full report in Codecov by Sentry.
PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.