cluster-api-provider-openstack
:bug: Check cluster router status before reading IP list
What this PR does / why we need it:
The latest release so far, 0.9.0, has a bug that needs to be fixed. When the `OpenStackClusterSpec.APIServerLoadBalancer.AllowedCIDRs` property is set, the load balancer controller reads the router IP list from the cluster status in order to add it to the list of allowed CIDRs, but if the router is not set in the status, the controller crashes.
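For context, here is a minimal sketch of the guard this change introduces. The types and names below are illustrative stand-ins, not the real CAPO API definitions; only the fields relevant to the crash are modeled:

```go
package main

import "fmt"

// Illustrative stand-ins for the CAPO API types involved; not the real
// definitions, and only the fields relevant to the bug are shown.
type APIServerLoadBalancer struct {
	AllowedCIDRs []string
}

type RouterStatus struct {
	IPs []string
}

type OpenStackClusterSpec struct {
	APIServerLoadBalancer APIServerLoadBalancer
}

type OpenStackClusterStatus struct {
	Router *RouterStatus // nil until the router has been reconciled into the status
}

// collectAllowedCIDRs sketches the guarded read: the router IPs are only
// appended when the cluster status actually has a router. Before the fix,
// the equivalent code dereferenced Status.Router unconditionally and
// panicked on a nil pointer.
func collectAllowedCIDRs(spec OpenStackClusterSpec, status OpenStackClusterStatus) []string {
	allowedCIDRs := append([]string{}, spec.APIServerLoadBalancer.AllowedCIDRs...)
	if status.Router != nil {
		allowedCIDRs = append(allowedCIDRs, status.Router.IPs...)
	}
	return allowedCIDRs
}

func main() {
	spec := OpenStackClusterSpec{
		APIServerLoadBalancer: APIServerLoadBalancer{AllowedCIDRs: []string{"10.0.0.0/24"}},
	}
	status := OpenStackClusterStatus{} // Router is nil: the case that used to crash
	fmt.Println(collectAllowedCIDRs(spec, status)) // [10.0.0.0/24]
}
```

With a nil `Router`, the guarded version simply returns the user-supplied CIDRs instead of panicking.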
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #1976
Special notes for your reviewer:
- Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.
TODOs:
- [ ] squashed commits
- if necessary:
- [ ] includes documentation
- [ ] adds unit tests
/hold
Welcome @capaca!
It looks like this is your first PR to kubernetes-sigs/cluster-api-provider-openstack 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-sigs/cluster-api-provider-openstack has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
Hi @capaca. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Deploy Preview for kubernetes-sigs-cluster-api-openstack ready!
| Name | Link |
|---|---|
| Latest commit | a0b5cb14e51c929600fa870967c5de9b3e7d485f |
| Latest deploy log | https://app.netlify.com/sites/kubernetes-sigs-cluster-api-openstack/deploys/660476e43b13ab0008f89a42 |
| Deploy Preview | https://deploy-preview-1977--kubernetes-sigs-cluster-api-openstack.netlify.app |
To edit notification comments on pull requests, go to your Netlify site configuration.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: capaca, mdbooth
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
Needs approval from an approver in each of these files:
- ~~OWNERS~~ [mdbooth]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/test pull-cluster-api-provider-openstack-e2e-test
Oof:
```
Mar 27 20:13:49 capo-e2e-controller cloud-init[1759]: fatal: Remote branch stable/yoga not found in upstream origin
```
We're going to have to fix the 0.9 e2e tests! Hopefully just some backports from main.
Hi @mdbooth Do you know if or when we can get the e2e tests to pass? Thanks!
I'm probably not working myself until next week, but somebody might be able to help you on Slack. I would look for the changes in main which updated CI and see if they apply cleanly to 0.9.
You could also try the v0.10 alpha released yesterday!
@mdbooth I see. I'll try to reach out to people on Slack. I tried the v0.10 alpha version today and it didn't work; I ran into a few issues. I'll try to create a bug with the details.
That would be appreciated, thanks!
/test pull-cluster-api-provider-openstack-e2e-test
/ok-to-test
@capaca: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-cluster-api-provider-openstack-e2e-test | a0b5cb14e51c929600fa870967c5de9b3e7d485f | link | true | /test pull-cluster-api-provider-openstack-e2e-test |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I understand the commands that are listed here.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.