cluster-api-provider-openstack
Define a loadbalancer flavor name for the API server
/kind feature
Describe the solution you'd like
Our IaaS provider offers multiple flavors for load balancers. To use one of them, the flavor must be passed in the create call, e.g.:
openstack loadbalancer create --name standalone-lb --flavor ha_lb_tiny --description "ha_lb_tiny flavored LB" --enable --vip-subnet-id 10d89517-xxxx-yyyy-zzzz-c2cdea6c7a9d
+---------------------+----------------------------------------------------+
| Field               | Value                                              |
+---------------------+----------------------------------------------------+
| admin_state_up      | True                                               |
| availability_zone   | None                                               |
| created_at          | 2022-06-24T04:27:00                                |
| description         | ha_lb_tiny flavored LB                             |
| flavor_id           | a9fb86ec-xxxx-yyyy-zzzz-bfe78a5b1f82               |
| id                  | 50c12a8f-xxxx-yyyy-zzzz-db9f6832bb2a               |
| listeners           |                                                    |
| name                | standalone-lb                                      |
| operating_status    | OFFLINE                                            |
| pools               |                                                    |
| project_id          | fd498a961d58408aa779xxxxyyyyzzzz                   |
| provider            | amphora                                            |
| provisioning_status | PENDING_CREATE                                     |
| updated_at          | None                                               |
| vip_address         | 10.6.0.43                                          |
| vip_network_id      | 032524aa-xxxx-yyyy-zzzz-3e59cf83d588               |
| vip_port_id         | 624efc71-xxxx-yyyy-zzzz-9e1fba14f82c               |
| vip_qos_policy_id   | None                                               |
| vip_subnet_id       | 10d89517-xxxx-yyyy-zzzz-c2cdea6c7a9d               |
| tags                |                                                    |
+---------------------+----------------------------------------------------+
Similar to #1221, we would like to make the API server more configurable.
CPI Openstack already has support for this: https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/openstack/loadbalancer.go#L1713-L1715
Anything else you would like to add: We're open to implementing this as a feature if you generally agree with this issue.
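To make the idea concrete, here is a rough sketch of how such a setting could be surfaced on the OpenStackCluster API. The field name, type and placement are assumptions for discussion only, not a settled design:

```go
// Hypothetical sketch only: an optional flavor setting on the API server
// load balancer configuration of an OpenStackCluster. Names and placement
// are illustrative, not the agreed CAPO API.
type APIServerLoadBalancer struct {
	// Enabled defines whether a load balancer should be created for the
	// API server.
	// +optional
	Enabled bool `json:"enabled,omitempty"`

	// Flavor is the name or ID of the Octavia load balancer flavor to use,
	// e.g. "ha_lb_tiny". If left empty, Octavia's default flavor is used,
	// which matches today's behaviour.
	// +optional
	Flavor string `json:"flavor,omitempty"`
}
```

Leaving the field empty would keep the current behaviour, so existing clusters would be unaffected.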
Are you required to select a flavor and will openstack loadbalancer create fail if not set? Or will OpenStack select a predefined default for the flavor?
👍 +1 for aligning to CAPI.
Are you required to select a flavor and will openstack loadbalancer create fail if not set? Or will OpenStack select a predefined default for the flavor?
OpenStack can be configured to use a default flavor, which CAPO has used so far.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
I also need this. I may look at a patch soon (probably in the New Year now).
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
This is totally valid.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
I'm going to implement this and I have two questions:
- Can I add this to current API v1beta1 or do I have to wait until work on the next version of the API starts?
- Do you plan to upgrade the gophercloud library to v1.11.0 in the near future, or can I do this as part of this work? (Flavor support for load balancers has been added there and I would like to use it.)
We plan to upgrade to Gophercloud v2 once it's stable (still in beta).
What about gophercloud v1.11.0? It should be compatible with the current v1.7.0 version.
I'm going to implement this and I have two questions:
- Can I add this to current API v1beta1 or do I have to wait until work on the next version of the API starts?
We're going to try not to do new API versions for a really long time! The change must be backwards compatible. In this case you should be fine as long as you're just adding a field, and the behaviour if you don't specify the new field remains unchanged. If you would like an API review before spending too much time on the implementation, feel free to post a WIP PR which just makes the API change and tag me in it.
- Do you plan to upgrade the gophercloud library to v1.11.0 in the near future, or can I do this as part of this work? (Flavor support for load balancers has been added there and I would like to use it.)
Feel free to bump gophercloud to the latest v1.x. Please can you do it in a separate PR, though?
We also want to move to gophercloud v2 fairly soon after its release, but I'm expecting that to be a bit more work.
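For reference, a minimal sketch of what passing the flavor through gophercloud v1.x could look like once the dependency is bumped. The client wiring and IDs below are placeholders (the IDs are taken from the CLI example above), not the eventual CAPO code:

```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
)

func main() {
	// Authenticate from the usual OS_* environment variables.
	authOpts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		panic(err)
	}
	provider, err := openstack.AuthenticatedClient(authOpts)
	if err != nil {
		panic(err)
	}
	lbClient, err := openstack.NewLoadBalancerV2(provider, gophercloud.EndpointOpts{})
	if err != nil {
		panic(err)
	}

	// FlavorID on CreateOpts is the flavor support referred to above
	// (gophercloud v1.11.0). The IDs here are placeholders.
	createOpts := loadbalancers.CreateOpts{
		Name:        "kube-apiserver-lb",
		VipSubnetID: "10d89517-xxxx-yyyy-zzzz-c2cdea6c7a9d",
		FlavorID:    "a9fb86ec-xxxx-yyyy-zzzz-bfe78a5b1f82",
	}
	lb, err := loadbalancers.Create(lbClient, createOpts).Extract()
	if err != nil {
		panic(err)
	}
	fmt.Println("created load balancer:", lb.ID)
}
```

In CAPO itself the FlavorID would presumably only be set when the new API field is non-empty, so clusters that do not specify a flavor keep the current default-flavor behaviour.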
Thanks for the information.
Feel free to bump gophercloud to the latest v1.x. Please can you do it in a separate PR, though?
OK, I will create a separate PR for this.