
Define a loadbalancer flavor name for the API server

Open bavarianbidi opened this issue 2 years ago • 12 comments

/kind feature

Describe the solution you'd like
Our IaaS provider offers multiple flavors for load balancers. To use one of them, the flavor must be specified in the create call, e.g.:

openstack loadbalancer create --name standalone-lb --flavor ha_lb_tiny --description "ha_lb_tiny flavored LB" --enable --vip-subnet-id 10d89517-xxxx-yyyy-zzzz-c2cdea6c7a9d
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| availability_zone   | None                                 |
| created_at          | 2022-06-24T04:27:00                  |
| description         | ha_lb_tiny flavored LB               |
| flavor_id           | a9fb86ec-xxxx-yyyy-zzzz-bfe78a5b1f82 |
| id                  | 50c12a8f-xxxx-yyyy-zzzz-db9f6832bb2a |
| listeners           |                                      |
| name                | standalone-lb                        |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| project_id          | fd498a961d58408aa779xxxxyyyyzzzz     |
| provider            | amphora                              |
| provisioning_status | PENDING_CREATE                       |
| updated_at          | None                                 |
| vip_address         | 10.6.0.43                            |
| vip_network_id      | 032524aa-xxxx-yyyy-zzzz-3e59cf83d588 |
| vip_port_id         | 624efc71-xxxx-yyyy-zzzz-9e1fba14f82c |
| vip_qos_policy_id   | None                                 |
| vip_subnet_id       | 10d89517-xxxx-yyyy-zzzz-c2cdea6c7a9d |
| tags                |                                      |
+---------------------+--------------------------------------+
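
For comparison, the gophercloud equivalent of the CLI call above — a minimal sketch assuming the FlavorID field on loadbalancers.CreateOpts that recent gophercloud v1 releases provide (client setup is elided; note that Octavia expects the flavor's UUID here, not its name):

package sketch

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
)

// createFlavoredLB mirrors the CLI example: the only difference from a
// default create request is that FlavorID is set.
func createFlavoredLB(lbClient *gophercloud.ServiceClient, subnetID, flavorID string) (*loadbalancers.LoadBalancer, error) {
	return loadbalancers.Create(lbClient, loadbalancers.CreateOpts{
		Name:        "standalone-lb",
		Description: "ha_lb_tiny flavored LB",
		VipSubnetID: subnetID,
		FlavorID:    flavorID,
	}).Extract()
}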

Similar to #1221, we would like to make the API server load balancer more configurable.

CPI OpenStack already supports this: https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/openstack/loadbalancer.go#L1713-L1715
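
On the CAPO side, a rough sketch of what the API change might look like — the field name and its placement on the existing APIServerLoadBalancer type are illustrative assumptions, not a final design:

// Hypothetical addition to CAPO's APIServerLoadBalancer API type; existing
// fields are elided. The empty value must keep today's behaviour.
type APIServerLoadBalancer struct {
	// ... existing fields unchanged ...

	// Flavor is the name of the Octavia load balancer flavor to create the
	// API server load balancer with. If empty, Octavia's configured default
	// flavor is used, exactly as CAPO does today.
	// +optional
	Flavor string `json:"flavor,omitempty"`
}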

Anything else you would like to add:
We're open to implementing this as a feature if you generally agree with this issue.

bavarianbidi avatar Jun 24 '22 04:06 bavarianbidi

Are you required to select a flavor, and will openstack loadbalancer create fail if none is set? Or will OpenStack select a predefined default flavor?

👍 +1 for aligning to CAPI.

seanschneeweiss avatar Jun 28 '22 07:06 seanschneeweiss

Are you required to select a flavor, and will openstack loadbalancer create fail if none is set? Or will OpenStack select a predefined default flavor?

OpenStack can be configured to use a default flavor, which CAPO has used so far.
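
To see which flavors a cloud actually offers, operators can list them. A minimal gophercloud sketch, assuming the loadbalancer/v2/flavors package discussed later in this thread follows the usual List/Extract conventions:

package sketch

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/flavors"
	"github.com/gophercloud/gophercloud/pagination"
)

// listLBFlavors prints the Octavia flavors the cloud exposes; these names
// are what a CAPO flavor field would have to reference. lbClient is an
// authenticated "load-balancer" service client.
func listLBFlavors(lbClient *gophercloud.ServiceClient) error {
	return flavors.List(lbClient, flavors.ListOpts{}).EachPage(
		func(page pagination.Page) (bool, error) {
			fs, err := flavors.ExtractFlavors(page)
			if err != nil {
				return false, err
			}
			for _, f := range fs {
				fmt.Printf("%s  %s  enabled=%v\n", f.ID, f.Name, f.Enabled)
			}
			return true, nil
		})
}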

apricote avatar Jun 28 '22 08:06 apricote

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 17 '22 11:10 k8s-triage-robot

/remove-lifecycle stale

bavarianbidi avatar Oct 17 '22 11:10 bavarianbidi

I also need this. I may look at a patch soon (probably in the New Year now).

mkjpryor avatar Dec 08 '22 11:12 mkjpryor

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 08 '23 11:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Apr 07 '23 12:04 k8s-triage-robot

/remove-lifecycle rotten

This is totally valid.

dulek avatar Apr 07 '23 12:04 dulek

/lifecycle stale

k8s-triage-robot avatar Jul 06 '23 12:07 k8s-triage-robot

/remove-lifecycle stale

dulek avatar Jul 06 '23 15:07 dulek

/lifecycle stale

k8s-triage-robot avatar Jan 23 '24 23:01 k8s-triage-robot

/remove-lifecycle stale

dulek avatar Jan 29 '24 17:01 dulek

/lifecycle stale

k8s-triage-robot avatar Apr 28 '24 17:04 k8s-triage-robot

/remove-lifecycle stale

pawcykca avatar Apr 29 '24 07:04 pawcykca

I'm going to implement this and I have two questions:

  • Can I add this to the current API v1beta1, or do I have to wait until work on the next version of the API starts?
  • Do you plan to upgrade the gophercloud library to v1.11.0 in the near future, or can I do this as part of this work? Flavor support for load balancers has been added there, and I would like to use it.

pawcykca avatar Apr 29 '24 09:04 pawcykca

We plan to upgrade to Gophercloud v2 once it's stable (still in beta).

EmilienM avatar Apr 29 '24 12:04 EmilienM

What about gophercloud v1.11.0? It should be compatible with the current v1.7.0 version.

pawcykca avatar Apr 29 '24 14:04 pawcykca

I'm going to implement this and I have two questions:

  • Can I add this to the current API v1beta1, or do I have to wait until work on the next version of the API starts?

We're going to try not to do new API versions for a really long time! The change must be backwards compatible. In this case you should be fine as long as you're just adding a field and the behaviour remains unchanged when the new field is not specified. If you would like an API review before spending too much time on the implementation, feel free to post a WIP PR that just makes the API change, and tag me in it.
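
A minimal sketch of that backwards-compatible shape: the Octavia create request only changes when the new field is set. resolveFlavorID is a hypothetical helper (Octavia takes the flavor UUID in flavor_id, so a flavor name would need a lookup first):

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
)

// buildCreateOpts leaves FlavorID unset when no flavor is configured, so
// existing clusters send exactly the same request as today and Octavia's
// default flavor continues to apply.
func buildCreateOpts(lbClient *gophercloud.ServiceClient, lbName, subnetID, flavorName string) (loadbalancers.CreateOpts, error) {
	opts := loadbalancers.CreateOpts{
		Name:        lbName,
		VipSubnetID: subnetID,
	}
	if flavorName == "" {
		return opts, nil // new field unset: behaviour unchanged
	}
	id, err := resolveFlavorID(lbClient, flavorName) // hypothetical name-to-UUID lookup
	if err != nil {
		return opts, err
	}
	opts.FlavorID = id
	return opts, nil
}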

  • Do you plan to upgrade the gophercloud library to v1.11.0 in the near future, or can I do this as part of this work? Flavor support for load balancers has been added there, and I would like to use it.

Feel free to bump gophercloud to the latest v1.x. Could you do it in a separate PR, though?

We also want to move to gophercloud v2 fairly soon after its release, but I'm expecting that to be a bit more work.

mdbooth avatar Apr 30 '24 10:04 mdbooth

Thanks for the information.

Feel free to bump gophercloud to the latest v1.x. Could you do it in a separate PR, though?

OK, I will create a separate PR for this.

pawcykca avatar Apr 30 '24 11:04 pawcykca