cluster-api-provider-openstack

Add the option to disable Octavia healthmonitor creation

Open jfcavalcante opened this issue 2 years ago • 10 comments

/kind feature

Describe the solution you'd like
Currently, the OpenStack Cloud Controller Manager has an option in its load balancer plugin to disable the creation of health monitors on newly provisioned load balancers in the workload cluster.
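
For reference, this is roughly how that option is expressed in the `[LoadBalancer]` section of the cloud.conf consumed by OCCM (a sketch; check the OCCM docs for exact defaults and related settings):

```ini
# Sketch of the relevant part of cloud.conf used by
# openstack-cloud-controller-manager (values illustrative).
[LoadBalancer]
# When false, OCCM does not create an Octavia health monitor
# for the Service load balancers it provisions.
create-monitor = false
```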

I would like to discuss the possibility of also adding this option to CAPO for the creation of the API server load balancer. I'm facing issues where the health monitor breaks the load balancer status during instance reconciliation, changing its operating status to Failed. Since this feature is already available in OCCM, I don't see any problem implementing it on the CAPO side, but I would like to discuss it before moving forward.
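
To make the request concrete, here is a rough sketch of what such an option could look like on the OpenStackCluster spec. The field name `disableHealthMonitor` is purely illustrative and not part of the current API, and the apiVersion shown assumes the v1alpha7 API that was current around the time of this issue:

```yaml
# Illustrative only: disableHealthMonitor does not exist in the CAPO API;
# it sketches the option being proposed in this issue.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha7
kind: OpenStackCluster
metadata:
  name: my-cluster
spec:
  apiServerLoadBalancer:
    enabled: true
    # Proposed/hypothetical flag: skip creating the Octavia health
    # monitor for the API server load balancer pool.
    disableHealthMonitor: true
```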

Anything else you would like to add:

Even though my description of the issue isn't very detailed, my team has been dealing with it for several months while we wait for a deeper investigation to fully understand the infrastructure problem. I'm able to contribute the code for this feature, as I've already started working on it.

jfcavalcante avatar Aug 13 '23 23:08 jfcavalcante

@dulek ?

mdbooth avatar Aug 30 '23 10:08 mdbooth

@dulek ?

I stated my concerns in #1644. Those concerns have been resolved, so it makes sense to allow disabling health monitors.

dulek avatar Aug 30 '23 12:08 dulek

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 27 '24 07:01 k8s-triage-robot

/remove-lifecycle stale

dulek avatar Jan 29 '24 17:01 dulek

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 28 '24 17:04 k8s-triage-robot

/remove-lifecycle rotten

EmilienM avatar Apr 29 '24 12:04 EmilienM