cluster-api-provider-openstack
Add the option to disable Octavia healthmonitor creation
/kind feature
Describe the solution you'd like
Currently, the OpenStack Cloud Controller Manager (OCCM) has an option in its plugin to disable the creation of health monitors on the load balancers it provisions for the workload cluster.
I would like to discuss the possibility of also adding this option to CAPO for the creation of the API server load balancer. I'm facing issues where the health monitor breaks the load balancer status during instance reconciliation, changing its operating status to Failed. Since this feature is already available in OCCM, I don't see any problem implementing it on the CAPO side, but I would like to discuss it before moving forward.
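For reference, the OCCM switch I'm referring to is (as far as I know) the `create-monitor` option in the `[LoadBalancer]` section of the cloud config, along these lines:

```ini
# cloud.conf consumed by openstack-cloud-controller-manager (illustrative excerpt)
[LoadBalancer]
# When false, OCCM skips creating Octavia health monitors for the load
# balancers it provisions; the proposal is an equivalent switch in CAPO
# for the API server load balancer.
create-monitor = false
```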
Anything else you would like to add:
Even though my description of the issue isn't very detailed, my team has been dealing with it for several months while we wait for a deeper debugging effort to really understand this infrastructure problem. I'm able to contribute the code for this feature, as I've already started working on it.
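To make the discussion concrete, here is a rough sketch of the kind of API addition I have in mind; the field name and placement are placeholders, not a final design:

```go
// Sketch for discussion only: a hypothetical field on the API server load
// balancer section of the OpenStackCluster spec (name and placement not final).
type APIServerLoadBalancer struct {
	// DisableMonitor, when true, skips creating the Octavia health monitor
	// for the API server load balancer, mirroring OCCM's create-monitor=false.
	// +optional
	DisableMonitor bool `json:"disableMonitor,omitempty"`
}
```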
@dulek ?
I stated my concerns in #1644. Those concerns no longer apply; it makes sense to allow disabling health monitors.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle rotten