
[occm] Support Octavia/Amphora Prometheus endpoint creation using annotations

Open antonin-a opened this issue 1 year ago • 16 comments

Component: openstack-cloud-controller-manager (occm)

FEATURE REQUEST?:

/kind feature

As a Kubernetes + occm user I would like to be able to create a Prometheus endpoint (a listener with the special protocol "PROMETHEUS") so that I can easily monitor my Octavia load balancers with Prometheus.

What happened: Currently the only way to do so is via the OpenStack CLI / API:

openstack loadbalancer listener create --name stats-listener --protocol PROMETHEUS --protocol-port 9100 --allowed-cidr 10.0.0.0/8 $os_octavia_id

What you expected to happen: Create the Prometheus endpoint using annotations at load balancer creation (Kubernetes Service of type LoadBalancer).

Annotations we suggest adding:

apiVersion: v1
kind: Service
metadata:
  name: octavia-metrics
  annotations:
    loadbalancer.openstack.org/metrics-enable: "true"
    loadbalancer.openstack.org/metrics-port: "9100"
    loadbalancer.openstack.org/metrics-allow-cidrs: "10.0.0.0/8, fe80::/10"
    loadbalancer.openstack.org/vip-address: "10.4.2.3" # Auto-computed from the Octavia VIP; required for Prometheus configuration (currently there is no way to retrieve the private IP of a public LB)
  labels:
    app: test-octavia
spec:
  ports:
  - name: client
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
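For context, the reason the vip-address annotation matters is that Prometheus needs a reachable scrape target. Assuming the Octavia listener serves metrics on the configured port (see the monitoring guide linked below), a scrape config using the values from the manifest above might look like this sketch:

```yaml
scrape_configs:
  - job_name: octavia-amphora
    static_configs:
      # VIP surfaced via the proposed vip-address annotation,
      # port from the proposed metrics-port annotation
      - targets: ["10.4.2.3:9100"]
```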

Anything else we need to know?: Related Octavia documentation: https://docs.openstack.org/octavia/latest/user/guides/monitoring.html#monitoring-with-prometheus

As an OpenStack public cloud provider we are currently working on a custom CCM implementation; for this reason we can potentially submit the PR associated with this request, but we'd like to at least validate the implementation approach before starting development.
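To make the proposal concrete, the annotation handling could be sketched roughly as below. This is a hypothetical, self-contained illustration (the `metricsConfig` type and `parseMetricsAnnotations` function are not part of occm); it also folds in dulek's later suggestion that the presence of metrics-port alone enables the listener, with no separate metrics-enable flag:

```go
package main

import (
	"fmt"
	"net/netip"
	"strconv"
	"strings"
)

// Hypothetical annotation keys from the proposal above.
const (
	annMetricsPort       = "loadbalancer.openstack.org/metrics-port"
	annMetricsAllowCIDRs = "loadbalancer.openstack.org/metrics-allow-cidrs"
)

// metricsConfig holds the parsed settings for the PROMETHEUS listener.
type metricsConfig struct {
	Port       int
	AllowCIDRs []string
}

// parseMetricsAnnotations reads the proposed annotations from a Service's
// annotation map. A nil result with a nil error means the metrics listener
// was not requested (metrics-port absent).
func parseMetricsAnnotations(ann map[string]string) (*metricsConfig, error) {
	raw, ok := ann[annMetricsPort]
	if !ok {
		return nil, nil
	}
	port, err := strconv.Atoi(raw)
	if err != nil || port < 1 || port > 65535 {
		return nil, fmt.Errorf("invalid %s: %q", annMetricsPort, raw)
	}
	cfg := &metricsConfig{Port: port}
	// Comma-separated CIDR list; each entry is validated before use.
	for _, c := range strings.Split(ann[annMetricsAllowCIDRs], ",") {
		c = strings.TrimSpace(c)
		if c == "" {
			continue
		}
		if _, err := netip.ParsePrefix(c); err != nil {
			return nil, fmt.Errorf("invalid CIDR %q: %v", c, err)
		}
		cfg.AllowCIDRs = append(cfg.AllowCIDRs, c)
	}
	return cfg, nil
}

func main() {
	ann := map[string]string{
		annMetricsPort:       "9100",
		annMetricsAllowCIDRs: "10.0.0.0/8, fe80::/10",
	}
	cfg, err := parseMetricsAnnotations(ann)
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Port, cfg.AllowCIDRs)
}
```

The parsed config would then feed a listener-create call against Octavia; that part is omitted here since it depends on occm's internal client plumbing.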

antonin-a avatar Nov 03 '23 13:11 antonin-a

I see this as a valid feature request. I think I'd rather skip the metrics-enable annotation and assume that if metrics-port is set, we should enable the metrics listener. What I don't like here is exposing the VIP address to the end user. I guess using a FIP to reach the metrics doesn't work due to security concerns?

dulek avatar Nov 03 '23 16:11 dulek

Hello @dulek,

Most of the time, your Prometheus scraper will be deployed inside your K8s cluster. If you're scraping from a node of the cluster, the request goes through the router to reach the FIP. If so, you would need to add the router's egress IP (OpenStack-managed or not) to the Prometheus listener's allowed-CIDR list to permit the client.

IMO, using the VIP is the better option for integration within a K8s cluster.

Lucas,

Lucasgranet avatar Nov 03 '23 16:11 Lucasgranet

@Lucasgranet: Fair enough, I guess this is the only way forward then.

@jichenjc, do you think exposing the LB VIP on the Service might potentially be dangerous?

dulek avatar Nov 06 '23 17:11 dulek

Hello @dulek , any update on this one ?

antonin-a avatar Nov 27 '23 16:11 antonin-a

Hello @dulek , any update on this one ?

I've asked @jichenjc for an opinion in my previous comment. @zetaab might have something to say too.

All that being said, I don't have free cycles to work on this, as it's not a use case for us. We'd definitely welcome a contribution from your side.

dulek avatar Nov 28 '23 17:11 dulek

do you think exposing LB VIP IP on the Service might potentially be dangerous?

loadbalancer.openstack.org/vip-address: "10.4.2.3" # Auto-computed field based on Octavia VIP as it is required for Prometheus configuration or any other solution (currently it is not possible to retrieve private IP of public LBs)

Sorry, I saw this just now. I'm not a security expert, but it seems harmless, since we have to provide the LB info for some connections anyway. However, a normal app user (the one who creates the Service) would need to understand the LB details under the hood, which I don't recall seeing in other Service creation templates.

jichenjc avatar Nov 29 '23 01:11 jichenjc

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 27 '24 02:02 k8s-triage-robot

/remove-lifecycle stale

kbudde avatar Feb 27 '24 06:02 kbudde

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 27 '24 06:05 k8s-triage-robot

/remove-lifecycle stale

We will work on it

antonin-a avatar May 27 '24 07:05 antonin-a

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 25 '24 08:08 k8s-triage-robot