
No Namespace Queries not working


I'm trying to use an external metric to scale a workload using the HPA, but the external metric is in a different namespace than the workload.

I have set up the external rule as described in the docs here: https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/docs/externalmetrics.md#namespacing

In my case, the rule looks like this:

    - seriesQuery: 'queueSize{queue=~"my_queue_.*"}'
      resources:
        namespaced: false
      metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>, queue=~"my_queue_.*"}) by (<<.GroupBy>>)'
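
For reference, a minimal sketch of where such a rule might sit in the adapter configuration (only the rule itself is from my setup; the surrounding rules: keys are assumed from the Helm chart values format shown in a comment below):

    # Assumed placement: prometheus-adapter Helm chart values.
    # Only the external rule itself is real; the wrapper keys are assumptions.
    rules:
      default: false
      external:
      - seriesQuery: 'queueSize{queue=~"my_queue_.*"}'
        resources:
          namespaced: false
        metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>, queue=~"my_queue_.*"}) by (<<.GroupBy>>)'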

However, when I list the available metrics, the resource is still reported as namespaced. From the output of kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1":

    {
      "name": "queueSize",
      "singularName": "",
      "namespaced": true,
      "kind": "ExternalMetricValueList",
      "verbs": [
        "get"
      ]
    } 
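
One way to check whether the namespace actually influences the result is to fetch the metric values directly from the external metrics API; a sketch, where default is just an assumed example namespace:

    # Assumed check: fetch the values of the external metric through the API.
    # 'default' is an arbitrary namespace; with namespaced: false it should
    # not matter which one is used.
    kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/queueSize" | jq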

I'm using the latest release (v0.10.0) of the prometheus-adapter.

Could anyone help me figure out why this keeps using the namespace in the query? Am I missing something?

aroelo avatar Sep 01 '22 12:09 aroelo

I am facing the same issue. Below is my configuration for the prometheus-adapter.

    rules:
      default: false
      external:
      - seriesQuery: '{__name__="http_requests_total",path!="",job="router"}'
        metricsQuery: '(sum (rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by(path))'
        resources:
          namespaced: false

Output of kubectl get --raw /apis/external.metrics.k8s.io/v1beta1:

    {
      "kind": "APIResourceList",
      "apiVersion": "v1",
      "groupVersion": "external.metrics.k8s.io/v1beta1",
      "resources": [
        {
          "name": "http_requests_total",
          "singularName": "",
          "namespaced": true,
          "kind": "ExternalMetricValueList",
          "verbs": [
            "get"
          ]
        }
      ]
    }

The series in Prometheus looks like this: [screenshot from 2022-09-13 11-16-29]
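
Fetching the values (rather than just listing the resource) should return one item per path, since the metricsQuery aggregates with by(path); a sketch, with the default namespace assumed:

    # Assumed check: expect one ExternalMetricValue per 'path' label,
    # because the metricsQuery groups by path.
    kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/http_requests_total" | jq '.items[].metricLabels'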

shubham-bansal96 avatar Sep 13 '22 05:09 shubham-bansal96

I have the same issue with the resource being reported as namespaced. However, I am successfully querying that metric from an HPA in a different namespace.

    - type: External
      external:
        metric:
          name: queueSize
          selector:
            matchLabels:
              somelabel: somevalue
        target:
          type: Value
          value: 1
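
For completeness, here is a sketch of where that stanza sits in a full HorizontalPodAutoscaler manifest; the Deployment name, namespace, and replica bounds are assumptions, not values from my cluster:

    # Sketch of a complete autoscaling/v2 HPA; 'my-workload', the namespace,
    # and the replica bounds are assumptions.
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-workload
      namespace: app-namespace    # different from the metric's namespace
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-workload
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: External
        external:
          metric:
            name: queueSize
            selector:
              matchLabels:
                somelabel: somevalue
          target:
            type: Value
            value: 1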

So I think only the display of the metric is lying.

Joibel avatar Nov 01 '22 14:11 Joibel

Actually, in my case at least, I can issue the query to prom adapter with any namespace (even a non-existent one), and it'll return the correct value for that metric.

Joibel avatar Nov 02 '22 09:11 Joibel

From my interpretation of the docs, this is because Kubernetes requires external metrics API resources to be namespaced, so the adapter cannot report "namespaced": false.

However, that does not mean the namespace is included in the query labels - the namespace label is excluded if you set the following on the external rule:

      resources:
        namespaced: false
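
To make the effect concrete, here is roughly what the expanded metricsQuery from the original post looks like in each case (a sketch based on my reading of the docs; the exact matchers depend on your resource overrides):

    # Sketch: with namespaced: true (the default), <<.LabelMatchers>>
    # includes a matcher for the namespace in the request path:
    sum(queueSize{namespace="foo",queue=~"my_queue_.*"})

    # Sketch: with namespaced: false, that matcher is omitted, so every
    # namespace in the request path produces the same query:
    sum(queueSize{queue=~"my_queue_.*"})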

Because of this, the Prometheus Adapter will return the same external metrics no matter what namespace you specify (it just ignores the namespace, as @Joibel says).

For example, the following two commands return the same metrics, even though the namespaces foo and bar do not even exist:

    kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/foo/confluent_kafka_server_consumer_lag_offsets" | jq
    kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/bar/confluent_kafka_server_consumer_lag_offsets" | jq

jhwbarlow avatar Nov 17 '22 14:11 jhwbarlow

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 15 '23 15:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Mar 17 '23 15:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Apr 16 '23 15:04 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the /close not-planned command in the triage robot's comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 16 '23 15:04 k8s-ci-robot