webhook_request_latencies_bucket metric keeps adding new data series and becomes unusable
Expected Behavior
The Prometheus metric webhook_request_latencies_bucket stays usable in a real environment and does not keep adding new data series forever, so Prometheus is able to query it.
Actual Behavior
The Prometheus metric webhook_request_latencies_bucket creates so many data series that it is practically impossible to query in Prometheus (too much data). New series keep being added while the webhook is running, so the number of series grows without bound. Restarting the tekton-pipelines-webhook pod resets the number of series and works around the issue.
Steps to Reproduce the Problem
Run tekton-pipelines-webhook.
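One way to observe the growth over time, as a minimal sketch (it assumes Prometheus is reachable at http://localhost:9090, e.g. via a port-forward, and uses the prometheus/client_golang API client; the address is an assumption, adjust it for your setup):

```go
// Counts how many series currently back webhook_request_latencies_bucket.
// Sketch only: the Prometheus address is an assumption for this example.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		log.Fatalf("creating Prometheus client: %v", err)
	}
	promAPI := v1.NewAPI(client)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// List every series matching the metric over the last hour.
	series, warnings, err := promAPI.Series(ctx,
		[]string{`webhook_request_latencies_bucket`},
		time.Now().Add(-time.Hour), time.Now())
	if err != nil {
		log.Fatalf("listing series: %v", err)
	}
	if len(warnings) > 0 {
		log.Printf("warnings: %v", warnings)
	}
	// Re-running this periodically shows the count climbing until the
	// tekton-pipelines-webhook pod is restarted.
	fmt.Printf("webhook_request_latencies_bucket currently has %d series\n", len(series))
}
```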
Additional Info
- Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T11:41:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T11:33:59Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
- Tekton Pipeline version:
Client version: 0.12.0
Pipeline version: v0.15.0
Triggers version: v0.7.0
I can take a look at it if no one else is working on it, but it may take me a while.
@ImJasonH will this be suitable as a good first issue?
/assign ywluogg
cc @NavidZ since this relates to metrics
Dropping this here for context. The webhook_request_latencies_bucket metric (and others) is heavily influenced by the labels in question here: https://github.com/knative/pkg/pull/1464/files
Removing the labels in that pull request might help reduce the number of unique webhook_request_latencies_bucket metrics the webhook has to manage.
Aside from this, I don't know if there's a way to configure the metrics code to purge metrics from the in-memory store after a period of time. That would help too; most of the time the in-memory data is sent to a backend like Prometheus, Stackdriver, etc. anyway.
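To illustrate the label point with a minimal OpenCensus sketch (not the actual knative/pkg wiring; the measure and tag names below are illustrative assumptions): a histogram view tagged with a per-resource key exports a brand-new set of `_bucket` series for every distinct tag value it sees, which is exactly the growth pattern reported above.

```go
// Minimal OpenCensus sketch showing how a high-cardinality tag multiplies
// histogram series. Names are illustrative, not the real knative/pkg definitions.
package main

import (
	"context"
	"log"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/tag"
)

var (
	latencyMs       = stats.Float64("webhook/request_latency", "webhook request latency", stats.UnitMilliseconds)
	resourceNameKey = tag.MustNewKey("resource_name") // effectively unbounded: unique per object
)

func main() {
	if err := view.Register(&view.View{
		Name:        "webhook_request_latencies",
		Measure:     latencyMs,
		Aggregation: view.Distribution(1, 5, 10, 50, 100, 500, 1000),
		// Every distinct resource_name value recorded below produces a whole new
		// set of webhook_request_latencies_bucket series; dropping this key from
		// TagKeys is what collapses the cardinality.
		TagKeys: []tag.Key{resourceNameKey},
	}); err != nil {
		log.Fatal(err)
	}

	// Simulate admission requests for differently named resources.
	for _, name := range []string{"pipelinerun-abc", "pipelinerun-def", "pipelinerun-xyz"} {
		ctx, err := tag.New(context.Background(), tag.Upsert(resourceNameKey, name))
		if err != nil {
			log.Fatal(err)
		}
		stats.Record(ctx, latencyMs.M(12.0))
	}
}
```

So removing that tag key from the registered view (which is what dropping the labels in the linked PR amounts to) is what brings the series count back down.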
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/lifecycle stale
Send feedback to tektoncd/plumbing.
/remove-lifecycle stale
@ywluogg are you still looking into this?
@vdemeester looks like this issue would be addressed by TEP-0073: Simplify metrics, right?
Hi @jerop, I'm not looking into this anymore. Please unassign me. Thanks!
/assign @QuanZhang-William
We have a very similar problem. Many metrics have a resource_namespace label. In our case, those namespaces have randomly generated names and live only for a short time, which drives the cardinality of the resource_namespace label very high within about a week. That huge number of series results in steadily growing memory consumption.
I agree with @eddie4941 that configuring the metrics code to purge metrics from the in memory store after a period of time would help.
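The OpenCensus view layer that knative/pkg builds on has no built-in expiry, but as a rough sketch of the purge idea (the periodic unregister/re-register below is an assumption about how it could be done, not something knative/pkg exposes today, and the names are illustrative):

```go
// Rough sketch of a time-based purge: unregistering a view drops all of its
// accumulated rows, so recycling it periodically bounds the number of series
// the process keeps in memory. Illustrative only.
package main

import (
	"log"
	"time"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/tag"
)

var (
	latencyMs    = stats.Float64("webhook/request_latency", "webhook request latency", stats.UnitMilliseconds)
	namespaceKey = tag.MustNewKey("resource_namespace") // high-cardinality in this report
)

func newLatencyView() *view.View {
	return &view.View{
		Name:        "webhook_request_latencies",
		Measure:     latencyMs,
		Aggregation: view.Distribution(1, 5, 10, 50, 100, 500, 1000),
		TagKeys:     []tag.Key{namespaceKey},
	}
}

func main() {
	v := newLatencyView()
	if err := view.Register(v); err != nil {
		log.Fatal(err)
	}
	// Recycle the view periodically: series for namespaces that no longer
	// exist stop being exported after the reset.
	for range time.Tick(24 * time.Hour) {
		view.Unregister(v)
		v = newLatencyView()
		if err := view.Register(v); err != nil {
			log.Printf("re-registering view: %v", err)
		}
	}
}
```

The trade-off is that the recycled view loses its in-process history, so bucket counters start from zero again after each purge.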
Based on the discussion in the API WG: /assign @khrm
@pritidesai: GitHub didn't allow me to assign the following users: khrm.
Note that only tektoncd members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
Based on the discussion in the API WG: /assign @khrm
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @khrm
@pritidesai This was fixed by https://github.com/knative/pkg/pull/1464
So we can close this.
/close
@khrm: You can't close an active issue/PR unless you authored it or you are a collaborator.
In response to this:
@pritidesai This was fixed by https://github.com/knative/pkg/pull/1464
So we can close this.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@khrm not only the resource_name label but also the resource_namespace label can contribute to this "high cardinality" issue. To fix it for every use case, one would need to purge metrics from the in-memory store after a period of time.
This issue is still relevant. See this comment as well as this one.
We have the same issue, too.
I have a proposal for knative/pkg at https://github.com/knative/pkg/pull/2931.
knative/pkg now gives the option to exclude arbitrary tags. I assume the next action item is to bump knative/pkg and customize the webhook options.
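For whoever picks this up, a rough sketch of what the wiring in Tekton's cmd/webhook/main.go could look like after the bump. webhook.WithOptions and the ServiceName/Port/SecretName fields are existing knative/pkg API; the tag-exclusion part (the StatsReporterOptions field and webhook.WithoutTags below) is a hypothetical stand-in for whatever option knative/pkg#2931 actually exposes, so check the bumped module before relying on those names:

```go
// Hedged sketch only. StatsReporterOptions and webhook.WithoutTags are
// HYPOTHETICAL placeholders for the tag-exclusion knob from knative/pkg#2931;
// the rest follows the existing knative/pkg webhook wiring.
package main

import (
	"knative.dev/pkg/injection/sharedmain"
	"knative.dev/pkg/signals"
	"knative.dev/pkg/webhook"
)

func main() {
	ctx := webhook.WithOptions(signals.NewContext(), webhook.Options{
		ServiceName: "tekton-pipelines-webhook",
		Port:        8443,
		SecretName:  "webhook-certs",
		// HYPOTHETICAL: drop the high-cardinality tags from the webhook metrics.
		// Replace with the real option exposed by the bumped knative/pkg.
		StatsReporterOptions: []webhook.StatsReporterOption{
			webhook.WithoutTags("resource_name", "resource_namespace"),
		},
	})

	// The existing admission controller constructors would be passed here as before.
	sharedmain.MainWithContext(ctx, "webhook" /*, admission controller constructors */)
}
```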