
Add support for exemplars during recording rule evaluation

Open · vjsamuel opened this pull request · 8 comments

This PR adds support for recording rules to emit exemplars. I would like to get some early feedback before I add things like unit tests and feature flags (if necessary).

I wanted to add a sampling ratio so that only a subset of exemplars is appended along with the recorded metric, but I would love to get feedback beforehand.

vjsamuel, Mar 04 '23

Thanks for this PR. Could you please write some words about the requirements you intended to meet? E.g. for a rule which outputs fewer series than it inputs, how are exemplars mapped from input to output series? Perhaps you can add this to #8798.

It's best if we can debate the requirements separately from the implementation.

I also fear the implementation will be expensive. One or two benchmarks might help to answer this.

bboreham, Mar 08 '23

@bboreham thanks for getting back. I somehow missed this comment.

The overall requirement we intend to meet is: given a recording/alerting rule, add exemplars to the recorded metric/alert. This can be done either by appending all raw exemplars to the new series that matches the labels of the new metric, or by appending only a sample of them (a 0-1 sampling ratio, with a sane default of, say, 0.01).
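As a rough illustration of the sampling idea (a sketch only; sampleExemplars is a made-up name, not code from this PR):

import (
	"math/rand"

	"github.com/prometheus/prometheus/model/exemplar"
)

// sampleExemplars keeps each exemplar with probability ratio (0-1).
// A ratio >= 1 keeps everything.
func sampleExemplars(in []exemplar.Exemplar, ratio float64) []exemplar.Exemplar {
	if ratio >= 1 {
		return in
	}
	out := make([]exemplar.Exemplar, 0, len(in))
	for _, e := range in {
		if rand.Float64() < ratio {
			out = append(out, e)
		}
	}
	return out
}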

This PR covers the first part of that, the recording rule. (I need some help to understand how best to add this to alerting rules, as the implementation right now makes it not so straightforward to do the same.)

The current implementation of ExemplarAppender expects the exemplar to be appended only to a head series that already exists; otherwise it fails. To be true to the current requirements, the exemplars from the raw series are matched against the labels of the new recorded metric and then added to the recorded series. That being said, raw-metric exemplars carry raw series labels, while recorded-metric exemplars carry recorded series labels.

I felt this is a happy compromise, given that it is important for us to have access to the exemplars on recorded metrics even if it means that not all of the raw labels are necessarily present.

There is one portion of this PR yet to be done, which is to have queries go through fanout storage instead of just local storage, similar to what EngineQueryFunc does today. But I wanted to get a first review in case I am totally off on the implementation :)

I can add some benchmarks if necessary. What I have refrained from is making separate calls to exemplar storage for each new series being queried, as that can get out of hand pretty quickly.

vjsamuel, Mar 14 '23

E.g. for a rule which outputs fewer series than it inputs, how are exemplars mapped from input to output series?

I did not follow your answer to this question. Please could you explain it more simply for me.

bboreham, Mar 27 '23

@bboreham let's say there is a metric latency which has pod, namespace, instance, le, http_path and http_verb as labels.

a recording rule histogram_quantile(0.99, sum(rate(latency[5m])) by (le, http_path, http_verb, namespace)) would:

  • create a rolled-up metric that is the 99th percentile of latency and only record path, verb and namespace as the series labels.
  • query all exemplars for the raw query and try to match them against the newly generated series. I.e., an exemplar of the series pod="abc", namespace="def", path="/", verb="GET", le="+Inf" would get appended to the newly created series path="/", namespace="def", verb="GET".

The reason we do this is that all exemplars appended to exemplar storage need an already existing head series entry.
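In code, that matching step is roughly the following (just a sketch; matchesOutputSeries is a made-up name):

import "github.com/prometheus/prometheus/model/labels"

// matchesOutputSeries reports whether an exemplar taken from a raw input
// series belongs to a given output series: every label of the output series
// must be present with the same value on the raw series. In the example
// above, the raw series pod="abc", namespace="def", path="/", verb="GET",
// le="+Inf" matches the output series path="/", namespace="def", verb="GET".
func matchesOutputSeries(rawSeries, outputSeries labels.Labels) bool {
	match := true
	outputSeries.Range(func(l labels.Label) {
		if rawSeries.Get(l.Name) != l.Value {
			match = false
		}
	})
	return match
}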

does this clarify?

vjsamuel, Mar 28 '23

@bboreham could you kindly take a look?

vjsamuel, Apr 11 '23

does this clarify?

Yes thank you; I think you switched names from http_path and http_verb to path and verb midway through your explanation but I get the general idea.

But this gives a new question: what happens if the query does a label_replace so the label names don't match?

bboreham, Apr 12 '23

@bboreham thank you for the review. I have the same concern about how things like label_replace would work. I will do a rebase, add test cases, and address all the comments you had.

vjsamuel, Apr 13 '23

if I run a rule count by (job)({job!=""}) does it fetch and re-append every exemplar?

Unfortunately, yes. I have contemplated adding a sampling ratio (0-1) to only add a few into the recorded metric unless the end user explicitly opts in to 100% of the exemplars being re-appended. Do you think this is something I should do?

what happens if the query does a label_replace so the label names don't match?

If I am not wrong, that would work fine, because I am only doing:

selectors := parser.ExtractSelectors(rule.vector)
if len(selectors) < 1 {
	return nil, nil, err
}

Once the exemplar is available, I overwrite all the labels with the labels from the output vector, which should take care of label_replace. Please correct me if my assumption is invalid.
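Put together, the flow is roughly this (a sketch only, with the querier wiring elided; exemplarsFor is a made-up name):

import (
	"github.com/prometheus/prometheus/model/exemplar"
	"github.com/prometheus/prometheus/promql/parser"
	"github.com/prometheus/prometheus/storage"
)

// exemplarsFor fetches the exemplars of the rule's input series. eq is an
// already-open storage.ExemplarQuerier and start/end bound the evaluated
// window. The caller re-appends each returned exemplar against the labels
// of the output (recorded) series, so any renaming done by label_replace
// is already reflected there and the raw series labels are dropped.
func exemplarsFor(eq storage.ExemplarQuerier, expr parser.Expr, start, end int64) ([]exemplar.Exemplar, error) {
	selectors := parser.ExtractSelectors(expr)
	if len(selectors) < 1 {
		return nil, nil
	}
	results, err := eq.Select(start, end, selectors...)
	if err != nil {
		return nil, err
	}
	var out []exemplar.Exemplar
	for _, res := range results {
		out = append(out, res.Exemplars...)
	}
	return out, nil
}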

vjsamuel, Apr 18 '23

@bboreham, I added some test cases and realized that the promql test utility doesn't support loading exemplars :) Is there a simpler way to test this feature, or should I figure out how to enhance the utility?
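For now, the workaround is to seed exemplars by hand in the test setup, along these lines (a sketch; it assumes exemplar storage is enabled on the TSDB, and the series and values are made up):

import (
	"context"
	"testing"

	"github.com/prometheus/prometheus/model/exemplar"
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/tsdb"
)

func TestSeedExemplar(t *testing.T) {
	opts := tsdb.DefaultOptions()
	// Without these, AppendExemplar is silently a no-op.
	opts.EnableExemplarStorage = true
	opts.MaxExemplars = 100

	db, err := tsdb.Open(t.TempDir(), nil, nil, opts, nil)
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()

	app := db.Appender(context.Background())
	lset := labels.FromStrings("__name__", "latency_bucket", "path", "/", "le", "+Inf")
	// The series must exist in the head before an exemplar can be attached.
	ref, err := app.Append(0, lset, 1000, 1)
	if err != nil {
		t.Fatal(err)
	}
	if _, err := app.AppendExemplar(ref, lset, exemplar.Exemplar{
		Labels: labels.FromStrings("trace_id", "abc123"),
		Value:  0.5,
		Ts:     1000,
		HasTs:  true,
	}); err != nil {
		t.Fatal(err)
	}
	if err := app.Commit(); err != nil {
		t.Fatal(err)
	}
}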

(hope you had a nice KubeCon)

vjsamuel, Apr 23 '23

Hello @vjsamuel, are you still interested in continuing with this pull request and addressing @bboreham's comments?

Thanks!

roidelapluie, Jun 27 '23

Apologies for the massive delay @roidelapluie @bboreham, I got pulled into some internal priorities and will restart work on this in a couple of days. I will have it ready for review by next week.

vjsamuel, Jun 27 '23

@bboreham I have done the crudest possible hack to add the tests. I would appreciate your guidance on formally adding the parsing logic for properly loading exemplars with exemplar labels.

vjsamuel, Jul 06 '23

This is the new code with exemplars turned on but no exemplars present:

BenchmarkRuleEvalWithNoExemplars/no_labels_in_recording_rule,_metric_name_in_query_result-10                   126202      9662 ns/op
BenchmarkRuleEvalWithNoExemplars/only_new_labels_in_recording_rule,_metric_name_in_query_result-10             126406      9486 ns/op
BenchmarkRuleEvalWithNoExemplars/some_replacement_labels_in_recording_rule,_metric_name_in_query_result-10     125504      9517 ns/op
BenchmarkRuleEvalWithNoExemplars/no_labels_in_recording_rule,_no_metric_name_in_query_result-10                 68275     17202 ns/op

This is with the feature turned on and exemplars present:

BenchmarkRuleEvalWithExemplars/no_labels_in_recording_rule,_metric_name_in_query_result-10                     114679     10341 ns/op
BenchmarkRuleEvalWithExemplars/only_new_labels_in_recording_rule,_metric_name_in_query_result-10               113203     10366 ns/op
BenchmarkRuleEvalWithExemplars/some_replacement_labels_in_recording_rule,_metric_name_in_query_result-10       114609     10297 ns/op
BenchmarkRuleEvalWithExemplars/no_labels_in_recording_rule,_no_metric_name_in_query_result-10                   66717     18087 ns/op

This is the current baseline:

BenchmarkRuleEval/no_labels_in_recording_rule,_metric_name_in_query_result-10                                  136940      8617 ns/op
BenchmarkRuleEval/only_new_labels_in_recording_rule,_metric_name_in_query_result-10                            137937      8669 ns/op
BenchmarkRuleEval/some_replacement_labels_in_recording_rule,_metric_name_in_query_result-10                    137877      8624 ns/op
BenchmarkRuleEval/no_labels_in_recording_rule,_no_metric_name_in_query_result-10                                73915     16336 ns/op
PASS

There seems to be a 7% slowdown from the baseline when the feature is turned on, even when there are no exemplars.

vjsamuel, Aug 09 '23

Tip: run your benchmarks several times to account for random slow-downs on your computer, for example with -count=6. Then use the benchstat tool to analyze the timings; it will give you the difference and variance. Install instructions are here: https://cs.opensource.google/go/x/perf
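For example (the package path and file names here are illustrative):

go install golang.org/x/perf/cmd/benchstat@latest

go test -run '^$' -bench BenchmarkRuleEval -count 6 ./rules > old.txt   # on main
go test -run '^$' -bench BenchmarkRuleEval -count 6 ./rules > new.txt   # on the PR branch

benchstat old.txt new.txt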

bboreham, Sep 06 '23

Yup, see https://github.com/bwplotka/go-proto-bench/blob/main/Makefile#L75 as an example framework to use (make bench and make cmp)

bwplotka, Dec 05 '23