
aggregate

Open superopsdev opened this issue 6 years ago • 9 comments

aggregate
        ^.pre..<<app_metric>>.m1_rate
        ^.pre..<<app_metric>>.m5_rate
        ^.pre..<<app_metric>>.p98
        ^.pre..<<app_metric>>.p75
        ^.pre.*.<<app_metric>>.mean
    every 10 seconds
    expire after 35 seconds
    compute sum write to
        .pre.<app_metric>.m1_rate
    compute sum write to
        .pre.<app_metric>.m5_rate
    compute max write to
        .pre.<app_metric>.p98
    compute max write to
        .pre.<app_metric>.p75
    compute max write to
        .pre.<app_metric>.mean
    ;

I configured the aggregate like this; why can't I receive the aggregated data?

superopsdev avatar Jul 17 '19 12:07 superopsdev


have you tried running your config through the test mode (-t) of the relay? E.g.:

$ carbon-c-relay -t -f your.conf
...
aggregate
        ^.pre..foo.bar.m1_rate
        ^.pre..foo.bar.m5_rate
        ^.pre..foo.bar.p98
        ^.pre..foo.bar.p75
        ^.pre.*.foo.bar.mean
    every 10 seconds
    expire after 35 seconds
    timestamp at end of bucket
    compute sum write to
        .pre.foo.bar.m1_rate
    compute sum write to
        .pre.foo.bar.m5_rate
    compute max write to
        .pre.foo.bar.p98
    compute max write to
        .pre.foo.bar.p75
    compute max write to
        .pre.foo.bar.mean
    ;

.pre..foo.bar.m1_rate
match
    * -> .pre..foo.bar.m1_rate
    file(foo)
        /dev/stdout
aggregation
    ^.pre..foo.bar.m1_rate (regex) -> .pre..foo.bar.m1_rate
    sum -> .pre.foo.bar.m1_rate
    sum -> .pre.foo.bar.m5_rate
    max -> .pre.foo.bar.p98
    max -> .pre.foo.bar.p75
    max -> .pre.foo.bar.mean

My best bet is that you didn't paste your config exactly, and if you did, that you forgot that . means any character in a regex, so it may match more than you intend. Consider using send to on the aggregate rule if you want to avoid a metric loop inside the relay.
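The unescaped-dot pitfall is easy to check outside the relay; a quick sketch in Python, using made-up metric names purely for illustration (the same pattern shape as the config later in this thread):

```python
import re

# In a regex an unescaped "." matches ANY character, not just the
# dot separator, so a rule written with bare dots can also match
# the shorter metric names the aggregate itself produces.
pattern = re.compile(r'^([^.]+).prod.[^.]+.([^.]+).m1_rate')

# The intended input, a per-host metric, matches as expected:
assert pattern.match('app.prod.host-01.getInfo.m1_rate')

# But the aggregate's own output (\1.prod.\2.m1_rate) ALSO matches,
# because the bare dots absorb characters of the name -- a feedback
# loop if the output is re-injected into the relay:
assert pattern.match('app.prod.getInfo.m1_rate')

# Escaping the dots makes them literal separators, so the shorter
# output name no longer matches the input rule:
escaped = re.compile(r'^([^.]+)\.prod\.[^.]+\.([^.]+)\.m1_rate')
assert escaped.match('app.prod.host-01.getInfo.m1_rate')
assert escaped.match('app.prod.getInfo.m1_rate') is None
```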

grobian avatar Jul 18 '19 18:07 grobian

aggregate
        ^([^.]+).prod.[^.]+.([^.]+).m1_rate
        ^([^.]+).prod.[^.]+.([^.]+).m5_rate
        ^([^.]+).prod.[^.]+.([^.]+).p98
        ^([^.]+).prod.[^.]+.([^.]+).p75
        ^([^.]+).prod.[^.]+.([^.]+).mean
    every 10 seconds
    expire after 35 seconds
    compute sum write to
        \1.prod.\2.m1_rate
    compute sum write to
        \1.prod.\2.m5_rate
    compute max write to
        \1.prod.\2.p98
    compute max write to
        \1.prod.\2.p75
    compute max write to
        \1.prod.\2.mean
    ;

cluster local_carbon
    forward 127.0.0.1:1003
    ;

I configured the aggregate with the rules above, but the QPS values are 1,000 times higher than before. What is my problem?

superopsdev avatar Jul 19 '19 17:07 superopsdev
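For comparison, a sketch of the same rule with the regex dots escaped and an explicit send to, along the lines grobian suggested, so the aggregate's output is routed straight to a cluster instead of being re-injected where it could match the input patterns again (cluster name and port are taken from the comment above; the metric layout is assumed to be the one shown there):

```
cluster local_carbon
    forward 127.0.0.1:1003
    ;

aggregate
        ^([^.]+)\.prod\.[^.]+\.([^.]+)\.m1_rate
        ^([^.]+)\.prod\.[^.]+\.([^.]+)\.m5_rate
        ^([^.]+)\.prod\.[^.]+\.([^.]+)\.p98
        ^([^.]+)\.prod\.[^.]+\.([^.]+)\.p75
        ^([^.]+)\.prod\.[^.]+\.([^.]+)\.mean
    every 10 seconds
    expire after 35 seconds
    compute sum write to
        \1.prod.\2.m1_rate
    compute sum write to
        \1.prod.\2.m5_rate
    compute max write to
        \1.prod.\2.p98
    compute max write to
        \1.prod.\2.p75
    compute max write to
        \1.prod.\2.mean
    send to local_carbon
    ;
```

Whether the escaping alone explains a 1,000x difference depends on what other metrics the bare dots were matching, so running the corrected config through -t again is worth doing.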

Is the QPS rule I configured wrong? What is the problem?

superopsdev avatar Jul 22 '19 03:07 superopsdev

What do you refer to with QPS? The actual values produced by the aggregation?

grobian avatar Jul 22 '19 06:07 grobian

^([^.]+).prod.[^.]+.([^.]+).m1_rate — this item's value is wrong. The value of this metric is between 0 and 1 after aggregation by carbon-aggregator.py, but the result after carbon-c-relay aggregation is several thousand, obviously much larger.

superopsdev avatar Jul 22 '19 09:07 superopsdev

^([^.]+).prod.[^.]+.([^.]+).m1_rate is the input, so if it's wrong, where does it come from? If you're referring to \1.prod.\2.m1_rate being wrong, and exactly 1000x, then I'm still very interested in what your input looks like; perhaps it is being parsed wrongly.

grobian avatar Jul 22 '19 09:07 grobian

This is written in the code.

App.prod.host-01._user_getInfo.m1_rate — for a single host, the value obtained is correct. But when it is aggregated across multiple hosts (host-01, host-02), the aggregated value is very abnormal.

superopsdev avatar Jul 22 '19 10:07 superopsdev

is the max close to what you expect?

grobian avatar Jul 23 '19 17:07 grobian