Andrei Lalaev

Results 12 comments of Andrei Lalaev

It would be nice to see this feature, especially since competitors have it.

yep, after downgrading to 3.2.17 everything is OK

I'd like to join the Kibana integration beta-testing :slightly_smiling_face:

here you can find my qryn settings:

```
qryn-read:
  image: qxip/qryn:3.0.26-bun
  restart: unless-stopped
  environment:
    CLICKHOUSE_SERVER: clickhouse
    CLUSTER_NAME: cluster
    CLICKHOUSE_PORT: 8123
    CLICKHOUSE_DB: qrynprod
    CLICKHOUSE_AUTH: "default:{{ clickhouse.password }}"
    CLICKHOUSE_PROTO: http
    LABELS_DAYS: 3...
```

@akvlad

```
┌─series_n─┬─points_n─┐
│      976 │   351536 │
└──────────┴──────────┘
```

Looks like I was able to almost completely get rid of the problem. I routed access to ClickHouse through chproxy and added caches there. I added incremental requests and caches...
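For context, putting chproxy in front of ClickHouse with response caching looks roughly like the sketch below. This is a minimal illustrative chproxy `config.yml`, not my actual settings: the listen port, cache directory, sizes, and TTL are assumptions you would tune for your own cluster.

```yaml
# chproxy config sketch: qryn talks to chproxy, chproxy proxies to ClickHouse
# and serves repeated queries from a filesystem cache.
server:
  http:
    listen_addr: ":9090"        # qryn's CLICKHOUSE_PORT points here

users:
  - name: "default"
    to_cluster: "default"
    to_user: "default"
    cache: "shortterm"          # enable the cache defined below

clusters:
  - name: "default"
    nodes: ["clickhouse:8123"]  # upstream ClickHouse HTTP endpoint
    users:
      - name: "default"

caches:
  - name: "shortterm"
    mode: "file_system"
    file_system:
      dir: "/data/cache"        # illustrative path
      max_size: 1Gb
    expire: 60s                 # illustrative TTL for cached responses
```

The idea is that dashboard refreshes repeating the same heavy queries get answered from chproxy's cache instead of hitting ClickHouse each time.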

@lmangani https://github.com/metrico/qryn-oss-demo/pull/7 done

this happens on the log dashboard when the "All" variable is selected. The query looks like this:

```
{k8s_cluster_name="cluster1", k8s_namespace_name=~"service1|service2|service3|service|service4|service5|service6|service7|service8|service9|service10|service11|service11|service12|service13|service14|service15|service15|service16|service17|service18|service19|service20|service21|service22|service23|service24service|service25|service26|service27|service28", level="WARN"} |= "" | json body="body"
```

If there are...

@akvlad From my observations, 20-30 instances of qryn writing to ClickHouse are much more efficient than the otel collector. For some reason, scaling the collectors doesn't give such a boost for my...

@akvlad I have the same behavior.

```
CLICKHOUSE_SERVER: chproxy.chproxy.svc
CLICKHOUSE_PORT: 9090
CLICKHOUSE_DB: qryn
CLICKHOUSE_AUTH: default:
CLICKHOUSE_PROTO: http
CLICKHOUSE_TIMEFIELD: record_datetime
CLUSTER_NAME: shard
BULK_MAXAGE: 4000
BULK_MAXSIZE: 10000000
BULK_MAXCACHE: 100000
LABELS_DAYS: 7
SAMPLES_DAYS:...
```