
Thanos compactor backlog

Open BouchaaraAdil opened this issue 11 months ago • 8 comments

Thanos, Prometheus and Golang version used:

  • thanos: 0.34.1
  • kube-prometheus-stack-48.6.0: prometheus-v2.45.0

Object Storage Provider:

  • S3

What happened:

compaction backlog: Screen Shot 2024-03-18 at 17 11 00

What you expected to happen:

  1. Our setup consists of many clusters with Thanos sidecars per environment, all storing metrics in a central S3 bucket. The compactor wasn't keeping up because compaction was slow.
  2. We scaled out by running multiple sharded compactors, which caused the drop in the graph, which was good.
  3. We then upgraded Thanos to the latest version (0.34.1) and scaled the compactor vertically, but we noticed the backlog climbing again. Compactor flags:
      - args:
        - compact
        - --log.level=debug
        - --log.format=logfmt
        - --http-address=0.0.0.0:10902
        - --data-dir=/data
        - --retention.resolution-raw=180d
        - --retention.resolution-5m=365d
        - --retention.resolution-1h=0d
        - --consistency-delay=30m
        - --objstore.config-file=/conf/objstore.yml
        - --compact.concurrency=50
        - --downsample.concurrency=10
        - --block-files-concurrency=100
        - --block-meta-fetch-concurrency=100
        - --compact.blocks-fetch-concurrency=100
        - --block-viewer.global.sync-block-timeout=30m
        - --block-viewer.global.sync-block-interval=5m
        - --wait-interval=1s
        - --compact.enable-vertical-compaction
        - --deduplication.replica-label=prometheus_replica
        - --deduplication.func=penalty
        - --delete-delay=48h
        - --wait

and

        resources:
          limits:
            cpu: "20"
            memory: 20G
          requests:
            cpu: "20"
            memory: 20G

We observed errors like "Status: 500. Message: execution: sum and count timestamps not aligned" on some queries. The backlog metrics we watch:

  1. Compaction Backlog: thanos_compact_todo_compactions
  2. Compaction Block Backlog: thanos_compact_todo_compaction_blocks
  3. Downsample Backlog: thanos_compact_todo_downsample_blocks
  4. Block Deletion Backlog: thanos_compact_todo_deletion_blocks

thanos_compact_todo_compactions is always 0
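
For reference, a minimal alerting sketch over these backlog metrics (group and alert names below are hypothetical; it assumes the compactor's /metrics endpoint is scraped by Prometheus):

    groups:
      - name: thanos-compactor-backlog              # hypothetical group name
        rules:
          - alert: ThanosCompactBacklogNotDraining  # hypothetical alert name
            # Planned compactions that stay queued for hours mean the compactor is not keeping up.
            expr: sum(thanos_compact_todo_compactions) > 0
            for: 6h
          - alert: ThanosCompactDownsampleBacklogNotDraining
            # Blocks waiting for downsampling that never drain.
            expr: sum(thanos_compact_todo_downsample_blocks) > 0
            for: 6h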

Full logs to relevant components:

Anything else we need to know:

2024-03-19 00:50:47.693	
caller=compact.go:1466 level=info msg="start of GC"
2024-03-19 00:50:47.688	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=7m48.994184246s duration_ms=468994 cached=26009 returned=25127 partial=0
2024-03-19 00:50:38.477	
caller=fetcher.go:820 level=debug msg="removed replica label" label=prometheus_replica count=18838
2024-03-19 00:50:34.009	
caller=fetcher.go:407 level=debug component=block.BaseFetcher msg="fetching meta data" concurrency=100
2024-03-19 00:50:34.002	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=7m48.930507728s duration_ms=468930 cached=26009 returned=26009 partial=0
2024-03-19 00:42:58.691	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=6m3.515992146s duration_ms=363515 cached=26009 returned=25127 partial=0
2024-03-19 00:42:49.595	
caller=fetcher.go:820 level=debug msg="removed replica label" label=prometheus_replica count=18838
2024-03-19 00:42:45.071	
caller=fetcher.go:407 level=debug component=block.BaseFetcher msg="fetching meta data" concurrency=100
2024-03-19 00:42:45.065	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=6m4.828559979s duration_ms=364828 cached=26009 returned=26009 partial=0
2024-03-19 00:36:55.168	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=7m57.780002178s duration_ms=477780 cached=26009 returned=25127 partial=0
2024-03-19 00:36:44.681	
caller=fetcher.go:820 level=debug msg="removed replica label" label=prometheus_replica count=18838
2024-03-19 00:36:40.236	
caller=fetcher.go:407 level=debug component=block.BaseFetcher msg="fetching meta data" concurrency=100
2024-03-19 00:36:40.230	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=7m42.841847446s duration_ms=462841 cached=26009 returned=26009 partial=0
2024-03-19 00:28:57.388	
caller=compact.go:1461 level=info msg="start sync of metas"
2024-03-19 00:28:57.388	
caller=tls_config.go:277 level=info service=http/server component=compact msg="TLS is disabled." http2=false address=[::]:10902
2024-03-19 00:28:57.388	
caller=tls_config.go:274 level=info service=http/server component=compact msg="Listening on" address=[::]:10902
2024-03-19 00:28:57.388	
caller=fetcher.go:407 level=debug component=block.BaseFetcher msg="fetching meta data" concurrency=100
2024-03-19 00:28:57.388	
caller=http.go:73 level=info service=http/server component=compact msg="listening for requests and metrics" address=0.0.0.0:10902
2024-03-19 00:28:57.388	
caller=intrumentation.go:75 level=info msg="changing probe status" status=healthy
2024-03-19 00:28:57.388	
caller=intrumentation.go:56 level=info msg="changing probe status" status=ready
2024-03-19 00:28:57.388	
caller=compact.go:679 level=info msg="starting compact node"
2024-03-19 00:28:57.387	
caller=compact.go:405 level=info msg="retention policy of 5 min aggregated samples is enabled" duration=8760h0m0s
2024-03-19 00:28:57.387	
caller=compact.go:398 level=info msg="retention policy of raw samples is enabled" duration=4320h0m0s
2024-03-19 00:28:57.387	
caller=compact.go:256 level=info msg="vertical compaction is enabled" compact.enable-vertical-compaction=true
2024-03-19 00:28:57.387	
caller=compact.go:251 level=info msg="deduplication.replica-label specified, enabling vertical compaction" dedupReplicaLabels=prometheus_replica
2024-03-19 00:28:57.386	
caller=factory.go:53 level=info msg="loading bucket configuration"
2024-03-19 00:28:57.386	
caller=main.go:67 level=debug msg="maxprocs: Updating GOMAXPROCS=[20]: determined from CPU quota"
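
The block-metadata sync alone takes 6-8 minutes per iteration here (~26k blocks). A minimal recording-rule sketch to keep an eye on that duration (rule name is hypothetical; it assumes the standard thanos_blocks_meta_sync_duration_seconds histogram is exposed and scraped):

    groups:
      - name: thanos-compactor-meta-sync            # hypothetical group name
        rules:
          - record: job:thanos_blocks_meta_sync_duration_seconds:avg30m   # hypothetical rule name
            # Average wall-clock time of one block-metadata sync over the last 30 minutes.
            expr: |
              sum by (job) (rate(thanos_blocks_meta_sync_duration_seconds_sum[30m]))
              /
              sum by (job) (rate(thanos_blocks_meta_sync_duration_seconds_count[30m]))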

BouchaaraAdil avatar Mar 19 '24 00:03 BouchaaraAdil

@BouchaaraAdil Have you checked your compactor metrics? Is it in halt status?

You can check that via the thanos_compact_halted metric.

https://thanos.io/tip/operating/compactor-backlog.md/#make-sure-compactors-are-running
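
A minimal alert sketch for that check (alert name is hypothetical; it assumes the compactor metrics are scraped):

    groups:
      - name: thanos-compactor-halted               # hypothetical group name
        rules:
          - alert: ThanosCompactHalted              # hypothetical alert name
            # thanos_compact_halted is set to 1 when the compactor stops after an unrecoverable error.
            expr: max(thanos_compact_halted) > 0
            for: 5m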

yeya24 avatar Mar 21 '24 07:03 yeya24

@yeya24 Yes, I already checked; the value is 0.

Logs are mostly like:

caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=7m8.010456222s duration_ms=428010 cached=24894 returned=24894 partial=0	
caller=fetcher.go:820 level=debug msg="removed replica label" label=prometheus_replica count=18750	
caller=fetcher.go:407 level=debug component=block.BaseFetcher msg="fetching meta data" concurrency=60	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=7m8.234159055s duration_ms=428234 cached=24894 returned=24894 partial=0	
caller=blocks_cleaner.go:58 level=info msg="cleaning of blocks marked for deletion done"	
caller=blocks_cleaner.go:44 level=info msg="started cleaning of blocks marked for deletion"	
caller=clean.go:61 level=info msg="cleaning of aborted partial uploads done"	
caller=clean.go:34 level=info msg="started cleaning of aborted partial uploads"	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=8m35.588933685s duration_ms=515588 cached=24894 returned=24894 partial=0	
caller=fetcher.go:820 level=debug msg="removed replica label" label=prometheus_replica count=18750	
caller=fetcher.go:407 level=debug component=block.BaseFetcher msg="fetching meta data" concurrency=60	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=8m35.392333486s duration_ms=515392 cached=24894 returned=24894 partial=0	
caller=blocks_cleaner.go:58 level=info msg="cleaning of blocks marked for deletion done"	
caller=blocks_cleaner.go:44 level=info msg="started cleaning of blocks marked for deletion"	
caller=clean.go:61 level=info msg="cleaning of aborted partial uploads done"	
caller=clean.go:34 level=info msg="started cleaning of aborted partial uploads"	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=8m11.753199423s duration_ms=491753 cached=24894 returned=24894 partial=0	
caller=fetcher.go:820 level=debug msg="removed replica label" label=prometheus_replica count=18750	
caller=fetcher.go:407 level=debug component=block.BaseFetcher msg="fetching meta data" concurrency=60	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=8m11.674345163s duration_ms=491674 cached=24894 returned=24894 partial=0	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=7m35.466654203s duration_ms=455466 cached=24894 returned=24894 partial=0	
caller=fetcher.go:820 level=debug msg="removed replica label" label=prometheus_replica count=18750	
caller=fetcher.go:407 level=debug component=block.BaseFetcher msg="fetching meta data" concurrency=60	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=7m35.686672464s duration_ms=455686 cached=24894 returned=24894 partial=0	
caller=blocks_cleaner.go:58 level=info msg="cleaning of blocks marked for deletion done"	
caller=blocks_cleaner.go:44 level=info msg="started cleaning of blocks marked for deletion"	
caller=clean.go:61 level=info msg="cleaning of aborted partial uploads done"	
caller=clean.go:34 level=info msg="started cleaning of aborted partial uploads"	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=6m37.443088828s duration_ms=397443 cached=24894 returned=24894 partial=0	
caller=fetcher.go:820 level=debug msg="removed replica label" label=prometheus_replica count=18750	
caller=fetcher.go:407 level=debug component=block.BaseFetcher msg="fetching meta data" concurrency=60	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=6m36.845099529s duration_ms=396845 cached=24894 returned=24894 partial=0	
caller=blocks_cleaner.go:58 level=info msg="cleaning of blocks marked for deletion done"	
caller=blocks_cleaner.go:44 level=info msg="started cleaning of blocks marked for deletion"	
caller=clean.go:61 level=info msg="cleaning of aborted partial uploads done"	
caller=clean.go:34 level=info msg="started cleaning of aborted partial uploads"	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=6m39.587074548s duration_ms=399587 cached=24894 returned=24894 partial=0	
caller=fetcher.go:820 level=debug msg="removed replica label" label=prometheus_replica count=18750	
caller=fetcher.go:407 level=debug component=block.BaseFetcher msg="fetching meta data" concurrency=60	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=6m40.384243402s duration_ms=400384 cached=24894 returned=24894 partial=0	
caller=fetcher.go:557 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=6m54.588252009s duration_ms=414588 cached=24894 returned=24791 partial=0	
caller=fetcher.go:820 level=debug msg="removed replica label" label=prometheus_replica count=18647	
caller=fetcher.go:867 level=debug msg="block is too fresh for now" block=01HSJP6XN8DPDBJ9XHZTTWDWRD	
caller=fetcher.go:867 level=debug msg="block is too fresh for now" block=01HSJP6XKT2BVBNEB3THRGR0WC	
caller=fetcher.go:867 level=debug msg="block is too fresh for now" block=01HSJP6X2G14XJ04WJ53CZXRKC	
caller=fetcher.go:867 level=debug msg="block is too fresh for now" block=01HSJP6WVZ01H29HPT1QGK07T5	

And the compaction rate is very low; it only picks up after recreating the compactor pod. When I removed vertical compaction, the compaction rate was high and it started compacting even fresher blocks, from 2 hours up to 8.6 hours.

Screen Shot 2024-03-22 at 11 18 55
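
For clarity, a sketch of what "removing the vertical compaction" amounts to against the args listed in the first comment; note from the startup log above that specifying --deduplication.replica-label on its own already enables vertical compaction, so it has to be dropped as well (sketch only, remaining flags unchanged):

      - args:
        - compact
        # ... all other flags as in the first comment ...
        # removed for this test:
        # - --compact.enable-vertical-compaction
        # - --deduplication.replica-label=prometheus_replica
        # - --deduplication.func=penalty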

BouchaaraAdil avatar Mar 22 '24 10:03 BouchaaraAdil

@GiedriusS any idea on the "Error executing query: sum and count timestamps not aligned" behaviour?

BouchaaraAdil avatar Apr 02 '24 23:04 BouchaaraAdil

@yeya24 any updates?

BouchaaraAdil avatar Apr 02 '24 23:04 BouchaaraAdil

@bwplotka any idea?

BouchaaraAdil avatar Apr 08 '24 15:04 BouchaaraAdil

I'm facing the same issue. I'm using Thanos (0.36.0), Prometheus (2.53.0), and a Ceph cluster for object storage. I have an HA Prometheus setup scraping the same set of targets, and the compaction rate is very low on the Thanos compactor. The blocks are uploaded successfully to object storage via the Thanos sidecar, but I have issues with the compaction and downsampling process.

Thanos compactor flags:

  - args:
    - compact
    - --data-dir=/var/thanos/compact
    - --log.level=info
    - --http-address=0.0.0.0:10902
    - --objstore.config-file=/etc/objectstore/monitoring-user-s3-creds.yaml
    - --retention.resolution-raw=14d
    - --retention.resolution-5m=30d
    - --retention.resolution-1h=180d
    - --delete-delay=0
    - --wait
    - --wait-interval=15m
    - --web.disable
    - --enable-auto-gomemlimit
    - --deduplication.replica-label=prometheus_replica
    - --deduplication.func=penalty
    - --block-files-concurrency=8
    - --compact.blocks-fetch-concurrency=8
    - --compact.concurrency=8
    - --downsample.concurrency=8

Thanos compactor resources:

resources:
  limits:
    cpu: "8"
    memory: 24Gi
  requests:
    cpu: "8"
    memory: 24Gi

The Thanos compactor is not halting, but the thanos_compact_todo_downsample_blocks metric is always 0 and the backlog keeps getting heavier. Our 2h Prometheus blocks are about 1 GB each.

The thanos_compact_todo_compaction_blocks metric: Screenshot 1403-05-23 at 11 01 59

Thanos compact UI: Screenshot 1403-05-23 at 11 03 21

Vertical compaction is done, but the compaction level is stuck at level 2. Also, there is no error or warning in the logs. Verbosity is info, but I also checked in debug mode and found nothing related to this issue.

As for resource usage, I checked from the node with the top command, and the compactor does not even use half of its resources.

navidshariaty avatar Aug 13 '24 07:08 navidshariaty

@BouchaaraAdil were you able to fix the issue?

navidshariaty avatar Aug 13 '24 16:08 navidshariaty

I fixed it by running multiple compactors, each sharded on a list of cluster external labels.
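
For anyone hitting this later: a minimal sketch of that kind of sharding, one compactor instance per group of clusters selected through the compactor's --selector.relabel-config-file flag on the cluster external label (the label name, cluster values and file path below are assumptions; adjust to your own external labels):

      # Shard 1 of N: only compacts blocks whose "cluster" external label matches the selector.
      - args:
        - compact
        - --wait
        - --objstore.config-file=/conf/objstore.yml
        - --selector.relabel-config-file=/conf/selector.yml
        # remaining flags (retention, concurrency, dedup, ...) as in the comments above

      # /conf/selector.yml for this shard: keep only blocks from these clusters;
      # run further instances with disjoint regexes to cover the rest.
      - source_labels: [cluster]
        regex: (prod-eu-1|prod-eu-2)
        action: keep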

BouchaaraAdil avatar Sep 10 '24 17:09 BouchaaraAdil