
thanos-query: deduplication picks up time-series with missing data

Open Hashfyre opened this issue 5 years ago • 29 comments

Thanos, Prometheus and Golang version used:
  • thanos: v0.3.1
  • prometheus: v2.5.0
  • kubernetes: v1.12.6 (distro: KOPS)
  • weave: weaveworks/weave-kube:2.5.0
  • Cloud platform: AWS
  • EC2 instance type: R5.4XL

Architecture


      G1                               G2
      |                                |
      |                                |
      TQ1                              TQ2
      |                                |
 --------------                        |
 |------------|-------------------------                 
 |            |                        |
TSC1        TSC2                       TS
 |            |
P1           P2

  • G1: Grafana realtime
  • G2: Grafana historical
  • TQ1: Thanos Query realtime (15d retention)
  • TQ2: Thanos Query historical
  • TSC: Thanos Sidecars
  • TS: Thanos Store

Each sidecar and the store are fronted by a Service with a *.svc.cluster.local DNS name, which the --store flag points to.

G2, TQ2 are not involved in this RCA.

What happened

Event timeline:

  • Due to some weave-net issues on our monitoring instance group, one of the Prometheus replicas, P1, stops scraping some targets.

[Screenshot: 2019-03-25, 7:17 PM]

  • We see the following metric gap in Grafana (G1): [Screenshot: 2019-03-26, 7:24 PM]. This particular metric was being scraped from cloudwatch-exporter.

  • We investigate thanos-query and see the following deduplication behavior:

[Screenshots: 2019-03-24, 6:37 PM]
  • We can see that instead of two series per metric we have only one; however, thanos-query seems to produce contiguous data with dedup=true, which is enabled by default.

  • Later on, we migrate the data of the bad Prometheus pod to a new volume and make P2 live.

  • We see the following data in thanos-query with dedup=false: [Screenshots: 2019-03-26, 10:48 PM]

We can clearly see that one Prometheus has the data and the other is missing it.

  • However, when we query with dedup=true, the merged set displays missing data instead of the contiguous data we expected. [Screenshot: 2019-03-26, 10:48 PM]

What you expected to happen

We expected Thanos deduplication to trust the series that has contiguous data over the one with missing data, and to produce a series with contiguous data. Missing a scrape in an HA Prometheus environment is expected at times; if one of the Prometheus replicas has the data, the final output should not show missing data.
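For illustration only, here is a minimal Go sketch (not Thanos code) of the behaviour we expect from "trust the replica that has the data": a union-style merge in which a timestamp missing from one replica is filled from the other. It assumes both replicas scrape at identical, aligned timestamps, which real HA Prometheus pairs do not, so this is an idealised picture of the expectation rather than a workable dedup algorithm; the sample data and function names are made up.

```go
package main

import (
	"fmt"
	"sort"
)

// sample is a timestamp/value pair (timestamps in seconds for readability).
type sample struct {
	t int64
	v float64
}

// unionMerge sketches the expected behaviour: take every timestamp covered by
// either replica, so a gap in one replica is filled by the other. This is an
// idealised illustration only; it is not how Thanos actually merges series.
func unionMerge(a, b []sample) []sample {
	byTS := map[int64]float64{}
	for _, s := range a {
		byTS[s.t] = s.v
	}
	for _, s := range b {
		if _, ok := byTS[s.t]; !ok {
			byTS[s.t] = s.v
		}
	}
	out := make([]sample, 0, len(byTS))
	for t, v := range byTS {
		out = append(out, sample{t, v})
	}
	sort.Slice(out, func(i, j int) bool { return out[i].t < out[j].t })
	return out
}

func main() {
	// Replica P1 misses the scrapes at t=30 and t=45; P2 has them all.
	p1 := []sample{{0, 1}, {15, 1}, {60, 1}, {75, 1}}
	p2 := []sample{{0, 1}, {15, 1}, {30, 1}, {45, 1}, {60, 1}, {75, 1}}

	for _, s := range unionMerge(p1, p2) {
		fmt.Println(s.t, s.v) // contiguous: 0, 15, 30, 45, 60, 75
	}
}
```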

How to reproduce it (as minimally and precisely as possible):

  • Have a setup as in the repo's Kubernetes examples, or as described in the architecture above.
  • Block the network on one of the Prometheus replicas so that it misses scrapes and hence has gaps in its data.
  • Make the blocked Prometheus available again after a significant deltaT.
  • Use thanos-query to deduplicate the dataset and compare the results.

Environment (underlying K8s worker node):

  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.4 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.4 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
  • Kernel (e.g. uname -a): Linux ip-10-100-6-218 4.4.0-1054-aws #63-Ubuntu SMP Wed Mar 28 19:42:42 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Hashfyre avatar Mar 27 '19 10:03 Hashfyre

Hi, thanks for the report :wave:

Yea, I think this is essentially some edge case for our penalty algorithm. The code is here: https://github.com/improbable-eng/thanos/blob/master/pkg/query/iter.go#L416

The problem is that this case is pretty rare (e.g., we cannot repro it). I would say adding more unit tests would be nice and would help narrow down what's wrong. Help wanted (:
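To make the mechanism easier to picture, below is a deliberately simplified, hypothetical Go sketch of penalty-style merging. It is not the code from pkg/query/iter.go linked above; the switching rule, penalty value, and sample data are all invented. It only illustrates how a merge that stays locked onto one replica until that replica falls far behind can let a gap in the active replica survive into the deduplicated output.

```go
package main

import "fmt"

// sample is a timestamp/value pair (timestamps in seconds for readability).
type sample struct {
	t int64
	v float64
}

// penaltyMerge is a simplified, hypothetical sketch of penalty-based
// deduplication: it keeps emitting samples from the replica it is currently
// locked onto, and only switches to the other replica once the active one
// falls behind by more than `penalty`. This is NOT the actual Thanos
// implementation; it only illustrates why a gap in the active replica can
// leak into the merged result.
func penaltyMerge(a, b []sample, penalty int64) []sample {
	var out []sample
	i, j := 0, 0
	useA := true
	for i < len(a) || j < len(b) {
		switch {
		case i >= len(a):
			useA = false
		case j >= len(b):
			useA = true
		case useA && a[i].t > b[j].t+penalty:
			useA = false // A has fallen too far behind: switch to B.
		case !useA && b[j].t > a[i].t+penalty:
			useA = true // B has fallen too far behind: switch to A.
		}
		if useA {
			out = append(out, a[i])
			// Skip B samples already covered by the sample we just emitted.
			for j < len(b) && b[j].t <= a[i].t {
				j++
			}
			i++
		} else {
			out = append(out, b[j])
			for i < len(a) && a[i].t <= b[j].t {
				i++
			}
			j++
		}
	}
	return out
}

func main() {
	// Replica A misses the scrapes at t=30 and t=45; replica B has them all
	// (offset by 1s, as HA replicas usually are).
	a := []sample{{0, 1}, {15, 1}, {60, 1}, {75, 1}}
	b := []sample{{1, 1}, {16, 1}, {31, 1}, {46, 1}, {61, 1}, {76, 1}}

	// With a penalty comparable to the size of A's hole, the merge never
	// switches to B, so the 15s..60s gap shows up in the "deduplicated"
	// series too: 0, 15, 60, 75, 76.
	for _, s := range penaltyMerge(a, b, 50) {
		fmt.Println(s.t, s.v)
	}
}
```

In this toy model a smaller penalty does switch to the other replica and the gap closes, which is roughly why the behaviour depends so much on scrape intervals and gap sizes, and is hard to reproduce on demand.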

bwplotka avatar Mar 28 '19 18:03 bwplotka

I am having this same issue. I can actually reproduce it by having a couple prometheus instances scraping the same target, then just rebooting (recreating the pod, in my case) a single node. It will miss one or two scrapes. You'll then start to see gaps in the data if thanos happens to query the node that was rebooted.

MacroPower avatar Jan 20 '20 17:01 MacroPower

This issue/PR has been automatically marked as stale because it has not had recent activity. Please comment on status otherwise the issue will be closed in a week. Thank you for your contributions.

stale[bot] avatar Feb 19 '20 18:02 stale[bot]

@bwplotka if we had a data dump of one of these, we should be able to extract the time series with the raw data that causes this, no? In that case, if someone could share a data dump like that, it would help us a lot. If you feel it’s confidential data, I think we’d also be open to accepting the data privately and extracting the time series ourselves. That is, if you trust us of course :)

brancz avatar Feb 19 '20 18:02 brancz

Yes! We only care about the samples, so you can mask the series for privacy reasons if you want! :+1: (:

bwplotka avatar Feb 19 '20 19:02 bwplotka

This issue/PR has been automatically marked as stale because it has not had recent activity. Please comment on status otherwise the issue will be closed in a week. Thank you for your contributions.

stale[bot] avatar Apr 19 '20 10:04 stale[bot]

Looks like this is the last remaining deduplication characteristic we could improve. I would not call it a bug necessarily; it is just not responsive enough by design. I plan to adjust it in the near future.

Looks like this is also the only remaining bug standing in the way of offline compaction working!

bwplotka avatar May 19 '20 06:05 bwplotka

We have the same issue with v0.13.0-rc.1. Here the target has been unavailable from 4:30 to 7:00, and this gap is OK. But we also see gaps from 10:00 until now, even though the data actually exists; here I'm changing the zoom from 12h to 6h: [image]

And then back to the 12h zoom, but this time with deduplication turned off (it is --query.replica-label=replica on the querier side):

I've tried changing different query params (resolution, partial response, etc.), but only deduplication, combined with a time range that contains the initial gap, leads to such a result. So it seems that having a stale metric in the time range leads to gaps on each replica-label change.

Here is the same 6h window, moved to the time of the initial gap: And you can see that the gap after 10:00 appears in the 6h window too.

sepich avatar Jun 09 '20 13:06 sepich

Hello 👋 Looks like there was no activity on this issue for last 30 days. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗 If there will be no activity for next week, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

stale[bot] avatar Jul 09 '20 16:07 stale[bot]

Closing for now as promised, let us know if you need this to be reopened! 🤗

stale[bot] avatar Jul 16 '20 16:07 stale[bot]

> Looks like this is the last remaining deduplication characteristic we could improve. I would not call it a bug necessarily; it is just not responsive enough by design. I plan to adjust it in the near future.

@bwplotka Was this already done? Or is there a config change to work around this issue? We see the same issue with Thanos 0.18.0.

omron93 avatar May 28 '21 11:05 omron93

@onprem @kakkoyun Is there a way to reopen this issue? Or is it better to create a new one?

omron93 avatar Jun 07 '21 10:06 omron93

Hello 👋 Could you please try out a newer version of Thanos to see if it's still valid? Of course we could reopen this issue.

kakkoyun avatar Jun 07 '21 10:06 kakkoyun

@kakkoyun I've installed 0.21.1 and we're still seeing the same behaviour.

omron93 avatar Jun 07 '21 10:06 omron93

We see the same behavior. It seems like only one instance (we have 2 Prometheus instances scraping the same targets) is taken into account and the other one is completely ignored (so dedup(A, B) == A)

thanos:v0.21.1

malejpavouk avatar Jun 24 '21 08:06 malejpavouk

Hello 👋 Looks like there was no activity on this issue for the last two months. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗 If there will be no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

stale[bot] avatar Aug 25 '21 19:08 stale[bot]

/notstale

malejpavouk avatar Aug 25 '21 20:08 malejpavouk

Hello 👋 Looks like there was no activity on this issue for the last two months. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗 If there will be no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

stale[bot] avatar Oct 30 '21 06:10 stale[bot]

/notstale

omron93 avatar Nov 04 '21 07:11 omron93

Hello 👋 Looks like there was no activity on this issue for the last two months. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗 If there will be no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

stale[bot] avatar Jan 09 '22 02:01 stale[bot]

still relevant

jmichalek132 avatar Feb 09 '22 19:02 jmichalek132

Hello 👋 Looks like there was no activity on this issue for the last two months. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗 If there will be no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

stale[bot] avatar Apr 16 '22 03:04 stale[bot]

Still relevant

omron93 avatar Apr 19 '22 08:04 omron93

Adding myself here to watch this issue.

aarontams avatar Jul 25 '22 14:07 aarontams

Adding myself here too.

clalos2592 avatar Sep 16 '22 10:09 clalos2592

We are seeing this issue as well. Dedup ignores a series which has no breaks in favour of one which does.

jamessewell avatar Oct 24 '22 01:10 jamessewell

Seems like we've faced this too, on 0.29.0. Thanos Query has multiple sources and selects a Prometheus sidecar with data gaps in recent data. It's very strange that this issue has persisted for so long.

Antiarchitect avatar Dec 09 '22 14:12 Antiarchitect

> Seems like we've faced this too, on 0.29.0. Thanos Query has multiple sources and selects a Prometheus sidecar with data gaps in recent data. It's very strange that this issue has persisted for so long.

I've also had this issue on the same version, and have been able to verify that all of the metrics are being received correctly, so the issue appears to be in how the data is queried.

caoimheharvey avatar Apr 18 '23 06:04 caoimheharvey

Facing a similar issue with missing metrics in v0.32.3. The metrics are being remotely written from two Prometheus replica instances, each with a unique external replica label, into the Receiver. The Receiver uses multiple replicas for a high-availability setup. However, with deduplication enabled in Thanos Query, metrics are intermittently missing in Grafana.

[Screenshot: 2024-02-09, 12:16 PM]

saikatg3 avatar Feb 09 '24 06:02 saikatg3