thanos-query: deduplication picks up time-series with missing data
Thanos, Prometheus and Golang version used:
- thanos: v0.3.1
- prometheus: v2.5.0
- kubernetes: v1.12.6
- Kubernetes Distro: KOPS
- weave: weaveworks/weave-kube:2.5.0
- Cloud Platform: AWS
- EC2 Instance Type: R5.4XL
Architecture
G1 G2
| |
| |
TQ1 TQ2
| |
-------------- |
|------------|-------------------------
| | |
TSC1 TSC2 TS
| |
P1 P2
G1: Grafana realtime
G2: Grafana Historical
TQ1: Thanos Query realtime (15d retention)
TQ2: Thanos Query historical
TSC: Thanos Sidecars
TS: Thanos store
Each sidecar and the store is fronted by a service with a *.svc.cluster.local DNS name, to which the --store flag points.
G2, TQ2 are not involved in this RCA.
What happened
Event timeline:
- Due to some weave-net issues on our monitoring instance group, one of the Prometheus replicas, P1, stops scraping some targets.
- We see the following metric gap in Grafana (G1). This particular metric was being scraped from cloudwatch-exporter.
- We investigate thanos-query and see the following deduplication behavior:
- We can see that instead of having two series per metric we have only one; however, thanos-query seems to produce contiguous data with dedup=true, which is enabled by default.
- Later on we migrate the data of the bad Prometheus pod to a new volume and make P2 live.
- We see the following data in thanos-query with dedup=false. We can clearly see that one Prometheus has data and the other is missing it.
- However, when we query with dedup=true, the merged set displays missing data instead of the contiguous data we expected.
What you expected to happen
We expected Thanos deduplication to trust the series that has contiguous data over the one with missing data, and to produce a series with contiguous data. Missing a scrape in an HA Prometheus environment is expected at times; if one of the Prometheus replicas has the data, the final output should not show missing data.
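To make that expectation concrete, here is a toy, standalone Go sketch (not Thanos code; the timestamps and values are made up for illustration) of the naive merge described above: at every timestamp, take a sample from whichever replica has one, so a gap in one replica is covered by the other.

```go
package main

import (
	"fmt"
	"sort"
)

// sample is a simplified (timestamp in seconds, value) pair.
type sample struct {
	t int64
	v float64
}

// unionMerge is the behaviour the report expects from dedup: at every
// timestamp, emit a sample from whichever replica has one, so a gap in one
// replica is covered by the other.
func unionMerge(a, b []sample) []sample {
	byTS := map[int64]float64{}
	for _, s := range b {
		byTS[s.t] = s.v
	}
	for _, s := range a {
		byTS[s.t] = s.v // values of an HA pair should agree anyway
	}
	ts := make([]int64, 0, len(byTS))
	for t := range byTS {
		ts = append(ts, t)
	}
	sort.Slice(ts, func(i, j int) bool { return ts[i] < ts[j] })
	out := make([]sample, 0, len(ts))
	for _, t := range ts {
		out = append(out, sample{t, byTS[t]})
	}
	return out
}

func main() {
	// P1 missed the scrapes between t=45s and t=90s (the weave-net incident).
	p1 := []sample{{0, 1}, {15, 1}, {30, 1}, {105, 1}, {120, 1}}
	// P2 scraped every 15s without interruption.
	p2 := []sample{{0, 1}, {15, 1}, {30, 1}, {45, 1}, {60, 1}, {75, 1}, {90, 1}, {105, 1}, {120, 1}}
	fmt.Println(unionMerge(p1, p2)) // 9 samples, no gap
}
```

The catch, discussed further down the thread, is that real HA replicas scrape at slightly offset timestamps, so the actual deduplication cannot simply take a union of timestamps; it switches between replicas using a penalty instead.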
How to reproduce it (as minimally and precisely as possible):
- Have a setup as in the examples of the repo in Kubernetes, or as described in the architecture above.
- Block the network on one of the Prometheus replicas so that it misses scrapes and hence has gaps in its data.
- Make the blocked Prometheus available again after a significant deltaT.
- Use thanos-query to deduplicate the dataset and compare the results.
Environment: Underlying K8S Worker Node:
- OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.4 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.4 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
- Kernel (e.g. uname -a): Linux ip-10-100-6-218 4.4.0-1054-aws #63-Ubuntu SMP Wed Mar 28 19:42:42 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Hi, thanks for the report :wave:
Yeah, I think this is essentially some edge case in our penalty algorithm. The code is here: https://github.com/improbable-eng/thanos/blob/master/pkg/query/iter.go#L416
The problem is that this case is pretty rare (e.g. we cannot repro it). I would say adding more unit tests would be nice and would help narrow down what's wrong. Help wanted (:
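For anyone picking this up, here is a heavily simplified, standalone Go sketch of the penalty idea only (the actual implementation lives in pkg/query/iter.go and differs in detail; the 5s initial penalty, timestamps, and values below are made up for illustration). The merge keeps reading from the replica that produced the last emitted sample and pushes the other replica forward past the last timestamp plus a penalty, so the two slightly offset scrape streams are not interleaved. Because the penalty grows with the observed jump, samples that only exist on the other replica can get skipped, which is one way a gap on the active replica can survive into the deduplicated output:

```go
package main

import "fmt"

// sample is a simplified (timestamp in ms, value) pair; timestamps are
// assumed non-negative in this sketch.
type sample struct {
	t int64
	v float64
}

// penaltyMerge illustrates the penalty idea: stay on the replica that emitted
// the last sample and skip the other replica ahead past lastT plus a penalty
// that grows with the observed jump.
func penaltyMerge(a, b []sample) []sample {
	const initialPenalty = int64(5000) // ms; an assumption for this sketch

	seek := func(s []sample, idx int, minT int64) int {
		for idx < len(s) && s[idx].t < minT {
			idx++
		}
		return idx
	}

	var out []sample
	i, j := 0, 0
	lastT := int64(-1)
	penA, penB := int64(0), int64(0)
	first := true

	for {
		// Skip samples on each replica that fall before lastT+1+penalty.
		i = seek(a, i, lastT+1+penA)
		j = seek(b, j, lastT+1+penB)
		if i >= len(a) && j >= len(b) {
			return out
		}

		useA := j >= len(b) || (i < len(a) && a[i].t <= b[j].t)
		if useA {
			if first {
				penB = initialPenalty
			} else {
				penB = 2 * (a[i].t - lastT) // penalise B proportionally to the jump on A
			}
			penA = 0
			out = append(out, a[i])
			lastT = a[i].t
			i++
		} else {
			if first {
				penA = initialPenalty
			} else {
				penA = 2 * (b[j].t - lastT)
			}
			penB = 0
			out = append(out, b[j])
			lastT = b[j].t
			j++
		}
		first = false
	}
}

func main() {
	// P1 scraped every 15s but missed its 45s-90s samples.
	p1 := []sample{{0, 1}, {15000, 1}, {30000, 1}, {105000, 1}, {120000, 1}}
	// P2 scraped every 15s without interruption, offset by 1s from P1.
	p2 := []sample{
		{1000, 1}, {16000, 1}, {31000, 1}, {46000, 1}, {61000, 1},
		{76000, 1}, {91000, 1}, {106000, 1}, {121000, 1},
	}
	for _, s := range penaltyMerge(p1, p2) {
		fmt.Printf("%ds ", s.t/1000)
	}
	fmt.Println()
}
```

Running the sketch prints 0s 15s 30s 61s 76s 91s 106s 121s: even though P2 covers the gap, its 46s sample is skipped because P2 was penalised while the merge was reading from P1. A unit test encoding a scenario like this against the real iterator should make the edge case reproducible.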
I am having this same issue. I can actually reproduce it by having a couple prometheus instances scraping the same target, then just rebooting (recreating the pod, in my case) a single node. It will miss one or two scrapes. You'll then start to see gaps in the data if thanos happens to query the node that was rebooted.
This issue/PR has been automatically marked as stale because it has not had recent activity. Please comment on status otherwise the issue will be closed in a week. Thank you for your contributions.
@bwplotka if we had a data dump of one of these, we should be able to extract the time series with the raw data that causes this, no? In that case, if someone would share a data dump like that, it would help us a lot. If you feel it's confidential data, I think we'd also be open to accepting the data privately and extracting the time series ourselves. That is, if you trust us of course :)
Yes! We only care about the samples, so you can mask the series if you want, for privacy reasons! :+1: (:
Looks like this is the last standing deduplication characteristic we could improve. I would not call it a bug necessarily; it is just not responsive enough by design. I plan to adjust it in the near future.
Looks like this is the only remaining bug in the way of offline compaction working!
We have the same issue with v0.13.0-rc.1:
Here the target has been unavailable from 4:30 to 7:00, and this gap is OK. But we also see gaps from 10:00 until now.
But the data actually exists; here I'm changing the zoom from 12h to 6h:
And then back to the 12h zoom, but this time with deduplication turned off (it is --query.replica-label=replica on the querier side):
I've tried changing different query params (resolution, partial response, etc.), but only deduplication combined with a time range containing the initial gap leads to such a result.
So, it seems that having a stale metric in the time range leads to gaps at each replica-label change.
Here is the same 6h window, moved to the time of the initial gap:
And you can see that the gap after 10:00 appears in the 6h window too.
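For readers wondering what the --query.replica-label flag changes conceptually: with deduplication on, the querier drops the configured replica label(s) from every series and merges the series whose remaining label sets are identical; with dedup=false the replica label is kept, which is why you see one series per Prometheus. A rough, standalone Go illustration of that grouping step (not the actual Thanos implementation; the label names and values are invented):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// groupByDroppingReplica drops the replica label from each series' label set
// and groups series that become identical. Each group is what deduplication
// later merges into a single output series.
func groupByDroppingReplica(series []map[string]string, replicaLabel string) map[string][]map[string]string {
	groups := map[string][]map[string]string{}
	for _, ls := range series {
		// Build a stable key from every label except the replica label.
		keys := make([]string, 0, len(ls))
		for k := range ls {
			if k == replicaLabel {
				continue
			}
			keys = append(keys, k)
		}
		sort.Strings(keys)
		parts := make([]string, 0, len(keys))
		for _, k := range keys {
			parts = append(parts, k+"="+ls[k])
		}
		key := strings.Join(parts, ",")
		groups[key] = append(groups[key], ls)
	}
	return groups
}

func main() {
	// Two HA Prometheus replicas exposing the same series, differing only in
	// the (invented) "replica" label.
	series := []map[string]string{
		{"__name__": "up", "job": "node", "replica": "p1"},
		{"__name__": "up", "job": "node", "replica": "p2"},
	}
	for key, members := range groupByDroppingReplica(series, "replica") {
		fmt.Printf("%s -> %d replica series merged into one\n", key, len(members))
	}
}
```

The gaps discussed in this issue then come from how the samples within each group are merged (the penalty logic linked above), not from the grouping itself.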
Hello 👋 Looks like there was no activity on this issue for last 30 days.
Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗
If there will be no activity for next week, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.
Closing for now as promised, let us know if you need this to be reopened! 🤗
> Looks like this is the last standing deduplication characteristic we could improve. I would not call it a bug necessarily; it is just not responsive enough by design. I plan to adjust it in the near future.
@bwplotka Was this already done? Or is there a config change to work around this issue? We see the same issue with Thanos 0.18.0.
@onprem @kakkoyun Is there a way to reopen this issue? Or is it better to create a new one?
Hello 👋 Could you please try out a newer version of Thanos to see if it's still valid? Of course we could reopen this issue.
@kakkoyun I've installed 0.21.1 and we're still seeing the same behaviour.
We see the same behavior. It seems like only one instance (we have 2 Prometheus instances scraping the same targets) is taken into account and the other one is completely ignored (so dedup(A, B) == A)
thanos:v0.21.1
Hello 👋 Looks like there was no activity on this issue for the last two months.
Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗
If there will be no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.
/notstale
still relevant
Still relevant
Adding myself here to watch this issue.
Adding myself here too.
We are seeing this issue as well. Dedup ignores a series which has no breaks in favour of one which does.
Seems like we have faced this too, on 0.29.0. Thanos Query has multiple sources and selects a Prometheus sidecar with data gaps in the recent data. It's very strange that this issue has persisted for so long.
I've also had this issue on the same version. I have been able to verify that all of the metrics are being received correctly, so the issue appears to be in how the data is queried.
Facing a similar issue with missing metrics in v0.32.3. The metrics are being remote-written from two Prometheus replica instances, each with a unique external replica label, into the Receiver. The Receiver runs multiple replicas for a high-availability setup. However, with deduplication enabled in Thanos Query, metrics are intermittently missing in Grafana.