Promtail error : has timestamp too new
Here is my configuration file config.yml:

```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: 'http://xx.xx.xx.xxx:3100/loki/api/v1/push'
    tls_config:
      insecure_skip_verify: true

scrape_configs:
  - job_name: linux_csv1
    pipeline_stages:
      - regex:
          expression: '^(?P<Time>\d{2}-\d{2}-\d{4}.\d{2}-\d{2}-\d{2}.\d{3}),'
      - timestamp:
          source: Time
          format: '02-01-2006.15-04-05.000'
    static_configs:
      - targets:
          - localhost
        labels:
          job: linux_csv1
          path: /DATA/**/*.csv
```
In Grafana, I only see one CSV file fetched from Promtail in Loki, but I have two files. Additionally, I am encountering this error in my Promtail logs:

```
Apr 19 12:13:02 gd-sv promtail-linux-amd64[179272]: level=error ts=2024-04-19T07:13:02.668612574Z caller=client.go:360 component=client host=10.xx.xx.xxx:3100 msg="final error sending batch" status=400 error="server returned HTTP status 400 Bad Request (400): entry for stream '{filename=\"/DATA/history.csv\", job=\"linux_csv1\"}' has timestamp too new: 2024-04-19 12:13:01.399 +0000 UTC"
Apr 19 15:02:02 gd-sv promtail-linux-amd64[179272]: level=error ts=2024-04-19T10:02:02.192947247Z caller=client.go:360 component=client host=10.xx.xx.xxx:3100 msg="final error sending batch" status=400 error="server returned HTTP status 400 Bad Request (400): entry for stream '{filename=\"/DATA/history1.csv\", job=\"linux_csv1\"}' has timestamp too new: 2024-04-19 15:02:00.997 +0000 UTC"
```
I am confused about how to fix this. Can anyone please help me resolve the issue?
Questions have a better chance of being answered if you ask them on the community forums.
@anubhabmondalDirac there's a configuration issue somewhere. Promtail is gathering and sending logs with timestamps that look like they're roughly 3h in the future compared to the clock time on the machine running Promtail (and likely on the machine running Loki as well).
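A common cause of future-dated entries is that the CSV timestamps are written in local time but parsed as UTC by the `timestamp` stage. As a sketch of one possible fix (the `Asia/Kolkata` timezone here is an assumption; substitute whatever timezone your CSV files are actually written in), the stage accepts a `location` field that tells it how to interpret the parsed time:

```yaml
pipeline_stages:
  - regex:
      expression: '^(?P<Time>\d{2}-\d{2}-\d{4}.\d{2}-\d{2}-\d{2}.\d{3}),'
  - timestamp:
      source: Time
      format: '02-01-2006.15-04-05.000'
      # Interpret the parsed time in this IANA timezone instead of UTC.
      # 'Asia/Kolkata' is only an example; use your server's actual timezone.
      location: 'Asia/Kolkata'
```

With the correct `location` set, the entries should land at or before the current time and the 400 errors should stop without touching any server-side limits.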
The default grace period for accepting samples from the future is 10 minutes; in Grafana Cloud I believe we allow up to 3h max. Search for `creation_grace_period` in our config docs, for example in the limits config.
You should first check why your CSV files contain timestamps in the future, but you could also increase that grace period value.
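As a minimal sketch of the workaround (assuming Loki is run from a single config file), the grace period lives under `limits_config` on the Loki side; the `6h` value below is an arbitrary example, not a recommendation:

```yaml
# Loki server configuration (not Promtail)
limits_config:
  # Maximum allowed distance into the future for an entry's timestamp.
  # Default is 10m. Raising it is a workaround; the real fix is to stop
  # producing future-dated timestamps at the source.
  creation_grace_period: 6h
```

Note this only papers over the symptom: entries will be accepted, but their timestamps will still be wrong relative to wall-clock time.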