VictoriaMetrics
Question: Small lag when querying metrics from Graphite vs VictoriaMetrics
We are using carbon-c-relay to send Graphite metrics to both an existing Graphite cluster and VictoriaMetrics, and we noticed a 1m delay between Graphite and VictoriaMetrics (we use carbonapi for querying from Grafana).
We tried setting -search.latencyOffset=0s, but didn't notice any difference.
Apart from this, everything seems to be working fine.
How do we completely disable the latencyOffset delay in VictoriaMetrics?
Hi @cjagus! -search.latencyOffset can't be disabled, and its minimum value is 1ms.
What happens if you manually push data via curl and immediately afterwards query the data back? Do you observe the same 1m lag?
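A round-trip like the one suggested above could look as follows. This is a sketch that assumes a single-node VictoriaMetrics instance listening on localhost:8428 (the address and the metric name are placeholders); the endpoints used are the documented /api/v1/import/prometheus, /internal/force_flush, and /api/v1/export handlers:

```shell
# Push one sample in Prometheus text exposition format; with no explicit
# timestamp, the sample is stamped with the current time on ingestion.
curl -d 'test_metric{label="foo"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'

# Force the in-memory buffers to be flushed to disk so the sample
# becomes searchable immediately (not recommended for regular use).
curl 'http://localhost:8428/internal/force_flush'

# Read the raw samples back for the pushed series.
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=test_metric'
```

If the sample shows up in the export but still lags in Grafana queries, the remaining delay likely comes from -search.latencyOffset rather than from ingestion buffering.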
VictoriaMetrics buffers incoming data in memory and flushes it to disk every second. This is needed for improving data ingestion performance. The cluster version of VictoriaMetrics has additional in-memory buffers for incoming data at vminsert and vmstorage, so the pushed data may remain invisible to queries for a few seconds. VictoriaMetrics provides the /internal/force_flush HTTP handler, which forcibly flushes the in-memory buffers to disk so that recently pushed data becomes available for reading. Read more about this handler here. It isn't recommended to use this handler on a regular basis, since it may hurt data ingestion performance.
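As a toy illustration of the buffering behavior described above (this is a simplified model, not the actual VictoriaMetrics implementation): writes land in an in-memory buffer and only become queryable after a flush, which normally happens about once per second but can be forced via /internal/force_flush:

```python
import time


class ToyStorage:
    """Toy model of buffered ingestion; NOT real VictoriaMetrics code."""

    def __init__(self):
        self.buffer = []  # recently ingested samples, not yet searchable
        self.disk = []    # flushed samples, visible to queries

    def ingest(self, metric, value, ts=None):
        # Samples first land in the in-memory buffer.
        self.buffer.append((metric, value, ts if ts is not None else time.time()))

    def flush(self):
        # Models the periodic flush (or a forced one via /internal/force_flush).
        self.disk.extend(self.buffer)
        self.buffer.clear()

    def query(self, metric):
        # Only flushed samples are visible to queries.
        return [s for s in self.disk if s[0] == metric]


store = ToyStorage()
store.ingest("test_metric", 42)
print(store.query("test_metric"))       # [] -- still buffered, not yet flushed
store.flush()                           # what the force-flush handler triggers
print(len(store.query("test_metric")))  # 1
```

This is why a write followed immediately by a read can miss the data unless a flush happens in between.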
I would also like to be able to disable -search.latencyOffset.
Use case: I have unit tests that write data with a timestamp of "now" and then immediately try to read that data back. I'm calling force_flush after I write, which has been sufficient to fix other unit tests that write data with past timestamps, but it isn't sufficient to expose data with current timestamps because of the latency offset. I have also tried setting -search.latencyOffset low enough not to matter, but it appears to have a floor of 1s, rather than the 1ms suggested by @hagen1778 above.
Aside from its interference with our unit tests, the latency offset doesn't seem to have any upside for our application, and it has some downsides. It appears to be primarily intended to work around inconsistent timing of remote writes from Prometheus, but my application doesn't ingest data that way. (We have a data collection process that writes to it using /api/v1/import.) Because our data comes from various IoT devices that may have inconsistent connectivity and data point timing, there's no value I could set for -search.latencyOffset that would guarantee that my latest data points all line up when I am aggregating across series -- I always have to accept that my latest value (or even my latest few values, when doing a range query) could be incomplete, and account for this in the application code. But the latency offset does hide some data from me that could be useful, for instance when I'm only looking at one series (which is fairly common) and I want to know its very latest value.
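The visibility rule being discussed can be sketched as follows (my reading of the behavior, not the actual implementation): a query evaluated at time `now` only sees samples with timestamps up to `now - latency_offset`, so with a 1s floor a just-written "now" sample stays hidden, while an offset of zero exposes it:

```python
def visible_samples(samples, now, latency_offset):
    """Return the (ts, value) samples a query at `now` would see,
    hiding anything newer than now - latency_offset.
    Toy model of -search.latencyOffset, not VictoriaMetrics code."""
    cutoff = now - latency_offset
    return [(ts, v) for ts, v in samples if ts <= cutoff]


now = 1_700_000_000.0
samples = [(now - 60, 1.0), (now - 2, 2.0), (now, 3.0)]  # last sample stamped "now"

# With the 1s floor described above, the freshest sample is hidden:
print(visible_samples(samples, now, latency_offset=1.0))  # omits (now, 3.0)

# With the latency offset set to zero, the "now" sample is visible:
print(visible_samples(samples, now, latency_offset=0.0))  # includes (now, 3.0)
```

This is exactly the unit-test failure mode above: force_flush makes the sample durable, but the offset still excludes it from query results until enough wall-clock time passes.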
@valyala do you see any implications of allowing 0 as the -search.latencyOffset value and preventing it from falling back to 1s?
I think it is OK to allow setting -search.latencyOffset to zero. This has been implemented in commit 10cf6c9781d18cd48e8ef991b3252c575db498a9. This commit will be included in the next release.
VictoriaMetrics allows setting a zero value for the -search.latencyOffset command-line flag starting from v1.88.0. Closing the issue as resolved.
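For anyone landing here later, disabling the offset on v1.88.0+ is just a flag on startup. A minimal launch sketch (the binary name and storage path are placeholders for your own setup):

```shell
# Start single-node VictoriaMetrics with the latency offset disabled,
# so queries can see samples stamped right up to "now".
./victoria-metrics-prod \
  -storageDataPath=/var/lib/victoria-metrics \
  -search.latencyOffset=0s
```

Note the trade-off discussed above: with a zero offset, the newest data points may still be incomplete when aggregating across series that arrive at different times.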