Ingester OOM
Describe the bug
I'm running Loki 2.4.1 with the SSD (simple scalable deployment) model. I've noticed that the ingesters are being hit by OOM events, which kill the nodes in the ring one after another until most of them are in an unhealthy state. After that, WAL replay starts, but the node gets stuck in that process and never returns to the ring.
The problem seems to start when the flush queue stays greater than 0 for a period of time; at the same time the logs record the message: failed to flush user
To Reproduce
Steps to reproduce the behavior:
- Started Loki (SHA or version): 2.4.1
Expected behavior
The expectation was that Loki would be able to process all the incoming data without OOM events that leave the WAL replay stuck and the nodes unable to return to the ring.
Environment:
- Infrastructure: bare-metal with virtual machines
- Deployment tool: Ansible
My environment is currently deployed with 6 nodes running target=write, each with 8 vCPU / 16 GB RAM.
Screenshots, Promtail config, or terminal output
Metrics (dashboard screenshots): Loki Resource Usage, Loki Chunks, Loki WAL, Go Runtime
Logs
Post "https://storage.googleapis.com/upload/storage/v1/b/loki-example-logs/o?alt=json&name=fake%2F488038805ca808ed%3A17e82e70caf%3A17e81e752a%3A38fd6636&prettyPrint=false&projection=full&uploadType=multipart": context deadline exceeded
level=error ts=2022-01-28T16:08:38.985959868Z caller=flush.go:221 org_id=fake msg="failed to flush user" err="Post \"https://storage.googleapis.com/upload/storage/v1/b/loki-example-logs/o?alt=json&name=fake%2F488038805ca808ed%3A17e82e70caf%3A17e81e752a%3A38fd6636&prettyPrint=false&projection=full&uploadType=multipart\": context deadline exceeded
Config
```yaml
#
# Ansible managed
#
target: write
auth_enabled: false
server:
  grpc_listen_port: 9096
  grpc_server_max_concurrent_streams: 2000
  grpc_server_max_recv_msg_size: 209715200
  grpc_server_max_send_msg_size: 209715200
  http_listen_port: 3100
  http_server_read_timeout: 300s
  http_server_write_timeout: 300s
  log_level: info
common:
  ring:
    heartbeat_period: 5s
    heartbeat_timeout: 1m
    instance_addr: 10.220.16.131
    instance_interface_names:
      - ens192
    kvstore:
      store: memberlist
    zone_awareness_enabled: false
  storage:
    gcs:
      bucket_name: loki-logs-example
      request_timeout: 300s
querier:
  max_concurrent: 4
  query_ingesters_within: 2h
  query_timeout: 60s
query_range:
  align_queries_with_step: true
  cache_results: false
  max_retries: 5
  parallelise_shardable_queries: true
  results_cache:
    cache:
      enable_fifocache: false
  split_queries_by_interval: 30m
query_scheduler:
  grpc_client_config:
    grpc_compression: snappy
    max_recv_msg_size: 209715200
    max_send_msg_size: 209715200
  max_outstanding_requests_per_tenant: 1000
ingester:
  autoforget_unhealthy: true
  chunk_block_size: 262144
  chunk_encoding: snappy
  chunk_idle_period: 15m
  chunk_retain_period: 30s
  chunk_target_size: 1572864
  concurrent_flushes: 64
  flush_check_period: 15s
  flush_op_timeout: 10m
  lifecycler:
    address: ip-address
    final_sleep: 0s
    ring:
      kvstore:
        store: memberlist
      replication_factor: 2
  max_chunk_age: 6h
  max_transfer_retries: 0
  wal:
    dir: /var/lib/loki/wal
    enabled: true
    flush_on_shutdown: true
    replay_memory_ceiling: 4GB
ingester_client:
  grpc_client_config:
    grpc_compression: snappy
    max_recv_msg_size: 209715200
    max_send_msg_size: 209715200
  pool_config:
    health_check_ingesters: false
  remote_timeout: 60s
distributor:
  ring:
    kvstore:
      store: memberlist
memberlist:
  abort_if_cluster_join_fails: false
  bind_port: 7946
  dead_node_reclaim_time: 30s
  gossip_to_dead_nodes_time: 15s
  join_members:
    - [members]
  max_join_retries: 1000
  rejoin_interval: 30s
schema_config:
  configs:
    - chunks:
        period: 24h
        prefix: loki_chunks_
      from: 2021-12-01
      index:
        period: 24h
        prefix: loki_index_
      object_store: gcs
      row_shards: 16
      schema: v11
      store: boltdb-shipper
storage_config:
  boltdb_shipper:
    active_index_directory: /var/lib/loki/data/boltdb-shipper-active
    cache_location: /var/lib/loki/data/boltdb-shipper-cache
    cache_ttl: 168h
    shared_store: gcs
  filesystem:
    directory: /var/lib/loki/data/chunks
  index_cache_validity: 5m
  index_queries_cache_config:
    enable_fifocache: false
  max_chunk_batch_size: 500
compactor:
  shared_store: filesystem
  working_directory: /var/lib/loki/data/boltdb-shipper-compactor
limits_config:
  ingestion_burst_size_mb: 200
  ingestion_rate_mb: 300
  ingestion_rate_strategy: local
  max_query_parallelism: 8
  max_query_series: 5000
  per_stream_rate_limit: 500MB
  per_stream_rate_limit_burst: 1000MB
  reject_old_samples: true
  reject_old_samples_max_age: 168h
chunk_store_config:
  chunk_cache_config:
    enable_fifocache: false
  max_look_back_period: 0s
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
ruler:
  enable_api: true
  ring:
    kvstore:
      store: memberlist
  rule_path: /var/lib/loki/data/rules-temp
  storage:
    local:
      directory: /var/lib/loki/data/rules
    type: local
frontend:
  address: ip-address
  compress_responses: true
  grpc_client_config:
    grpc_compression: snappy
    max_recv_msg_size: 209715200
    max_send_msg_size: 209715200
  instance_interface_names:
    - ens192
  log_queries_longer_than: 5s
  max_outstanding_per_tenant: 2048
```
Note: I also looked at issue #4713, but it doesn't seem to be the same context. I applied the changes reported there, including disabling the FIFO cache and changing index_cache_validity, but they were not effective.
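For reference, those two tweaks correspond to the following keys in the config above, shown here in isolation:

```yaml
storage_config:
  index_cache_validity: 5m
  index_queries_cache_config:
    enable_fifocache: false
```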
Not being able to flush chunks can definitely lead to memory issues and hitting limits.
Is there any more context around those failure logs? Could you check with debug logs?
I'm not necessarily expecting to see something, but am trying to trace down what timeout is expiring to throw that context deadline exceeded error.
Are your histograms for GCS showing 0.99? If so that looks to me like something has a 5s timeout.
Also the fact you don't see timeouts or errors in the status code makes me think this is a timeout Loki is putting on the request and cancelling it before it succeeds.
It's also surprising that uploading a 1MB file takes more than 5s, are you running in google cloud or are you pushing to google cloud from somewhere else?
I wonder if having 64 concurrent_flushes threads is making things worse too. I believe we discovered fairly recently that the GCS SDK uses http2 (sorry, working from memory, this may not be completely accurate), which means things get multiplexed over a single connection, so trying to send 64 1 MB chunks at the same time means most of them get backed up and hit whatever timeout is being hit. This may seem counterintuitive, but it may actually work better to lower this value; maybe try it at an extreme like 1 or 2 to see if it helps.
I'll look around and try to find whatever timeout is being hit here too.
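A minimal sketch of those two suggestions (debug logging plus an extreme low concurrent_flushes), assuming the single-file config layout used above:

```yaml
server:
  log_level: debug        # temporary, to capture more context around the "failed to flush user" errors
ingester:
  concurrent_flushes: 2   # try an extreme low value; 2.4.x defaults to 16
```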
Hello @slim-bean , thanks for your comment and suggestions.
For now I don't have more details about the event, but I will enable the debug mode and return with the information.
Yes, the histogram refers to the 0.99 percentile. This screenshot was taken at one of the moments with the issue, where we can see that the duration fluctuates. See the Grafana graphs (screenshots attached).
I am currently running Loki in my on-premises datacenter and sending the data to a bucket in GCP.
About the concurrent_flushes comment: this was a modification I made to try to mitigate/solve the problem. I'll change it to the values you recommended and get back to you with the details. PS: concurrent_flushes was increased from 16 to 32 and then to 64.
@slim-bean, @kavirajk Here are more details of what I've been investigating about the Loki ingester. I changed concurrent_flushes to 4 (the default value is 16), but I didn't notice any real improvement in the identified problem. All the evidence above relates to the moment we had the memory leak problem.
The 5XX errors seen in the chart below refer to the moment the ingester is killed by the OOM problem.
Go Runtime Memory
Profiling
go tool pprof http://localhost:3100/debug/pprof/heap
Profiling with WAL
Once the OOM occurs on the instance, WAL replay is triggered, and this is the behavior of pprof during its execution. It looks like the checkpoints present in the WAL are stuck and cannot be released, due to the errors reported in the message: failed to flush user
thanks @marcusteixeira. I have been observing similar issues with 2.4.x. Eventually it gets to a point where there aren't enough healthy instances in the ring and queries fail with
Query error
too many unhealthy instances in the ring
In the logs I see:
level=error ts=2021-12-20T19:46:11.981180089Z caller=flush.go:221 org_id=fake msg="failed to flush user" err="RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
level=error ts=2021-12-20T19:46:11.981210316Z caller=flush.go:221 org_id=fake msg="failed to flush user" err="RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
level=warn ts=2021-12-20T19:50:50.016717692Z caller=lifecycler.go:239 msg="found an existing instance(s) with a problem in the ring, this instance cannot become ready until this problem is resolved. The /ring http endpoint on the distributor (or single binary) provides visibility into the ring." ring=ingester err="instance 10.x.y.z:9095 past heartbeat timeout"
If it helps, I haven't seen this issue in 2.3.0, and this is the primary reason I am shying away from the 2.4.x upgrade.
For a better troubleshooting experience in this case, I am attaching the pprof file with a flamegraph example.
OOM Issue flamegraph: https://flamegraph.com/share/f1682ecd-8a77-11ec-9068-d2b21641d001
PS: Issue debug template https://github.com/grafana/loki/issues/5360
hey @marcusteixeira thanks so much for the detailed info here, sorry I didn't get back to this sooner. Really appreciate you trying out various things.
I think we need to figure out what context is getting canceled. I'm not sure there is much you can do to find this other than maybe enabling debug logs?
How repeatable is this for you?
Hi @marcusteixeira, I ran into the same or a very similar issue, where some of my ingesters were stuck trying to flush user data, preventing them from joining the ring.
I increased the flush_op_timeout ingester setting from the default 10s to 10m, and that seems to have fixed it. It took a while to flush the data, but eventually it completed without hitting those timeouts while flushing.
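For reference, a minimal sketch of that override in the same config style used in this thread:

```yaml
ingester:
  flush_op_timeout: 10m   # raised from the 10s default so large post-replay flushes can complete
```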
@slim-bean FYI
We were talking today about logging what the WAL is doing while it replays. I'm suspicious that when there is a really big WAL to replay it can take a long time. There are metrics about WAL activity, but none that report progress, I believe.
I was running the ingesters with debug logging as well which told me these messages
level=debug ts=2022-02-24T23:00:04.590640241Z caller=replay_controller.go:85 msg="replay flusher pre-flush" bytes="13 GB"
level=debug ts=2022-02-24T23:03:09.412919372Z caller=replay_controller.go:85 msg="replay flusher pre-flush" bytes="15 GB"
Those numbers gave me the hint that I needed to increase the timeout even more. I had already tried setting it to 1m, which was not enough, but 10m was OK in my case.
@slim-bean FYI
> I think we need to figure out what context is getting canceled. I'm not sure there is much you can do to find this other than maybe enabling debug logs?
I activated it for a moment, but I didn't find any evidence to help me. I will enable it again and follow along.
> How repeatable is this for you?
I see this behavior whenever a certain throughput is reached, for example >50 MB/s across the distributors/ingesters. After that, memory consumption starts to increase until the node shuts down.
@rsteneteg Thanks for the flush_op_timeout parameter hint. In my environment I had already noticed this detail, and it is currently set to 10m. Still, the problem remains :/
While debugging the issue and looking at the tracing and profiling information, I saw the possibility that the issue occurs when the snappy chunk_encoding is used.
I modified the encoding to gzip and that seems to have worked as a workaround. I noticed that the problem started to occur at ingestion rates greater than 50 MB/s, something like 90-100 MB/s.
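For clarity, the workaround is a one-line change in the ingester block (a sketch only; the maintainers recommend snappy, see the reply below):

```yaml
ingester:
  chunk_encoding: gzip   # better compression ratio than snappy, but more CPU-expensive at query time
```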
We use snappy everywhere; gzip compresses more than snappy but is much more CPU-expensive at query time, so we use snappy for the improved performance.
How many write nodes are you running @marcusteixeira, and is that 50 MB/s per node or over multiple nodes?
@slim-bean
The 50 MB/s value is the sum over all distributors/ingesters. This is spread across 6 write nodes, averaging about 15 MB/s per distributor/ingester.
sum(rate(loki_distributor_bytes_received_total[1m]))
Are you using a replication factor > 1?
If so, this would multiply the amount of data each ingester processes by the replication_factor.
My sort of gut feeling here is that you are sending just enough data to each ingester that it can't upload it to the remote storage fast enough which results in increasing memory usage. I'm not sure what the bottleneck is though, is your upload connection to the cloud storage sufficiently large?
If changing to gzip allowed it to work for you, I feel like you must be right on the edge of the amount of bandwidth that can be uploaded; gzip having a better compression ratio is helping out here.
I hit this issue in Loki 2.3.0 today. One of the ingesters was terminated due to OOM and it then fails to come to a ready state with
level=error ts=2022-03-17T17:30:18.120715181Z caller=flush.go:220 org_id=fake msg="failed to flush user" err="RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
level=error ts=2022-03-17T17:30:28.122254393Z caller=flush.go:220 org_id=fake msg="failed to flush user" err="RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
level=error ts=2022-03-17T17:30:37.494175278Z caller=redis_cache.go:72 msg="failed to get from redis" name=chunksredis err="context deadline exceeded"
level=error ts=2022-03-17T17:30:38.130330306Z caller=redis_cache.go:72 msg="failed to get from redis" name=chunksredis err="context deadline exceeded"
Redis seems fine. The only way to get this ingester to a ready state is to delete the PVC and restart the pod.
@slim-bean it seems this specific ingester was using 10x more memory than the rest of them. We have replication > 1 and are using snappy. Also, I am not sure about the relation between deleting the PVC and the ingester becoming ready. Does that mean I am losing logs?
I've run into a similar issue running a stress test on Loki. After digging around in the flusher code, I have some thoughts:
- It seems that flush_op_timeout limits the time it takes to flush all flushable chunks within a stream (code ref). This means that if you hit a flush_op_timeout for a high-throughput stream, you're fairly likely to never be able to flush it, since the number of chunks is only going to grow. This seems like it would cause unbounded memory growth and eventually an OOM kill. This interacts with the WAL as well: if a stream grows large enough to be "unflushable" within the flush_op_timeout, it likely won't be flushable during WAL recovery either.
- I was hesitant to set a timeout higher than the flush loop ticker period (-ingester.flush-check-period). I am worried that a longer flush_op_timeout can cause a single stream to be flushed multiple times concurrently, implying that we would be holding in memory both a large stream and the wire-serialized chunks prepared for flushing. This was additionally worrisome because if enough of these piled up we would start running into storage throttling, exacerbating the problem (see the config sketch after this list).
- Looking at the code, it would be fairly simple to limit the number of chunks per flush to some configurable threshold. This, I believe, would make it less likely that a stream would become "unflushable". If the Loki maintainers would entertain such a change, I'd be willing to cut a patch.
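For reference, the two ingester settings discussed in the second point, shown side by side with values already used earlier in this thread (not a recommendation):

```yaml
ingester:
  flush_check_period: 15s   # flush loop ticker period (-ingester.flush-check-period)
  flush_op_timeout: 10m     # per-flush-operation deadline; per the concern above, values larger than
                            # flush_check_period may let the same stream be flushed concurrently
```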
Hi everyone, I am still not able to solve the issue.
I changed flush_op_timeout to 10m, changed snappy to gzip, and also tried removing the PVC and restarting the ingester.
My config
```yaml
auth_enabled: false
server:
  log_level: debug
  http_listen_port: 3100
  grpc_server_max_recv_msg_size: 46137344
  grpc_server_max_send_msg_size: 46137344
distributor:
  ring:
    kvstore:
      store: memberlist
memberlist:
  join_members:
    - {{ include "loki.fullname" . }}-memberlist
ingester:
  lifecycler:
    ring:
      kvstore:
        store: memberlist
      replication_factor: 3
  chunk_idle_period: 45m
  flush_op_timeout: 10m
  max_chunk_age: 1h
  concurrent_flushes: 32
  flush_check_period: 15s
  chunk_block_size: 262144
  chunk_target_size: 1572864
  chunk_encoding: gzip
  chunk_retain_period: 1m
  wal:
    enabled: true
    dir: /var/loki-wal/wal
    replay_memory_ceiling: 4GB
    checkpoint_duration: 3m
limits_config:
  ingestion_rate_mb: 30
  ingestion_burst_size_mb: 40
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  max_cache_freshness_per_query: 10m
  max_query_parallelism: 88
  unordered_writes: true
  retention_period: 744h
  retention_stream:
    - selector: '{cluster="prod-monitoring"}'
      priority: 1
      period: 168h
    - selector: '{cluster="release-app-cluster"}'
      priority: 2
      period: 168h
    - selector: '{cluster="external-vms"}'
      priority: 3
      period: 168h
  max_global_streams_per_user: 10000
  max_entries_limit_per_query: 8000
schema_config:
  configs:
    - from: 2020-09-07
      store: boltdb-shipper
      object_store: gcs
      schema: v11
      index:
        prefix: loki_index_
        period: 24h
storage_config:
  index_queries_cache_config:
    enable_fifocache: false
    redis:
      endpoint: ***
      expiration: 1h
      password: ***
  boltdb_shipper:
    shared_store: gcs
    active_index_directory: /var/loki/index
    cache_location: /var/loki/cache
    cache_ttl: 24h
  gcs:
    bucket_name: monitoring-test-bucket-1
chunk_store_config:
  max_look_back_period: 0s
  chunk_cache_config:
    redis:
      endpoint: ***
      expiration: 1h
      password: ***
  write_dedupe_cache_config:
    redis:
      endpoint: ***
      expiration: 1h
      password: ***
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
query_range:
  align_queries_with_step: true
  max_retries: 5
  split_queries_by_interval: 1h
  parallelise_shardable_queries: true
  cache_results: true
  results_cache:
    cache:
      redis:
        endpoint: ***
        expiration: 1h
        password: ***
frontend_worker:
  frontend_address: {{ include "loki.queryFrontendFullname" . }}:9095
  parallelism: 100
  grpc_client_config:
    max_send_msg_size: 46137344
    max_recv_msg_size: 46137344
frontend:
  log_queries_longer_than: 15s
  compress_responses: true
  max_outstanding_per_tenant: 2048
  tail_proxy_url: http://{{ include "loki.querierFullname" . }}:3100
ingester_client:
  grpc_client_config:
    max_send_msg_size: 46137344
    max_recv_msg_size: 46137344
querier:
  query_ingesters_within: 1h
compactor:
  working_directory: /var/loki/compactor
  shared_store: gcs
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
ruler:
  storage:
    type: local
    local:
      directory: /etc/loki/rules
  ring:
    kvstore:
      store: memberlist
  rule_path: /tmp/loki/scratch
  alertmanager_url: https://alertmanager.xx
  external_url: https://alertmanager.xx
```
Error Logs of Ingester
I also found a few log lines where we can see that the push API is taking more than 10m.

I think almost every instance in this ticket relates to having insufficient network bandwidth for flushing chunks to storage, or sending too much data into too few ingesters.
@afayngelerindbx has done some really helpful analysis; the code as written today doesn't seem to recover very gracefully when a high-volume stream gets backed up far enough. Appreciate this investigation @afayngelerindbx!
But my best guess here as to how folks are getting into this situation is too much data into too few ingesters, or insufficient network bandwidth between your ingesters and your object storage. We tend to favor scaling more horizontally vs vertically with our clusters. If I were to give a rough guideline, I would say 10MB/s per ingester is fairly typical for us, measured with the metric container_network_receive_bytes_total.
Curious what folks here who are seeing issues are sending in MB/s into their ingesters? Also be aware of what the bandwidth on the outgoing side looks like with container_network_transmit_bytes_total, especially if you are trying to run your compute at any distance from your storage (i.e. if you are running your compute on prem and trying to use a cloud object store, you're probably gonna struggle).
@afayngelerindbx
> Looking at the code, it would be fairly simple to limit the number of chunks per flush to some configurable threshold. This, I believe, would make it less likely that a stream would become "unflushable". If the Loki maintainers would entertain such a change, I'd be willing to cut a patch.
This sounds like a very reasonable change, we'd love to see a PR to take a closer look!
@slim-bean here is what I see per second in the last minute:
container_network_receive_bytes_total: of the 6 ingester pods, a couple of them are always high, fluctuating between 30-300 MB/s; the rest are less than 8 MB/s.
container_network_transmit_bytes_total: all 6 ingester pods are below 10 MB/s.
hi @slim-bean
I am running the whole Loki system on GCP. I have been running this Loki setup in production for 3-4 months, but since last week this OOM issue suddenly started happening. Before the OOM issue I was running 3 ingesters with 30 GB of memory; now I am running 5 ingesters with 40 GB of memory.
Screenshots attached: container_network_receive_bytes_total, container_network_transmit_bytes_total, Lines Received, Bytes Received.
X-posting here in case it helps folks:
I have gotten extra evidence that the issue (at least my issue) is sequential flushes. I added
"-store.chunks-cache.cache-stubs=true",
"-store.chunks-cache.cache.enable-fifocache=true",
to the ingester config. This enables a 1GB in-memory cache that allows deduping chunk storage writes. I noticed that the cache hit rate goes up dramatically before the OOM
(left is number of cache hits per second, right is used RAM in bytes)
Zooming in on the RHS graph, the cache hit rate does hit zero periodically, which seems to indicate that these cache hits are from individual flush loop iterations
OK, I have an update here: @DylanGuedes and I took a look at this and he is going to work on a change.
Summary: the current code doesn't take credit for chunks which flushed successfully during a flush operation when that operation fails for any reason, which means all chunks in a stream will be flushed again when the flush op is retried.
Because a flush operation writes to the chunk cache this would also explain the sudden massive increase of writes to the cache.
Increasing the flush_op_timeout can reduce the likelihood of this happening, but it's more of a crude workaround than a fix.
What we are currently thinking is to change the flushChunks method to essentially operate one chunk at a time (serialize to wire chunk, write to store, mark as flushed). This way we make sure to take credit for any chunks that are successfully flushed and not flush them again. It should also reduce memory pressure a little, as it will limit the memory used for serialized chunks to one at a time.
Also thanks so much @afayngelerindbx for digging on this, your sleuthing has been a big help, and all the graphs and due diligence by everyone else has been super helpful too. Sorry progress on this has been slow but really appreciate all the input by everyone. I feel like we are heading in the right direction now.
Hi guys, this issue is solved for me by changing the disk type used by the wal-data PVC.
Previously I had the pd-standard disk type in GCP and saw that I/O utilization was reaching 100%.
After updating the disk to pd-ssd it started working fine.
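For anyone on GKE who wants to try the same change, a minimal sketch of an SSD-backed StorageClass for the WAL volume; it assumes the GCE Persistent Disk CSI driver is installed, and the class name is hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: loki-wal-ssd              # hypothetical name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd                    # instead of the default pd-standard
volumeBindingMode: WaitForFirstConsumer
```

The wal-data PVC would then reference this class via storageClassName; since that field is immutable on an existing PVC, the claim has to be recreated.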

I don't know how I missed this thread, but here is my discussion on the Loki Slack regarding a similar problem. It includes certain changes we made which helped our case.
https://grafana.slack.com/archives/CEPJRLQNL/p1655495351572109
Hi, any updates?
> Hi, any updates?
Could you share your configuration file here and your environment details? I initially reported this issue, but I am no longer affected by OOM cases.