
Stricter memory limiter

cergxx opened this issue 2 years ago · 3 comments

Is your feature request related to a problem? Please describe. In my collector I have a linear pipeline: receiver -> memory_limiter -> batch -> exporter, and the exporter has its sending queue enabled. The queue size is set to a large number so that data is never dropped. Instead, I rely on the memory limiter to refuse data when there is congestion on the exporter side (the queue grows, and memory usage grows along with it). The existing memory limiter fails to do this reliably: sometimes it does not stop accepting new records fast enough, because it first waits for the GC to free some memory, and the collector gets OOM-killed.
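For reference, the setup looks roughly like the following. Component names and all numbers are illustrative, not my exact configuration:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 1500
    spike_limit_mib: 300
  batch:

exporters:
  otlp:
    endpoint: backend:4317
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 100000   # very large, so data is never dropped at the queue

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp]
```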

Describe the solution you'd like I have 2 proposals:

  1. Add a "strict" mode to the memory limiter, so that it starts refusing incoming data as soon as memory usage goes above the limit, rather than only after a GC run has been triggered (a rough sketch of what this could mean follows the list).
  2. Add some sort of limiter that inspects the exporter queue size and refuses data in the receiver when the queue grows beyond a limit. Or, at a minimum, expose the exporter queue size so that I can inspect it in my custom receiver/processor and do the refusing myself.
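To make proposal 1 concrete, here is a minimal sketch of what a strict check could look like: refuse a batch on the spot whenever current heap usage is already over a hard limit, without relying on a GC cycle to bring usage back down first. The function and variable names here are made up for illustration, and a real implementation would probably sample memory on an interval (as the existing check_interval does) rather than calling runtime.ReadMemStats for every batch, since that call briefly stops the world.

```go
package strictlimiter

import (
	"errors"
	"runtime"
)

// errMemoryLimitExceeded would be propagated back to the receiver so it can
// push back on clients instead of buffering more data in memory.
var errMemoryLimitExceeded = errors.New("memory usage over hard limit, refusing data")

// checkStrict refuses immediately once heap usage crosses hardLimitBytes.
// hardLimitBytes is a hypothetical setting, analogous to limit_mib.
func checkStrict(hardLimitBytes uint64) error {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	if ms.HeapAlloc >= hardLimitBytes {
		return errMemoryLimitExceeded
	}
	return nil
}
```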

Describe alternatives you've considered Proposal 1 – fork the existing memory limiter. Proposal 2 – I could achieve the same behaviour by implementing the following approach (a code sketch follows the list):

  • Add one more processor to the pipeline, right before the exporter. This processor would increment a counter tracking the current queue size.
  • Decrement this counter in the exporter when it starts processing a batch.
  • Inspect this counter in the receiver (or in yet another processor, placed before the batch processor) and return an error there if the queue is full. But this solution looks too complex.
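A rough sketch of what that counter-based workaround could look like is below. It is not a real collector processor (no factories or config plumbing), just two wrappers implementing the collector's consumer.Traces interface and sharing an atomic counter. The names gate, tail and maxInFlight are invented for illustration, and the matching decrement would still have to be wired into the exporter itself, which is part of what makes this feel too complex.

```go
package queuegate

import (
	"context"
	"errors"
	"sync/atomic"

	"go.opentelemetry.io/collector/consumer"
	"go.opentelemetry.io/collector/pdata/ptrace"
)

var errQueueFull = errors.New("exporter queue over limit, refusing data")

// gate sits at the front of the pipeline (or inside the receiver) and refuses
// new data once the shared in-flight counter exceeds maxInFlight.
type gate struct {
	next        consumer.Traces
	inFlight    *atomic.Int64
	maxInFlight int64
}

func (g *gate) Capabilities() consumer.Capabilities {
	return consumer.Capabilities{MutatesData: false}
}

func (g *gate) ConsumeTraces(ctx context.Context, td ptrace.Traces) error {
	if g.inFlight.Load() >= g.maxInFlight {
		return errQueueFull // bubbles back to the receiver, which rejects the request
	}
	return g.next.ConsumeTraces(ctx, td)
}

// tail sits right before the exporter and increments the shared counter for
// every batch handed to the exporter queue. The decrement would have to
// happen inside the exporter when it starts processing the batch.
type tail struct {
	next     consumer.Traces
	inFlight *atomic.Int64
}

func (t *tail) Capabilities() consumer.Capabilities {
	return consumer.Capabilities{MutatesData: false}
}

func (t *tail) ConsumeTraces(ctx context.Context, td ptrace.Traces) error {
	t.inFlight.Add(1)
	return t.next.ConsumeTraces(ctx, td)
}
```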

cergxx · Oct 17 '23 16:10