
Optimize the memory allocations of the cumulative histogram

MrAlias opened this issue 2 years ago • 7 comments

The current implementation of the cumulative histogram copies both its bounds and its bucket counts every time Aggregation is called. These copies mean that new slices are allocated for each call to Aggregation, one for the bounds and one per attribute set's counts. This incurs a large memory overhead, and alternative approaches should be explored.
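
For illustration, the allocation pattern looks roughly like the sketch below. The types here are hypothetical, simplified stand-ins for the SDK's aggregator and metricdata types, not the actual code:

```go
package sketch

// Hypothetical, simplified stand-ins for the SDK's histogram aggregator
// and metricdata types; the real types carry more fields.
type bucketCounts struct {
	counts []uint64
}

type histogram struct {
	bounds []float64
	values []bucketCounts // one entry per attribute set
}

type dataPoint struct {
	Bounds       []float64
	BucketCounts []uint64
}

// Aggregation mirrors the copy-per-call pattern described above: every
// call allocates a fresh bounds slice and a fresh counts slice for each
// attribute set before copying the data into them.
func (h *histogram) Aggregation() []dataPoint {
	out := make([]dataPoint, 0, len(h.values))
	for _, v := range h.values {
		bounds := make([]float64, len(h.bounds)) // one allocation per data point
		copy(bounds, h.bounds)
		counts := make([]uint64, len(v.counts)) // and another per data point
		copy(counts, v.counts)
		out = append(out, dataPoint{Bounds: bounds, BucketCounts: counts})
	}
	return out
}
```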

MrAlias avatar Jul 29 '22 22:07 MrAlias

Possible solutions to explore:

  1. Add methods to the HistogramDataPoint that provide read-only access to the reference types (Min, Max, bounds, and counts).
    • Alternatively, add a new HistogramDataPointRO that has methods Bounds, Counts, Min, Max, and HistogramDataPoint (which returns a copy of the HistogramDataPoint that created the HistogramDataPointRO). This type could have a creation function NewHistogramDataPointRO(HistogramDataPoint) and would wrap that value in a read-only form (a rough sketch of this wrapper follows this list).
  2. Unify the counts and bounds into a "bucket" type that the HistogramDataPoint would then hold a slice of. This would still require an allocation of that slice and all the values would be copied; however, it would be one allocation instead of two.
  3. Use a sync.Pool. This might require a rework of the exporter interface, as the exporters will receive the values and there needs to be a way for them to be released (maybe via runtime.SetFinalizer?).
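
For concreteness, here is one way the read-only wrapper from option 1 could look. The type and constructor names follow the description above, but the field layout and the element-wise accessors are assumptions, not the actual metricdata API:

```go
package sketch

// HistogramDataPoint is a hypothetical, simplified version of the real
// metricdata type; only the reference-typed fields matter here.
type HistogramDataPoint struct {
	Bounds       []float64
	BucketCounts []uint64
	Min, Max     float64
}

// HistogramDataPointRO wraps a HistogramDataPoint and never hands out the
// underlying slices, so callers cannot mutate or retain them.
type HistogramDataPointRO struct {
	dp HistogramDataPoint
}

func NewHistogramDataPointRO(dp HistogramDataPoint) HistogramDataPointRO {
	return HistogramDataPointRO{dp: dp}
}

// Element-wise accessors stand in for "read-only access to the reference
// types"; an iterator-style API would work as well.
func (r HistogramDataPointRO) NumBuckets() int     { return len(r.dp.BucketCounts) }
func (r HistogramDataPointRO) Bound(i int) float64 { return r.dp.Bounds[i] }
func (r HistogramDataPointRO) Count(i int) uint64  { return r.dp.BucketCounts[i] }
func (r HistogramDataPointRO) Min() float64        { return r.dp.Min }
func (r HistogramDataPointRO) Max() float64        { return r.dp.Max }

// HistogramDataPoint returns a deep copy for callers that really do need
// to retain the data beyond the read-only view.
func (r HistogramDataPointRO) HistogramDataPoint() HistogramDataPoint {
	out := r.dp
	out.Bounds = append([]float64(nil), r.dp.Bounds...)
	out.BucketCounts = append([]uint64(nil), r.dp.BucketCounts...)
	return out
}
```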

MrAlias avatar Aug 01 '22 17:08 MrAlias

For the record, the approach I've taken in the Lightstep metrics SDK would require an exporter interface change to allow re-use of memory from one export to the next. For push-based exporters, this means effectively no allocations will be needed after the full set of attributes has been seen and after the first collection cycle.

With the ability to re-use memory, push-based exporters automatically benefit. Pull-based exporters could use a sync.Pool to support re-use, in which case they would be able to Put items back into the pool after the export finishes.

Building on that re-use: for cumulative exporters, a Copy() method avoids allocation by re-using the destination's memory; for delta exporters, a Move() method avoids allocation by swapping the source's and destination's underlying state.
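
Roughly, that Copy/Move distinction could look like the following sketch (hypothetical types and method names, not the Lightstep SDK's actual code):

```go
package sketch

// histogramState holds the reference-typed parts of a histogram data point.
type histogramState struct {
	bounds []float64
	counts []uint64
}

// Copy writes src into dst, reusing dst's slices when their capacity is
// large enough. A cumulative exporter keeps its destination alive across
// collection cycles, so after warm-up no new slices are needed.
func (dst *histogramState) Copy(src *histogramState) {
	dst.bounds = append(dst.bounds[:0], src.bounds...)
	dst.counts = append(dst.counts[:0], src.counts...)
}

// Move swaps the underlying state, which is enough for delta exporters:
// the destination takes the accumulated values, and the source keeps the
// destination's old slices, with counts zeroed, ready for the next interval.
func (dst *histogramState) Move(src *histogramState) {
	dst.bounds, src.bounds = src.bounds, dst.bounds
	dst.counts, src.counts = src.counts, dst.counts
	for i := range src.counts {
		src.counts[i] = 0
	}
}
```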

jmacd avatar Aug 04 '22 17:08 jmacd

Given that our aggregation implementation is guarded by an internal package, I think we could update it to use a sync.Pool via our readers without affecting our API.

The only thing that would need to be done is to update the exporter interface to inform implementations of Export that the ResourceMetrics passed to it will be reused after the call returns, and that, if they want to keep any of that data after returning, they need to make a copy.
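
Stated as documentation on the interface, that contract could read roughly like this (the wording and the exact signature are illustrative, not final API text):

```go
package sketch

import (
	"context"

	"go.opentelemetry.io/otel/sdk/metric/metricdata"
)

// Exporter sketches how the re-use contract could be stated on the interface.
type Exporter interface {
	// Export serializes and transmits metric data to a receiver.
	//
	// The ResourceMetrics passed to Export is re-used by the SDK after the
	// call returns. Implementations that need any of its contents after
	// returning (e.g. for asynchronous delivery) must make their own copy
	// before returning.
	Export(ctx context.Context, rm *metricdata.ResourceMetrics) error

	// ForceFlush and Shutdown omitted for brevity.
}
```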

MrAlias avatar Dec 22 '22 16:12 MrAlias

  • We can move this out of beta if we can prove our data structures can accommodate the performance improvements.
  • We need to concretely define what we mean by "performance improvements".

MrAlias avatar Jan 19 '23 18:01 MrAlias

The goal of reducing allocations in the histogram datatype, and ultimately across all different data outputs, can be solved by reusing allocated memory, but only if we have some way to signal it is safe to reuse some block of memory.

Approach 1: sync.Pool with an explicit release.

In this approach, I created several sync.Pools to cover the allocated portions of the Histogram, e.g. DataPoints, Bounds, and BucketCounts. I also created a Release function that iterates through all of a ResourceMetrics and Put()s every allocated slice back into the appropriate pool.
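
A condensed sketch of that setup (the pool granularity, types, and names here are illustrative, and only the Histogram portion is shown):

```go
package sketch

import "sync"

// Hypothetical, simplified stand-ins for the metricdata types.
type HistogramDataPoint struct {
	Bounds       []float64
	BucketCounts []uint64
}

type Histogram struct {
	DataPoints []HistogramDataPoint
}

// One pool per allocated component of the Histogram.
var (
	dataPointsPool = sync.Pool{New: func() any { return []HistogramDataPoint(nil) }}
	boundsPool     = sync.Pool{New: func() any { return []float64(nil) }}
	countsPool     = sync.Pool{New: func() any { return []uint64(nil) }}
)

// Release walks a Histogram and returns every pooled slice (length reset,
// capacity kept) so the next collection cycle can reuse the memory instead
// of allocating. A full version would walk the whole ResourceMetrics.
func Release(h *Histogram) {
	for i := range h.DataPoints {
		boundsPool.Put(h.DataPoints[i].Bounds[:0])
		countsPool.Put(h.DataPoints[i].BucketCounts[:0])
	}
	dataPointsPool.Put(h.DataPoints[:0])
	h.DataPoints = nil
}
```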

I also explored changing Histogram and HistogramDataPoint to be references and pooling them directly; both variants produced similar results.

The result of these tests was that I couldn't reduce the number of allocations in the benchmark, but I could reduce the number of bytes allocated. My working explanation is that because the GC would run intermittently within the benchmark, anything sitting in the pool when that happened would be released and had to be reallocated. This was confirmed when I put counters in Get(), Put(), and the pools' New() functions and found that the number of Gets equaled the number of Puts (in the 1 million range), but the number of News was not 1 but a few hundred (300-700 range).
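
The instrumentation was along these lines (a sketch; the real benchmark counters may have differed):

```go
package sketch

import (
	"sync"
	"sync/atomic"
)

var gets, puts, news atomic.Int64

var countsPool = sync.Pool{
	New: func() any {
		news.Add(1) // a new slice had to be created because the pool was empty
		return []uint64(nil)
	},
}

func getCounts() []uint64 {
	gets.Add(1)
	return countsPool.Get().([]uint64)
}

func putCounts(s []uint64) {
	puts.Add(1)
	countsPool.Put(s[:0])
}

// If gets == puts but news is much greater than 1, the GC emptied the pool
// between a Put and the next Get, forcing reallocation.
```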

Things that warrant exploration from this:

  • Can this be done without an explicit release, using the SetFinalizer approach?
  • Does this effect still happen if there is less memory pressure? Maybe explore the benchmark with a memory ballast.

Approach 2: CollectInto().

In this approach, I added a method to the ManualReader that is shaped more like a Write()-style interface, where the caller passes in a *ResourceMetrics to be filled. This had knock-on effects on the producer interface and the internal.Aggregation interface. In the WIP PR that I released, I have them co-existing, but because Collect() can be defined in terms of CollectInto(), I would plan for the producer and Aggregation interfaces to be updated to have the same form as CollectInto.
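
Roughly, the added method and its relationship to the existing Collect would look like this (a sketch; the WIP PR may differ in detail):

```go
package sketch

import (
	"context"

	"go.opentelemetry.io/otel/sdk/metric/metricdata"
)

// ManualReader is a stand-in for the SDK's reader; internals elided.
type ManualReader struct{}

// CollectInto fills the caller-provided ResourceMetrics, reusing any
// slices it already holds, in the style of a Write()-like API where the
// caller owns the destination.
func (r *ManualReader) CollectInto(ctx context.Context, rm *metricdata.ResourceMetrics) error {
	// Gather from the pipeline into rm, reusing rm's existing capacity.
	return nil
}

// Collect can then be defined in terms of CollectInto: it simply allocates
// a fresh ResourceMetrics for callers that do not reuse one.
func (r *ManualReader) Collect(ctx context.Context) (metricdata.ResourceMetrics, error) {
	var rm metricdata.ResourceMetrics
	err := r.CollectInto(ctx, &rm)
	return rm, err
}
```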

The results of this approach were that I could reduce the number of allocations in the histogram benchmark to zero.

Limitations:

  • This adds an API to the Reader that isn't defined by the Spec.
  • Currently, it will only reuse an Aggregator if it is the same type. Because the pipeline stores aggregations in a map and retrieves them via range, if the retrieval order isn't consistent between ranges (not just between runs, as map iteration order is explicitly not guaranteed), the Aggregators will not be reused. This can be mitigated by using a set of Pools for reuse.

MadVikingGod avatar Feb 09 '23 15:02 MadVikingGod

After the SIG meeting, the decision was to move Collect to the CollectInto signature (func(ctx, *ResourceMetrics) error).
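
With that signature, a caller can keep a single ResourceMetrics alive and have the reader fill it on every cycle. Illustrative usage, assuming the reshaped reader API:

```go
package main

import (
	"context"
	"log"
	"time"

	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/metric/metricdata"
)

func main() {
	reader := sdkmetric.NewManualReader()
	// A real program would create meters from this provider.
	_ = sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader))

	// One ResourceMetrics is reused for every collection; after the first
	// cycle its slices are large enough and little further allocation is
	// needed.
	var rm metricdata.ResourceMetrics
	for range time.Tick(10 * time.Second) {
		if err := reader.Collect(context.Background(), &rm); err != nil {
			log.Print(err)
			continue
		}
		// Export or inspect rm here.
	}
}
```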

Here is how I would break down the different lines of work:

  • Reshape Collect. This will break the following in this package:
    • ManualReader
    • PeriodicReader
    • test readers
    • Prometheus Exporter
    • OC Bridge (potentially?)
  • Update the pipeline's produce to have the produceInto signature. This will mean changes in:
    • producer interface
    • producerHolder
    • ManualReader
    • PeriodicReader
  • Update the internal Aggregator interface to use an -Into signature. This can be done piecemeal by adding an Into function on each of *Sum, *LastValue, and *Histogram, and then a final patch to remove the old Aggregation() and rename the Into methods to Aggregation() (a rough sketch of the -Into shape follows this list).
    • Update FilterAggregator
    • Update Histogram Aggregators
    • Update Sum Aggregators
    • Update LastValue Aggregators
    • Reshape the Aggregator interface to use the Into signature.
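
For the Aggregator step, a sketch of what an -Into shape could look like (the method name and the exact destination type are assumptions, not the final internal API):

```go
package sketch

import "go.opentelemetry.io/otel/sdk/metric/metricdata"

// aggregator sketches the -Into shape for the internal interface: instead
// of returning a freshly allocated Aggregation, the aggregator writes into
// a destination owned by the caller, which can be re-used across cycles.
type aggregator interface {
	AggregationInto(dest *metricdata.Aggregation)
}

// histogram shows the transitional state: the old method is kept and
// defined in terms of the new one, then removed in a final patch with the
// -Into method renamed to Aggregation().
type histogram struct{}

func (h *histogram) AggregationInto(dest *metricdata.Aggregation) {
	// Re-use any slices already held by dest where possible.
}

func (h *histogram) Aggregation() metricdata.Aggregation {
	var out metricdata.Aggregation
	h.AggregationInto(&out)
	return out
}
```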

MadVikingGod avatar Feb 10 '23 14:02 MadVikingGod

With #3732 merged, this should not block beta any longer.

MadVikingGod avatar Feb 21 '23 19:02 MadVikingGod

Hi @MadVikingGod, can I pick this up?

PalanQu avatar Aug 17 '23 02:08 PalanQu

This work has been done. Currently, we reuse any metric component when collecting.

MadVikingGod avatar Jan 29 '24 15:01 MadVikingGod