ADOT lambda layer with trace exporter adds ~130 ms to billed duration for each lambda invocation
I have an empty hello-world .NET 6 Lambda with the v0.68 ADOT Lambda layer. collector.yaml:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: localhost:4317

exporters:
  otlp:
    endpoint: ${NEW_RELIC_OPENTELEMETRY_ENDPOINT}
    headers:
      api-key: ${NEW_RELIC_LICENSE_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```
CloudWatch logs for the Lambda with the ADOT layer that doesn't send traces to the collector receiver (I added a custom sampler that drops all traces):

```
REPORT RequestId: fc897c5e-9caf-4409-89ea-9d1d5d2e370d Duration: 27.09 ms Billed Duration: 28 ms Memory Size: 2048 MB Max Memory Used: 123 MB XRAY TraceId: 1-63ebbac7-6927d234289ecdfb2df6d4a8 SegmentId: 311415df27555941 Sampled: true
```
CloudWatch logs for the same Lambda, which sends one trace per invocation to the collector receiver:

```
REPORT RequestId: 2b72d6e8-4589-4b89-9698-715ae8a9e9f7 Duration: 160.41 ms Billed Duration: 161 ms Memory Size: 2048 MB Max Memory Used: 126 MB XRAY TraceId: 1-63ebbb3a-3e2cfb8851cb3f95162d8a9d SegmentId: 5485d6a96f24cdbe Sampled: true
```
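The overhead follows directly from the two REPORT lines above (a trivial check, using the durations as reported):

```python
# Durations taken from the two CloudWatch REPORT lines above.
baseline_ms = 27.09      # sampler drops all traces, nothing exported
with_export_ms = 160.41  # one trace exported per invocation

overhead_ms = with_export_ms - baseline_ms
print(f"{overhead_ms:.2f} ms")  # -> 133.32 ms of added duration
```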
The billed duration difference is around 130 ms, and increasing the Lambda memory size doesn't reduce it. Traces do arrive at the New Relic endpoint, and according to New Relic their OTLP endpoint responds in well under 100 ms. It looks like batching for the collector exporters is disabled, so traces are sent to New Relic synchronously on every Lambda invocation: https://github.com/Aneurysm9/opentelemetry-lambda/blob/fd2c4c91fba2c1ad22a653e1dd8dd94ddcec023b/collector/internal/confmap/converter/disablequeuedretryconverter/converter.go#L75
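For comparison, a standalone (non-Lambda) collector would normally take the export off the hot path with the `batch` processor; a sketch of that, reusing the receiver/exporter names from the config above (the timeout value is an illustrative assumption, not from the layer):

```yaml
processors:
  batch:
    timeout: 5s   # assumed value for illustration

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```

In Lambda this alone doesn't help: the execution environment is frozen as soon as the handler returns, so a time-based batch may never get a chance to flush, which is presumably why the layer's converter disables queuing and sends synchronously instead.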
This is possibly related to https://github.com/open-telemetry/opentelemetry-lambda/issues/263
This issue was marked stale. It will be closed in 30 days without additional activity.
Please try the latest collector layer release with the decouple processor automatically added. This sounds like exactly the problem that processor was designed to resolve.
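If written out explicitly, the pipeline with the decouple processor would look roughly like the sketch below (recent layer releases insert it automatically when it is missing, so this is only illustrative). The decouple processor lets the handler return immediately and defers the export to the extension's lifecycle, so the OTLP round-trip no longer counts against billed duration:

```yaml
processors:
  decouple:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [decouple]
      exporters: [otlp]
```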