opentelemetry-collector
[service] add logger provider configuration support
This allows us to use the otel-go/config package to support configuring external destinations for logs. I'm putting this in draft to gather community feedback on whether this is a desirable feature for the collector.
I used the following configuration with this PR to send data to an OTLP backend:
```yaml
telemetry:
  logs:
    processors:
      - batch:
          exporter:
            otlp:
              protocol: http/protobuf
              endpoint: https://api.honeycomb.io:443
              headers:
                "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
```
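For context, the shape of this `telemetry::logs` section mirrors the OpenTelemetry declarative configuration schema that otel-go/config consumes. A standalone schema file for the same logger setup would look roughly like this (a sketch; `file_format` and `logger_provider` come from the declarative configuration schema, and the endpoint/header values are the ones from the example above):

```yaml
file_format: "0.1"
logger_provider:
  processors:
    - batch:
        exporter:
          otlp:
            protocol: http/protobuf
            endpoint: https://api.honeycomb.io:443
            headers:
              "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
```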
This allowed me to see logs in my backend.
While the ability to directly configure the destination of collector logs in the telemetry section could be useful for small, simple configurations in environments where you cannot simply pick up the logs from the system or from files, I feel it would be more intuitive to specify a dedicated pipeline for exporting the data.
Something like this:
```yaml
exporter:
  otlp/honeycomb:
    protocol: http/protobuf
    endpoint: https://api.honeycomb.io:443
    headers:
      "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
pipeline:
  telemetry:
    metrics:
      verbosity: detailed
      pipelines: [metrics/collector]
    logs:
      verbosity: detailed
      pipelines: [logs/collector]
  logs/collector:
    receivers: [telemetry/logs]
    processors: [batch]
    exporters: [otlp/honeycomb]
  metrics/collector:
    receivers: [telemetry/metrics]
    processors: [batch]
    exporters: [otlp/honeycomb]
  traces/collector:
    ...
```
Compared to:
```yaml
telemetry:
  logs:
    processors:
      - batch:
          exporter:
            otlp:
              protocol: http/protobuf
              endpoint: https://api.honeycomb.io:443
              headers:
                "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
  metrics:
    processors:
      - batch:
          exporter:
            otlp:
              protocol: http/protobuf
              endpoint: https://api.honeycomb.io:443
              headers:
                "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
  traces:
    ...
```
wdyt?
@frzifus there was a decision made some time ago (I'm struggling to find a link to this info) to not pipe the collector's own telemetry through itself, to avoid the scenario of the collector's own telemetry being interrupted by an overloaded collector. This is why I'm proposing supporting the logger provider directly.
Are you thinking of the scenario where you'd like to be able to apply some transformations to the logs?
This PR was marked stale due to lack of activity. It will be closed in 14 days.
Codecov Report
Attention: Patch coverage is 91.37931% with 5 lines in your changes missing coverage. Please review.
Project coverage is 91.43%. Comparing base (cb19e1d) to head (809edbe). Report is 1 commit behind head on main.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| service/service.go | 33.33% | 3 Missing and 1 partial :warning: |
| service/telemetry/factory_impl.go | 50.00% | 1 Missing :warning: |
```diff
@@            Coverage Diff             @@
##             main   #10544      +/-   ##
==========================================
- Coverage   91.43%   91.43%   -0.01%
==========================================
  Files         435      435
  Lines       23712    23752      +40
==========================================
+ Hits        21682    21718      +36
- Misses       1653     1656       +3
- Partials      377      378       +1
```
@open-telemetry/collector-approvers this PR should be ready for review
This feature is not yet added to the docs, it seems. Is there already a docs issue created for it?
@mowies I don't think there is one, feel free to create one :)
created: https://github.com/open-telemetry/opentelemetry.io/issues/5680