opentelemetry-collector
Collector sending otelcol_exporter_queue_size metric on single exporter
Hi all, I have the following issue while working with the OTel Collector, and I can't seem to find anything in the docs or any useful config parameter to avoid it. Feel free to ask for more details if needed, as long as providing them is compatible with the reason some information is redacted below. Thanks in advance for your help.
Describe the bug
The otelcol_exporter_queue_size metric is being sent to Prometheus for only one exporter instead of for each one.
What did you expect to see?
I expect to see a queue metric for each configured exporter.
What did you see instead?
I see the aforementioned metric for only the first exporter initialised by the collector at startup. I checked on Grafana, and in the timeline, each time the container restarts, a different exporter's queue is exposed on that metric. There are no related errors in the logs, and each exporter is configured the same way, including the one initialised at the collector container's startup.
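For illustration, the expected scrape output would contain one otelcol_exporter_queue_size series per configured exporter, along these lines (the exporter names here are placeholders, and the remaining labels are elided):

otelcol_exporter_queue_size{exporter="exporter/first",...} 0
otelcol_exporter_queue_size{exporter="exporter/second",...} 0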
What version did you use?
ADOT v0.39.1
What config did you use?
Prometheus receiver config:
prometheus/
Service conf:
service:
  [...]
  metrics/
Prometheus exporter config:
exporters:
  prometheusremotewrite:
    endpoint:
Metrics and log verbosity are already set to the maximum level; other parts of the config are omitted on purpose.
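For reference, here is a minimal sketch of what such a setup usually looks like; the receiver name, job name, and scrape target below are assumptions, since the real values are redacted above:

receivers:
  prometheus/own_metrics:              # hypothetical name; the real one is redacted
    config:
      scrape_configs:
        - job_name: otel-collector     # assumed job name
          scrape_interval: 10s
          static_configs:
            - targets: ['0.0.0.0:8888']  # the collector's own telemetry endpoint

service:
  telemetry:
    logs:
      level: debug      # maximum log verbosity
    metrics:
      level: detailed   # maximum self-metrics verbosity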
Environment
Docker container of the OTel Collector, tagged latest
I'm experiencing exactly the same thing. Is this the intended behavior?
Can confirm I am seeing this as well. Here is our information:
Example Config
I spun up an example collector just to confirm:
receivers:
  filelog:
    include: [/var/log/busybox/simple.log]
    storage: file_storage/filelogreceiver

extensions:
  file_storage/filelogreceiver:
    directory: /tmp
  file_storage/otlpoutput:
    directory: /tmp

service:
  extensions: [file_storage/filelogreceiver, file_storage/otlpoutput]
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [splunk_hec/first, splunk_hec/second]
      processors: []

exporters:
  splunk_hec/first:
    endpoint: [redact]
    token: [redact]
    sending_queue:
      enabled: true
      queue_size: 10000
      storage: file_storage/otlpoutput
  splunk_hec/second:
    endpoint: [redact]
    token: [redact]
    sending_queue:
      enabled: true
      queue_size: 5000
      storage: file_storage/otlpoutput
When curling localhost:8888/metrics, only one exporter's queue metrics are returned:
# HELP otelcol_exporter_queue_capacity Fixed capacity of the retry queue (in batches)
# TYPE otelcol_exporter_queue_capacity gauge
otelcol_exporter_queue_capacity{exporter="splunk_hec/first",service_instance_id="707adf3b-d2bd-435f-be31-c30b53735c2e",service_name="otelcontribcol",service_version="0.103.0-dev"} 10000
# HELP otelcol_exporter_queue_size Current size of the retry queue (in batches)
# TYPE otelcol_exporter_queue_size gauge
otelcol_exporter_queue_size{exporter="splunk_hec/first",service_instance_id="707adf3b-d2bd-435f-be31-c30b53735c2e",service_name="otelcontribcol",service_version="0.103.0-dev"} 0
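If both exporters were reporting, one would instead expect output along these lines (labels abbreviated), with the splunk_hec/second series reflecting its smaller configured queue:

otelcol_exporter_queue_capacity{exporter="splunk_hec/first",...} 10000
otelcol_exporter_queue_capacity{exporter="splunk_hec/second",...} 5000
otelcol_exporter_queue_size{exporter="splunk_hec/first",...} 0
otelcol_exporter_queue_size{exporter="splunk_hec/second",...} 0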
I am trying this out with collector v0.103.
Hi @dmitryax,
Do you have any idea about this problem?
Thanks a lot!