opentelemetry-collector-contrib
[receiver/purefa] Go metrics should be ignored
Component(s)
receiver/purefa
What happened?
Currently, the Go metrics are being included in the Pure FA receiver, but should be ignored.
Collector version
current main
Environment information
No response
OpenTelemetry Collector configuration
No response
Log output
No response
Additional context
cc @dgoscn
We could consider creating an 'include exporter Go performance metrics if available' configuration item that's off by default.
The problem is that these Go metrics rarely have context, such as which endpoint was called, since that's not a label on the exporter.
I can also see a few cases where these metrics will no longer be exposed in the future.
If it's going to take more than 15 minutes to implement the config item, it's probably not worth it.
go metrics rarely have context like what endpoint was called since that’s not a label on the exporter
We should be able to add more labels there.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers
. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.
Pinging code owners:
- receiver/purefa: @jpkrohling @dgoscn @chrroberts-pure
See Adding Labels via Comments if you do not have permissions to add labels yourself.
go metrics rarely have context like what endpoint was called since that’s not a label on the exporter
We should be able to add more labels there.
Of course.
@dgoscn, is this still on your radar?
Hey @jpkrohling, yes, of course. I will keep you updated on this issue for this week.
@dgoscn any progress on this?
Since Dynatrace is deprecating its own exporter in favour of the native otlphttp exporter, the logs are full of errors relating to dropped go_* metrics:
2023-11-13T07:42:57.003-0500 error exporterhelper/retry_sender.go:145 Exporting failed. The error is not retryable. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "otlphttp", "error": "Permanent error: OTLP partial success: The following issues were encountered while ingesting OTLP metrics:\nErrors:\nUnsupported metric: 'go_memstats_mallocs_total' - Reason: UNSUPPORTED_METRIC_TYPE_MONOTONIC_CUMULATIVE_SUM\nUnsupported metric: 'go_gc_duration_seconds' - Reason: UNSUPPORTED_METRIC_TYPE_SUMMARY\nWarnings:\nMetric key too short - normalized from: 'up' to: 'up_' - Reason: METRIC_KEY_TOO_SHORT\n (2 rejected)", "dropped_items": 100}
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
go.opentelemetry.io/collector/[email protected]/exporterhelper/retry_sender.go:145
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send
go.opentelemetry.io/collector/[email protected]/exporterhelper/metrics.go:176
go.opentelemetry.io/collector/exporter/exporterhelper.(*queueSender).start.func1
go.opentelemetry.io/collector/[email protected]/exporterhelper/queue_sender.go:126
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).Start.func1
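Until the receiver stops emitting these by default, a filter processor in the collector pipeline can exclude the go_* metrics before they reach the otlphttp exporter. A sketch, assuming the contrib filter processor's regexp exclude syntax (processor name is illustrative):

```yaml
processors:
  filter/drop-go-metrics:
    metrics:
      exclude:
        match_type: regexp
        metric_names:
          - go_.*

service:
  pipelines:
    metrics:
      receivers: [purefa]
      processors: [filter/drop-go-metrics]
      exporters: [otlphttp]
```

This keeps the Pure FA metrics intact while suppressing the runtime metrics that Dynatrace rejects.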
ping @dgoscn
I would like to take a look
Hi @claudiobastos, this may be closed.
These metrics are no longer exported with the latest version of the Pure Storage exporter:
https://github.com/PureStorage-OpenConnect/pure-fa-openmetrics-exporter/commit/95c532b9c7f335a1aa083fff75ee8b40adaf12d3