stackdriver_exporter
How to work around duplicate metrics from Stackdriver
Hello there,
I'm giving it a shot here.
Is there anything we can do in the exporter other than creating a whitelist/blacklist for these troublesome metrics?
For example, this one comes from Metric: logging.googleapis.com/log_entry_count, Resource Type: spanner_instance, and we see it happen only in US-based projects. In the GCP Metrics Explorer you can also see that it is not really unique.
* [from Gatherer #2] collected metric "stackdriver_spanner_instance_logging_googleapis_com_byte_count" { label:<name:"instance_config" value:"" > label:<name:"instance_id" value:"spanner-001" > label:<name:"location" value:"us-east1" > label:<name:"log" value:"cloudaudit.googleapis.com/data_access" > label:<name:"project_id" value:"my-project-id" > label:<name:"severity" value:"INFO" > label:<name:"unit" value:"By" > gauge:<value:32656 > timestamp_ms:1594973553075 } was collected before with the same name and label values
* [from Gatherer #2] collected metric "stackdriver_spanner_instance_logging_googleapis_com_byte_count" { label:<name:"instance_config" value:"" > label:<name:"instance_id" value:"spanner-002" > label:<name:"location" value:"us-east1" > label:<name:"log" value:"cloudaudit.googleapis.com/data_access" > label:<name:"project_id" value:"my-project-id" > label:<name:"severity" value:"INFO" > label:<name:"unit" value:"By" > gauge:<value:19902 > timestamp_ms:1594973553075 } was collected before with the same name and label values
* [from Gatherer #2] collected metric "stackdriver_spanner_instance_logging_googleapis_com_byte_count" { label:<name:"instance_config" value:"" > label:<name:"instance_id" value:"spanner-003" > label:<name:"location" value:"us-east1" > label:<name:"log" value:"cloudaudit.googleapis.com/data_access" > label:<name:"project_id" value:"my-project-id" > label:<name:"severity" value:"INFO" > label:<name:"unit" value:"By" > gauge:<value:0 > timestamp_ms:1594973433075 } was collected before with the same name and label values
* [from Gatherer #2] collected metric "stackdriver_spanner_instance_logging_googleapis_com_byte_count" { label:<name:"instance_config" value:"" > label:<name:"instance_id" value:"spanner-004" > label:<name:"location" value:"us-east1" > label:<name:"log" value:"cloudaudit.googleapis.com/data_access" > label:<name:"project_id" value:"my-project-id" > label:<name:"severity" value:"INFO" > label:<name:"unit" value:"By" > gauge:<value:28341 > timestamp_ms:1594973553075 } was collected before with the same name and label values
Cordially, // Nakarin
FYI, I have talked to Google Support and created a support case about these problematic metrics. They admitted that it is a bug in Google Spanner, and the Spanner team is working on a fix now.
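For anyone wondering where the error text itself comes from: the "[from Gatherer #N] collected metric ... was collected before with the same name and label values" message is produced by the Prometheus Go client's consistency check when a single scrape contains two samples with an identical metric name and label set. Below is a minimal sketch that reproduces the message; it is not the exporter's actual code, and the metric name and labels are just stand-ins borrowed from the logs above.

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// dupCollector emits the same gauge twice with an identical label set,
// mimicking what happens when Stackdriver returns overlapping points for
// one time series within a single scrape.
type dupCollector struct{}

func (dupCollector) Describe(chan<- *prometheus.Desc) {}

func (dupCollector) Collect(ch chan<- prometheus.Metric) {
	desc := prometheus.NewDesc(
		"stackdriver_spanner_instance_logging_googleapis_com_byte_count",
		"duplicate-series example", nil,
		prometheus.Labels{"instance_id": "spanner-001", "severity": "INFO"},
	)
	// Same name, same labels, two samples: only the values differ.
	ch <- prometheus.MustNewConstMetric(desc, prometheus.GaugeValue, 32656)
	ch <- prometheus.MustNewConstMetric(desc, prometheus.GaugeValue, 19902)
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(dupCollector{})

	// Gathering through prometheus.Gatherers is what adds the
	// "[from Gatherer #N]" prefix seen in the exporter's error output.
	_, err := prometheus.Gatherers{reg}.Gather()
	fmt.Println(err)
	// ... was collected before with the same name and label values
}
```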
Adding on here because this is related; I'm seeing the same issues with the stackdriver_https_lb_rule_loadbalancing_googleapis_com_https_backend_request_count metric.
This also looks to be related to #36
Update: If anyone else ends up here, this issue (at the current time) does not seem to exist with stackdriver_https_lb_rule_loadbalancing_googleapis_com_https_backend_request_count.
My issue was an accidental multi-inclusion of the metric: it matched more than one of the prefixes I had specified.
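To illustrate that pitfall for anyone else (the prefix list below is a made-up example, not a config from this thread): with overlapping prefixes such as
--monitoring.metrics-type-prefixes=loadbalancing.googleapis.com/https,loadbalancing.googleapis.com/https/request_count
every loadbalancing.googleapis.com/https/request_count series matches both prefixes, gets collected twice in the same scrape, and trips the duplicate-collection error. Listing only non-overlapping prefixes, e.g.
--monitoring.metrics-type-prefixes=loadbalancing.googleapis.com/https
avoids the double inclusion.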
Apparently this can be fixed by specifying --no-collector.fill-missing-labels. Not sure about any other implications.
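If it helps anyone evaluating that suggestion: the flag is simply appended to whatever invocation you already run, for example (other flags such as the project ID omitted, and the prefix reused from elsewhere in this thread as a stand-in):
stackdriver_exporter \
  --monitoring.metrics-type-prefixes=loadbalancing.googleapis.com/https \
  --no-collector.fill-missing-labels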
Hi everyone... is there a workaround for this?
We just started having this problem in the last 24h, and always with the loadbalancing.googleapis.com/https/request_count prefix. Others seem fine. @bschaeffer did you see something similar recently?
@mrsimo Yes. With load balancing alone and also within the last 24h
> Apparently this can be fixed by specifying --no-collector.fill-missing-labels. Not sure about any other implications.

This did not work for us.
We have the problem as well, affected metrics in our case are:
- stackdriver_https_lb_rule_loadbalancing_googleapis_com_https_request_bytes_count
- stackdriver_https_lb_rule_loadbalancing_googleapis_com_https_request_count
- stackdriver_https_lb_rule_loadbalancing_googleapis_com_https_response_bytes_count
- stackdriver_https_lb_rule_loadbalancing_googleapis_com_https_total_latencies
The issue started yesterday 17:30 UTC for us.
I can confirm; we had the same issues starting yesterday and also for some hours today.
Has anyone found a solution to this problem? I am getting this error while fetching BigQuery metrics.
We're seeing the same issue with the metric stackdriver_cloud_run_revision_monitoring_googleapis_com_uptime_check_content_mismatch when trying to ingest metrics from Cloud Monitoring synthetic monitors. Our config:
--monitoring.metrics-ingest-delay
--monitoring.metrics-interval=5m
--monitoring.metrics-offset=0s
--monitoring.metrics-type-prefixes=monitoring.googleapis.com/uptime_check,cloudfunctions.googleapis.com/function
Seems like ours is an issue with any metric with the stackdriver_cloud_run_revision_monitoring_googleapis_com_uptime_check_ prefix. One thing I note is that the revision_name label in the erroring metrics is blank.
* [from Gatherer #2] collected metric "stackdriver_cloud_run_revision_monitoring_googleapis_com_uptime_check_content_mismatch" { label:<name:"check_id" value:"<redacted>" > label:<name:"checked_resource_id" value:"<redacted>" > label:<name:"checker_location" value:"asia-southeast1" > label:<name:"configuration_name" value:"" > label:<name:"location" value:"asia-southeast1" > label:<name:"project_id" value:"<redacted>" > label:<name:"revision_name" value:"" > label:<name:"service_name" value:"<redacted>" > label:<name:"unit" value:"" > gauge:<value:0 > timestamp_ms:1709665370000 } was collected before with the same name and label values
* [from Gatherer #2] collected metric "stackdriver_cloud_run_revision_monitoring_googleapis_com_uptime_check_content_mismatch" { label:<name:"check_id" value:"<redacted>" > label:<name:"checked_resource_id" value:"<redacted>" > label:<name:"checker_location" value:"asia-southeast1" > label:<name:"configuration_name" value:"" > label:<name:"location" value:"asia-southeast1" > label:<name:"project_id" value:"<redacted>" > label:<name:"revision_name" value:"" > label:<name:"service_name" value:"<redacted>" > label:<name:"unit" value:"" > gauge:<value:0 > timestamp_ms:1709665310000 } was collected before with the same name and label values
* [from Gatherer #2] collected metric "stackdriver_cloud_run_revision_monitoring_googleapis_com_uptime_check_content_mismatch" { label:<name:"check_id" value:"<redacted>" > label:<name:"checked_resource_id" value:"<redacted>" > label:<name:"checker_location" value:"asia-southeast1" > label:<name:"configuration_name" value:"" > label:<name:"location" value:"asia-southeast1" > label:<name:"project_id" value:"<redacted>" > label:<name:"revision_name" value:"" > label:<name:"service_name" value:"<redacted>" > label:<name:"unit" value:"" > gauge:<value:0 > timestamp_ms:1709665200000 } was collected before with the same name and label values
* [from Gatherer #2] collected metric "stackdriver_cloud_run_revision_monitoring_googleapis_com_uptime_check_content_mismatch" { label:<name:"check_id" value:"<redacted>" > label:<name:"checked_resource_id" value:"<redacted>" > label:<name:"checker_location" value:"asia-southeast1" > label:<name:"configuration_name" value:"" > label:<name:"location" value:"asia-southeast1" > label:<name:"project_id" value:"<redacted>" > label:<name:"revision_name" value:"" > label:<name:"service_name" value:"<redacted>" > label:<name:"unit" value:"" > gauge:<value:0 > timestamp_ms:1709665130000 } was collected before with the same name and label values