Rong Hu
We're also seeing this issue when using the prometheusreceiver with prometheusremotewriteexporter. It seems that the prometheusreceiver converts the scrape target label to a datapoint attribute instead of a resource attribute (which...
>It seems that the prometheusreceiver converts the scrape target label to a datapoint attribute instead of a resource attribute (which seems more appropriate imo)

Here's my test collector config: ``` exporters:...
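For reference, a minimal config of this shape (prometheus receiver scraping a target, forwarding via remote write) looks roughly like the sketch below; the job name, scrape target, and remote write endpoint are placeholders, not my actual values:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "my-app"              # placeholder job name
          scrape_interval: 15s
          static_configs:
            - targets: ["localhost:8888"] # placeholder scrape target

exporters:
  prometheusremotewrite:
    endpoint: "http://prometheus.example:9090/api/v1/write" # placeholder endpoint

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
```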
An opt-out for generating target_info also works for us, since the prometheusreceiver already generates the `up` metric, which should serve the same purpose as `target`. 🙇
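Concretely, something like the sketch below is what I have in mind, assuming the exporter gets a `target_info` toggle; the key name here is only a guess:

```yaml
exporters:
  prometheusremotewrite:
    endpoint: "http://prometheus.example:9090/api/v1/write"
    # assumed opt-out setting; the actual key (if any) may differ
    target_info:
      enabled: false
```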
We have a setup where the OTLP sender pod is responsible for always sending a given timeseries to the same collector pod in a k8s replicaset (using consistent hashing), therefore our...
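For comparison, similar per-series stickiness can be approximated inside the collector with the contrib loadbalancing exporter and its k8s resolver; this is only a sketch, not our actual setup (our hashing happens upstream of the collector, and metric-based routing support depends on the exporter version):

```yaml
exporters:
  loadbalancing:
    # route data points so the same series always lands on the same backend pod
    routing_key: "metric"   # assumed value; depends on the exporter version
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      k8s:
        service: otel-collector-headless.observability  # placeholder headless service
```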
This is kind of similar to https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/11870. There seem to be use cases where, for one reason or another, the sender's unique ID isn't necessary or desirable as a resource...
>If job and instance are not unique in your setup, is it also true that your application metrics have no overlapping attribute sets? You can avoid a single-writer violation by...
Oh, I meant that we're using consistent hashing upstream of the collector to maintain the uniqueness guarantee. The data points of a given time series (a unique combination of metric name and label values)...
> I don't think you've explained how this avoids the single-writer violation. It sounds like you might have invalid timeseries data, if multiple collectors receive overlapping data and write it...
application pods --> metric aggregator pods --> per-pod otel-collector is a very high-level description of our setup. Our metric aggregators receive statsd metrics from the applications. They aggregate metrics across...
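As a rough sketch, the per-pod collector in that chain is shaped like this (receiving the pre-aggregated metrics over OTLP); the exporter and endpoints here are illustrative rather than our exact config:

```yaml
# Illustrative per-pod collector: the aggregators push pre-aggregated
# metrics to it over OTLP, and it writes them out to a backend.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"

exporters:
  prometheusremotewrite:                   # illustrative backend exporter
    endpoint: "http://prometheus.example:9090/api/v1/write"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheusremotewrite]
```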
@badjware @dashpole can we close this issue now?