Vishwesh Bankwar
@lachmatt I missed something earlier: I think this would cause an issue due to differences in `Uri.UnescapeDataString` behavior between .NET versions. https://learn.microsoft.com/en-us/dotnet/api/system.uri.unescapedatastring?view=net-8.0#remarks Consider a service using an old SDK injecting baggage and the receiving side...
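The concern above is that baggage only survives propagation when the sender's escaping and the receiver's unescaping follow matching rules; if `Uri.UnescapeDataString` behaves differently across SDK versions, the round trip can corrupt values. A minimal sketch of that round-trip invariant, written in Java for illustration (the class and method names here are hypothetical, and `URLEncoder`/`URLDecoder` stand in for the .NET escaping APIs):

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class BaggageRoundTrip {
    // Returns true only when escaping and unescaping use matching rules;
    // a version mismatch between sender and receiver breaks this invariant.
    static boolean roundTrips(String value) {
        String escaped = URLEncoder.encode(value, StandardCharsets.UTF_8);
        String decoded = URLDecoder.decode(escaped, StandardCharsets.UTF_8);
        return decoded.equals(value);
    }

    public static void main(String[] args) {
        // Characters a baggage header value must escape: space, comma, semicolon, equals, percent.
        System.out.println(roundTrips("a b,c;d=e%f")); // prints "true"
    }
}
```

When decode rules drift from encode rules (for example, one side stops unescaping certain characters), the check above fails and downstream services see a mangled baggage value.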
> @open-telemetry/dotnet-maintainers This bug causes baggage propagation to not work properly with other SDKs. @lachmatt / @pellared - Sorry for the delay on this. I do have some questions on...
@lachmatt - please add a changelog entry as well.
Closing. See https://github.com/open-telemetry/opentelemetry-dotnet/pull/5489#pullrequestreview-2004593753 for details
@cijothomas - That's a good point. I think that can be managed by upgrading the dependencies in the instrumentation libraries in a timely manner.
> I pushed a new commit to simplify the code, and resolved another race condition caused by the unsafe exchange of `this.collectionTcs`. The test `PrometheusExporterMiddlewareIntegration_MultipleRequestsOfDifferentFormatsInParallel` now passes even with 100,000,000...
> > @hez2010 - Could you elaborate on what the race condition was?
>
> We are neither using `lock (obj)` nor inserting a memory barrier around `this.collectionTcs = ...`, so...
> > This issue would surface even without any of these changes? Meaning, if we just run the test with a single value passed [here](https://github.com/open-telemetry/opentelemetry-dotnet/blob/3ee60675d2e01d94c056514bd728877aacaaf564/test/OpenTelemetry.Exporter.Prometheus.AspNetCore.Tests/PrometheusExporterMiddlewareTests.cs#L271)
>
> Yes. My newly added...
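The race described above comes from publishing `this.collectionTcs` with a plain field write: without a lock or atomic exchange, two concurrent scrape requests can each install their own completion source and miss the other's. A minimal sketch of the atomic-exchange pattern, written in Java for illustration (the class name, the `collect` method, and `CompletableFuture` as a stand-in for `TaskCompletionSource` are all assumptions, not the PR's actual code):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.atomic.AtomicReference;

public class CollectionCoordinator {
    // Analogue of the exporter's `this.collectionTcs`: the in-flight collection, if any.
    private final AtomicReference<CompletableFuture<String>> inFlight = new AtomicReference<>();

    // Each caller either starts a collection or awaits the one already running.
    public CompletableFuture<String> collect() {
        CompletableFuture<String> fresh = new CompletableFuture<>();
        // compareAndExchange publishes `fresh` atomically; a plain field write
        // (the race described above) lets two threads both believe they won.
        CompletableFuture<String> existing = inFlight.compareAndExchange(null, fresh);
        if (existing != null) {
            return existing; // another request is already collecting; piggyback on it
        }
        fresh.complete("metrics snapshot"); // simulated collection work
        inFlight.set(null);                 // allow the next collection cycle to start
        return fresh;
    }

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        System.out.println(new CollectionCoordinator().collect().get()); // prints "metrics snapshot"
    }
}
```

The key point is that the winner of the exchange performs the collection exactly once, while every losing caller awaits the same future instead of racing to install its own.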
> I can separate this PR into two parts as you mentioned, but the tests are going to fail without the fix for the concurrency issue (I think you may have...
@hez2010 - Checking to see if you are planning to make the changes based on the discussion above?