Setting configuration in code doesn't seem to be honored
**Describe the bug**
The documentation suggests that configuration set in code should take precedence, but this doesn't seem to be true in practice. When setting configuration in code I can see two distinct configuration lines in the tracer debug log, with the second containing the field values set in code:
2022-04-15 17:10:46.867 +00:00 [INF] DATADOG TRACER CONFIGURATION - {"date":"2022-04-15T17:10:46.8553995+00:00","os_name":"Linux","os_version":"Unix 4.14.268.205","version":"2.6.0.0","platform":"x64","lang":".NET","lang_version":"6.0.2","env":null,"enabled":true,"service":"Eight.AppApi.RestApi","agent_url":"http://172.31.77.80:8126","debug":true,"health_checks_enabled":false,"analytics_enabled":false,"sample_rate":null,"sampling_rules":null,"tags":[],"log_injection_enabled":false,"runtime_metrics_enabled":false,"disabled_integrations":[],"routetemplate_resourcenames_enabled":true,"routetemplate_expansion_enabled":false,"partialflush_enabled":false,"partialflush_minspans":500,"runtime_id":"267121e2-914b-42db-96da-286a7f3c6679","agent_reachable":true,"agent_error":"","appsec_enabled":false,"appsec_trace_rate_limit":100,"appsec_rules_file_path":"(default)","appsec_libddwaf_version":"(none)","direct_logs_submission_enabled_integrations":[],"direct_logs_submission_enabled":false,"direct_logs_submission_error":"","dd_trace_methods":""} { MachineName: ".", Process: "[17 Eight.AppApi.RestApi]", AppDomain: "[1 Eight.AppApi.RestApi]", AssemblyLoadContext: "\"\" Datadog.Trace.ClrProfiler.Managed.Loader.ManagedProfilerAssemblyLoadContext #1", TracerVersion: "2.6.0.0" }
2022-04-15 17:10:47.063 +00:00 [INF] DATADOG TRACER CONFIGURATION - {"date":"2022-04-15T17:10:47.0189685+00:00","os_name":"Linux","os_version":"Unix 4.14.268.205","version":"2.4.4.0","platform":"x64","lang":".NET","lang_version":"6.0.2","env":"staging","enabled":true,"service":"app-api","agent_url":"http://172.31.77.80:8126/","debug":true,"health_checks_enabled":false,"analytics_enabled":false,"sample_rate":null,"sampling_rules":null,"tags":[],"log_injection_enabled":false,"runtime_metrics_enabled":false,"disabled_integrations":[],"routetemplate_resourcenames_enabled":true,"partialflush_enabled":false,"partialflush_minspans":500,"runtime_id":"267121e2-914b-42db-96da-286a7f3c6679","agent_reachable":true,"agent_error":"","appsec_enabled":false,"appsec_trace_rate_limit":100,"appsec_rules_file_path":"(default)","appsec_libddwaf_version":"(none)","direct_logs_submission_enabled_integrations":[],"direct_logs_submission_enabled":false,"direct_logs_submission_error":""} { MachineName: ".", Process: "[17 Eight.AppApi.RestApi]", AppDomain: "[1 Eight.AppApi.RestApi]", AssemblyLoadContext: "\"Default\" System.Runtime.Loader.DefaultAssemblyLoadContext #2", TracerVersion: "2.4.4.0" }
However, when I log in and view my APM dashboard, it's clear that only the first configuration takes effect (e.g. the data has no env or version tag). The only way I can reliably configure these tags is via the environment variables.
**To Reproduce**
- Docker image with the tracer installed as part of the image, minimal env vars to start the tracer:

```bash
export CORECLR_ENABLE_PROFILING=1
export CORECLR_PROFILER={846F5F1C-F9AE-4B07-969E-05C26BC060D8}
export CORECLR_PROFILER_PATH=/opt/datadog/Datadog.Trace.ClrProfiler.Native.so
export DD_DOTNET_TRACER_HOME=/opt/datadog
```
- .NET application with code to configure settings:
```csharp
using Datadog.Trace;
using Datadog.Trace.Configuration;

// Start from the settings resolved from env vars / config files,
// then override in code and apply them to the global tracer.
var settings = TracerSettings.FromDefaultSources();
settings.Environment = environment;
settings.ServiceName = serviceName;
settings.ServiceVersion = version;
settings.Exporter.AgentUri = new Uri($"http://{agentHost}:8126/");
Tracer.Configure(settings);
```
**Expected behavior**
Setting configuration in code should override env vars.
**Screenshots**
N/A
**Runtime environment (please complete the following information):**
- Instrumentation mode: installation in the container image:

```dockerfile
RUN mkdir -p /opt/datadog \
    && mkdir -p /var/log/datadog \
    && TRACER_VERSION=$(curl -s https://api.github.com/repos/DataDog/dd-trace-dotnet/releases/latest | grep tag_name | cut -d '"' -f 4 | cut -c2-) \
    && curl -LO https://github.com/DataDog/dd-trace-dotnet/releases/download/v${TRACER_VERSION}/datadog-dotnet-apm_${TRACER_VERSION}_amd64.deb \
    && dpkg -i ./datadog-dotnet-apm_${TRACER_VERSION}_amd64.deb \
    && rm ./datadog-dotnet-apm_${TRACER_VERSION}_amd64.deb
```
- Tracer version: 2.4.4.0
- OS: Ubuntu docker container running on an AL2 EC2 instance
- CLR: 2.4.4.0 with .NET 6.x
I believe I may be seeing something similar with setting the service name via code, except in my case I am setting it only via code, not via environment variables. I am still getting the auto-determined service names showing up in the APM view (https://app.datadoghq.com/apm/home), where my app has three services displayed: servicename, servicename-http-client, and servicename-postgres.
- Tracer version: 2.6.0
- OS: Alpine docker container running on an AL2 EC2 instance
- CLR: 2.6.0 with .NET 6.x
@kmcc049 I believe that's expected. Even though I'm setting all values with environment variables, I see multiple services in APM: one for the service itself, one for HTTP client calls, and one for each dependency, e.g. a database or an AWS service. I don't think this is related to my issue, but I find it interesting that you are able to set these values in code.
Looking back over my initial report, I noticed that the first log line reports tracer version 2.6.0, but the second shows 2.4.4. The script a teammate wrote to pull the tracer into the docker container isn't version-locked, while the dependency configuration in the .NET project is, so the two have drifted apart.
After updating my application to use 2.6.0 in code, things seem to be working as expected. I'll leave this open for someone to comment on why this fails silently instead of generating an error, but I think my immediate issue is resolved.
@turacma hmm, you're right, I guess my mental model of a service doesn't match. Thanks for the reply. I poked around a bit and found that using `SetServiceNameMappings` on a settings object lets me force them to live as one service.
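For reference, this is roughly what I ended up with (a sketch only; `servicename` stands in for my actual service name, and the mapping keys match the suffixes I saw in APM):

```csharp
using System.Collections.Generic;
using Datadog.Trace;
using Datadog.Trace.Configuration;

var settings = TracerSettings.FromDefaultSources();

// Map the per-integration service names back onto the main service.
settings.SetServiceNameMappings(new Dictionary<string, string>
{
    ["http-client"] = "servicename",
    ["postgres"] = "servicename",
});

Tracer.Configure(settings);
```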
Hi @turacma, thanks for raising the issue, and glad you managed to resolve it! Unfortunately there are some complexities around this, but the tl;dr is that this should work.
> When setting configuration in code I can see two distinct configuration lines in the tracer debug log, with the second containing the field values set in code
This is the expected behaviour. We need to hook into the application startup process very early, before any user code has run. At that point we create the singleton Tracer instance using the current environment variables, datadog.json, etc., and perform all the required initialization logic. We also write the DATADOG TRACER CONFIGURATION log message that you're seeing.
When you configure the tracer in code and call Tracer.Configure(), this disposes those instances and rebuilds everything with the new settings. That's why you see the second DATADOG TRACER CONFIGURATION log message. This part, at least, is expected.
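As a sanity check, you can also read the values back after calling Configure(). This is a minimal sketch, assuming the 2.x public API, where Tracer.Instance.Settings exposes an immutable snapshot of the active settings (`settings` here is the TracerSettings instance from your snippet):

```csharp
using System;
using Datadog.Trace;

Tracer.Configure(settings);

// The active tracer exposes an immutable snapshot of its settings;
// after Configure() these should reflect the values set in code.
Console.WriteLine(Tracer.Instance.Settings.Environment);    // e.g. "staging"
Console.WriteLine(Tracer.Instance.Settings.ServiceVersion);
```

(As you'll see below, with mismatched tracer versions this check only tells you about the manual tracer's instance, not the automatic one.)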
> Looking back over my initial report, I noticed that the first log line reports tracer version 2.6.0, but the second shows 2.4.4. The script a teammate wrote to pull the tracer into the docker container isn't version-locked, while the dependency configuration in the .NET project is, so the two have drifted apart.
Well spotted! This is a problem we've been wrestling with for some time under the name "mismatched tracer versions". It occurs, as you've pointed out, when you reference the tracer via the NuGet package while installing a different version of the tracer. .NET treats assemblies of different versions as fundamentally different entities, so the Tracer used for "manual" tracing is disconnected from the Tracer used for "automatic" tracing. Calling the static Tracer.Configure() method therefore doesn't affect the automatic tracing's Tracer instance, which causes the problem you saw.
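If you want to detect this from inside the app, one option (a rough sketch, not an official API for this) is to compare the version of the Datadog.Trace assembly your code actually loaded against the TracerVersion reported in the startup log:

```csharp
using System;
using Datadog.Trace;

// Version of the Datadog.Trace assembly referenced via NuGet (the "manual" tracer).
var manualVersion = typeof(Tracer).Assembly.GetName().Version;
Console.WriteLine($"Manual tracer version: {manualVersion}");

// If this differs from the TracerVersion in the first DATADOG TRACER CONFIGURATION
// log message (written by the automatic tracer), the two versions are mismatched.
```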
Overall, this is a complex issue which we've mostly worked around in version 2.x of the tracer. In version 1.x the traces from your automatic and manual spans would be completely disconnected, and there was a risk of crashing. In 2.x, you won't crash, and the spans are connected, but they're still fundamentally separate.
Unfortunately, there's not much we can do about this other than ask you to keep the versions in sync. Given you're deploying with Dockerfiles, have you considered using the Datadog.Monitoring.Distribution NuGet package? That way you only ever deal with one version number, so the versions can't drift apart and you won't hit this issue. You can find the docs for it here: https://docs.datadoghq.com/tracing/setup_overview/setup/dotnet-core/?tab=nuget, but the tl;dr is:
- Reference `Datadog.Monitoring.Distribution` instead of `Datadog.Trace`
- Update your `CORECLR_PROFILER_PATH` etc. env vars in your Dockerfile as described in the docs (see the sketch below)
- Remove the installation of the `.deb` file from your Dockerfile
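For illustration, the env var changes might look something like the following. This is a sketch only: the paths assume your app is published to /app, so the files shipped by the NuGet package land under /app/datadog; check the linked docs for the exact values for your setup.

```dockerfile
# Illustrative paths: assumes the app is published to /app, so the
# Datadog.Monitoring.Distribution files end up under /app/datadog.
ENV CORECLR_ENABLE_PROFILING=1
ENV CORECLR_PROFILER={846F5F1C-F9AE-4B07-969E-05C26BC060D8}
ENV CORECLR_PROFILER_PATH=/app/datadog/linux-x64/Datadog.Trace.ClrProfiler.Native.so
ENV DD_DOTNET_TRACER_HOME=/app/datadog
```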
Hope that makes sense!
@kmcc049, with regards to your situation: outgoing spans are currently treated as separate services for various reasons, especially related to metrics. This is also raised in the issue here. There are ongoing discussions about how best to address this, as we're aware it's not ideal.
> I poked around a bit and found that using `SetServiceNameMappings` on a settings object lets me force them to live as one service.
Note that this has additional implications, as it can mess up the metrics for your services, so it's not something we recommend.
It looks like the questions were answered and there has been no activity for a few months, so I will close this issue.