dd-trace-php
[Bug]: Cannot edit span->name and span->service in trace_method anymore
Bug report
Hello,
For a few weeks now (maybe a couple of months?) we have not been able to edit the span's name and service inside the \DDTrace\trace_method callback.
We did not change the code; it stopped working suddenly.
\DDTrace\trace_method(
    'App\Exceptions\Handler',
    'addErrorToDatadogTraceSpan',
    function (DDTrace\SpanData $span, $args, $ret, $exception): void {
        // Ok, all exceptions thrown into the addErrorToDatadogTraceSpan function will be logged as a trace error
        $span->name = 'error.handler';
        $e = $args[0] ?? null;
        if ($e instanceof Throwable) {
            $span->resource = $e->getMessage();
        } elseif (is_string($e)) {
            $span->resource = $e;
        }
        $sendToErrorTracking = $args[1] ?? false;
        if ($sendToErrorTracking) {
            $span->service = 'error.handler';
        }
    }
);
Debugging this snippet, I can tell it still runs (in particular the $span->service = 'error.handler'; line), but errors listed on the error tracking page no longer include this data.
We upgraded the dd-trace extension to the latest version (1.9.0) without any luck.
It is really a pain because we have a monitor filtering on span.name:error.handler connected to Slack, which alerts us when a critical error occurs.
Is there a way to get this behavior back?
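For reference, here is a sketch of the same hook with a temporary error_log call added at the end of the closure. The logging line is purely illustrative and not part of our actual handler; it is just one way to confirm at runtime that the values are set on the SpanData object before the span is closed:

\DDTrace\trace_method(
    'App\Exceptions\Handler',
    'addErrorToDatadogTraceSpan',
    function (DDTrace\SpanData $span, $args, $ret, $exception): void {
        $span->name = 'error.handler';
        // ... same resource / service logic as in the snippet above ...

        // Temporary, illustrative debug output: dump the span fields right
        // before the method's span is closed and flushed by the tracer.
        error_log(sprintf(
            'ddtrace debug: name=%s service=%s resource=%s',
            $span->name,
            $span->service,
            $span->resource
        ));
    }
);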
PHP version
8.4.7
Tracer or profiler version
1.9.0
Installed extensions
[PHP Modules] apcu bcmath Core ctype curl datadog-profiling date ddappsec ddtrace dom exif fileinfo filter gd hash iconv intl json libxml mbstring mysqli mysqlnd openssl pcntl pcre PDO pdo_mysql pdo_sqlite Phar posix random readline redis Reflection session SimpleXML sodium SPL sqlite3 standard tokenizer xml xmlreader xmlwriter Zend OPcache zip zlib
[Zend Modules] Zend OPcache datadog-profiling ddappsec ddtrace
Output of phpinfo()
{
"date": "2025-05-20T08:41:09Z",
"os_name": "Linux ip-10-10-68-124.eu-west-3.compute.internal 5.10.235-227.919.amzn2.x86_64 #1 SMP Sat Apr 5 16:59:05 UTC 2025 x86_64",
"os_version": "5.10.235-227.919.amzn2.x86_64",
"version": "1.9.0",
"lang": "php",
"lang_version": "8.4.1",
"env": "preprod",
"enabled": true,
"service": "web",
"enabled_cli": true,
"agent_url": "http:\/\/localhost:8126",
"debug": false,
"analytics_enabled": false,
"sample_rate": -1,
"sampling_rules": [],
"tags": [],
"service_mapping": [],
"distributed_tracing_enabled": true,
"dd_version": "2025-05-16-10-24-40-26-gec40bcf",
"architecture": "x86_64",
"instrumentation_telemetry_enabled": true,
"sapi": "fpm-fcgi",
"datadog.trace.sources_path": "\/opt\/datadog\/dd-library\/1.9.0\/dd-trace-sources\/src",
"open_basedir_configured": false,
"uri_fragment_regex": null,
"uri_mapping_incoming": null,
"uri_mapping_outgoing": null,
"auto_flush_enabled": false,
"generate_root_span": true,
"http_client_split_by_domain": false,
"measure_compile_time": true,
"report_hostname_on_root_span": false,
"traced_internal_functions": null,
"enabled_from_env": true,
"opcache.file_cache": null,
"sidecar_trace_sender": true
}
Upgrading from
No response
Hi @devantoine,
Thanks for the report! That does seem confusing, and we appreciate you digging into it.
From what you described, it sounds like the $span->name and $span->service assignments do run — but those values aren't showing up in the Error Tracking product, or possibly in the Slack alert you're using. Just to clarify and help us narrow things down:
- Are the custom span name and service values visible in the trace view in APM? If you click into a trace that should have been affected, can you confirm whether the span name is error.handler and whether the resource and service values match what you set?
- Is this issue only affecting Error Tracking, or also the main trace view?
- Have you recently enabled or changed settings for Error Tracking, or started using a newer version of it?
- Would you be able to share:
  - A trace that includes this span?
  - A screenshot of what you expect to see in Error Tracking vs. what you do see?
Once we know whether the data is being set but not surfaced — or not set at all — we can get much closer to a fix or workaround.
Thanks for your reply @PROFeNoM!
To answer your questions, I'm specifically talking about the error tracking. You can ignore Slack; it was just to give more context on why this is bothering us.
I checked the trace view in APM and the span service does appear. But something is strange there: when I first checked this morning after seeing your reply, the span service appeared right below the "Overview" tab, as shown in the screenshot (ignore the top arrow):
Now, I don't see it anymore...
But in the facets sidebar, under the "Service" and "Operation name" sections, I do see the "error.handler" service.
Now, in the error tracking, filtering by operation_name:error.handler does list some results for the last 24 hours. I cannot explain what's happening; I don't think this weird behavior comes from our side.
Anyway, what's listed under that filter in the error tracking does not match what's listed in the APM trace view with the same filter.
As you can see, the APM trace view lists 3 SQLSTATE errors, which don't appear in the error tracking:
Hey @devantoine,
I haven't tested it, but in your manual trace_method, do you get the expected behavior if you add the tag track_error: true? Theoretically, this should force Error Tracking to track whatever exception is caught by addErrorToDatadogTraceSpan. This is a quick try 🤞
$span->meta['track.error'] = 'true';
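For clarity, a quick sketch (untested) of where that line would go in the closure from your original snippet, with everything else left unchanged:

\DDTrace\trace_method(
    'App\Exceptions\Handler',
    'addErrorToDatadogTraceSpan',
    function (DDTrace\SpanData $span, $args, $ret, $exception): void {
        $span->name = 'error.handler';
        // ... existing resource / service logic ...

        // Suggested addition: hint that this span should be tracked by Error Tracking.
        $span->meta['track.error'] = 'true';
    }
);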
Hello @PROFeNoM
Sorry for the late reply; I needed some time to test the track.error meta.
Unfortunately, it does not fix my issue.
Errors listed in the error tracking show neither our "error.handler" service name nor the span name.
But, when opening the same error in the APM view, I do see it.
After reconsidering this, I think the Datadog extension using DDTrace\trace_method is functioning correctly. However, there seems to be an issue with error tracking or a recent change that's making span usage unreliable.
@PROFeNoM, should we go ahead and close this issue?