[BUG] Potential incompatibility with ionCube Extension
Bug description
We are using the Unirgy Rapidflow module within a Magento 2 instance. This module is partially ionCube encoded. We've found that if the ddtrace extension is enabled, this module can cause segmentation faults in PHP when performing some operations. Disabling the ddtrace extension or setting ddtrace.enable to Off in PHP INI resolves the issues with the Rapidflow module.
The module's functionality is initiated through a web request, which is handled by Apache and passed via mod_proxy_fcgi to PHP-FPM. Testing was done using a Docker container running locally on my development machine. We run similar setups in our production environment, which is experiencing the exact same issue.
I'm more than willing to do debugging on my end given a bit of direction on what would be most helpful. Since this is a local environment with a custom container image, I can install whatever tools are necessary without any constraints.
Note that we were aware of the existing incompatibility with ionCube prior to version 0.71.0. We used the SourceGuardian extension with a SourceGuardian-encoded version of this module to work around that problem. This worked until a recent update to the module, when we started seeing segmentation faults. At that time, we updated to ddtrace 0.71.0 and the ionCube-encoded version of the module along with the latest ionCube loader, 11.0.1. This seemed to be fully working, but we started to find these segmentation faults while running some profiles, even though other similar profiles work without issue.
Also note that because this is a local instance, I do not have the agent set up. This does not impact the ability to reproduce this issue, so I did not take the time to set that up.
PHP version
PHP 7.4.28 (cli) (built: Mar 29 2022 03:39:25) ( NTS ) Copyright (c) The PHP Group Zend Engine v3.4.0, Copyright (c) Zend Technologies with the ionCube PHP Loader + ionCube24 v11.0.1, Copyright (c) 2002-2022, by ionCube Ltd. with Zend OPcache v7.4.28, Copyright (c), by Zend Technologies with Xdebug v3.1.4, Copyright (c) 2002-2022, by Derick Rethans with ddtrace v0.72.0, Copyright Datadog, by Datadog
Tracer version
0.71.0, 0.72.0
Installed extensions
[PHP Modules] amqp apcu bcmath bz2 calendar Core ctype curl date ddtrace dom exif fileinfo filter ftp gd geoip gettext hash iconv imagick imap intl ionCube Loader json ldap libxml mbstring memcached mongodb mysqli mysqlnd openssl pcntl pcre PDO pdo_mysql pdo_pgsql pdo_sqlite pgsql Phar posix readline redis Reflection session shmop SimpleXML soap sockets sodium SPL sqlite3 standard sysvmsg sysvsem sysvshm tokenizer vips xdebug xml xmlreader xmlrpc xmlwriter xsl Zend OPcache zip zlib
[Zend Modules] Xdebug Zend OPcache ddtrace the ionCube PHP Loader + ionCube24
OS info
PRETTY_NAME="Debian GNU/Linux 10 (buster)" NAME="Debian GNU/Linux" VERSION_ID="10" VERSION="10 (buster)" VERSION_CODENAME=buster
Diagnostics and configuration
Output of phpinfo() (ddtrace >= 0.47.0)

Hey there,
we've done extensive testing with ioncube - that's why we un-blacklisted it. A helpful first step would be having a core dump: https://docs.datadoghq.com/tracing/setup_overview/setup/php/?tab=containers#troubleshooting-an-application-crash.
Even more awesome would be a reproducer, if you have one.
Thanks!
Thank you for the link to the documentation, I'll look at getting a core dump.
I do want to clarify that I did not mean to imply absolute confidence that this is an issue specifically between the ionCube and ddtrace extensions. The code is not visible, so this could very well be an issue with that code itself triggering the crash. At this point, all I know is that the crashing occurs when ddtrace is enabled, so I started here. I'm hoping that the debugging will help illuminate where it's crashing and that this will lead to a next step.
Creating a small example to reproduce it that I can share would be difficult given the encoding aspect and the amount of data and code dependencies involved. That data and those dependencies could be core to reproducing the issue as well, so it may prove difficult to reproduce in a clean way.
@bwoebi
So I seem to have hit a bit of a roadblock and am not sure how to proceed. The pre-built loader extensions from ionCube will not load into a debug build of PHP. I get the following message:
Cannot load the ionCube PHP Loader - it was built with configuration API320190902,NTS, whereas running engine is API320190902,NTS,debug
Was the Datadog team able to get loaders built for debug versions of PHP to diagnose the conflicts or did the ionCube team make the necessary fixes on their end such that it was no longer necessary to blacklist their module?
It's looking like I may have to go down the route of reaching out to the module provider to send a decoded version for us to test with and/or to ionCube for a debug build of their loader extension.
Hey @nhughart Yes, there's apparently no way to get hold of debug builds for the ionCube Loader.
However, it should be perfectly fine to use a regular non-debug build; the important part is that you have debug symbols (well, for ionCube you won't have any, but you or the distro will for everything else) so that the backtrace has names and not only ???.
@bwoebi Is there a way for me to force PHP to load the non-debug extension into a debug build of PHP or is there some way to get debug symbols in PHP without it reporting as a debug build? Right now it is refusing to load the extension and I haven't found any information that indicates it's possible to force it to load.
No, it is impossible to mix debug and non-debug builds of PHP (extensions). There are subtle differences between them, making them binary incompatible.
@nhughart Sorry, I just realized I missed your debug symbols question. Typically your distribution has them available (follow https://docs.datadoghq.com/tracing/setup_overview/setup/php/?tab=containers#install-debug-symbols).
@bwoebi
Unfortunately, we use a Docker container based on the official PHP images, which build PHP from source. I could potentially look at building an image that would allow us to install the debug symbols from a package. Alternatively, I can look at how those packages are built so I can create a similar build of PHP with the symbols separate. I assumed installing these packages would replace the PHP binaries with ones built with the debug flag set, which would lead me back to the same problem. If that's not the case, then I can investigate that avenue.
It does appear it may be possible to build with the -g CFLAG without using --enable-debug. I will give this a shot and see if I can get things to load.
Ah, okay. Yes, in case you are compiling PHP yourself, configuring PHP with CFLAGS=-g is absolutely the way to go.
So I was able to get a bit further and capture a backtrace, but I'm not sure it's going to be terribly helpful:
(gdb) bt
#0 0x00007fafba407162 in ?? () from /usr/local/lib/php/extensions/no-debug-non-zts-20190902/ioncube_loader_lin_7.4.so
#1 0x00007fafba402917 in ?? () from /usr/local/lib/php/extensions/no-debug-non-zts-20190902/ioncube_loader_lin_7.4.so
#2 0x00007fafba402b00 in ?? () from /usr/local/lib/php/extensions/no-debug-non-zts-20190902/ioncube_loader_lin_7.4.so
#3 0x00005592a6283b4d in ZEND_DO_FCALL_SPEC_RETVAL_USED_HANDLER () at /usr/src/php/Zend/zend_vm_execute.h:1714
#4 0x00005592a62892c7 in ZEND_USER_OPCODE_SPEC_HANDLER () at /usr/src/php/Zend/zend_vm_execute.h:2670
#5 0x00005592a635a90a in execute_ex (ex=0x7fafba6174f0) at /usr/src/php/Zend/zend_vm_execute.h:53597
#6 0x00007fafba22bcca in xdebug_execute_ex (execute_data=0x7fafba6174f0) at /tmp/pear/temp/xdebug/src/base/base.c:779
#7 0x00007fafba402a1e in ?? () from /usr/local/lib/php/extensions/no-debug-non-zts-20190902/ioncube_loader_lin_7.4.so
#8 0x00005592a6283237 in ZEND_DO_FCALL_SPEC_RETVAL_UNUSED_HANDLER () at /usr/src/php/Zend/zend_vm_execute.h:1602
#9 0x00005592a62892c7 in ZEND_USER_OPCODE_SPEC_HANDLER () at /usr/src/php/Zend/zend_vm_execute.h:2670
#10 0x00005592a635a90a in execute_ex (ex=0x7fafba6164d0) at /usr/src/php/Zend/zend_vm_execute.h:53597
#11 0x00007fafba22bcca in xdebug_execute_ex (execute_data=0x7fafba6164d0) at /tmp/pear/temp/xdebug/src/base/base.c:779
#12 0x00007fafba402a1e in ?? () from /usr/local/lib/php/extensions/no-debug-non-zts-20190902/ioncube_loader_lin_7.4.so
#13 0x00005592a6283237 in ZEND_DO_FCALL_SPEC_RETVAL_UNUSED_HANDLER () at /usr/src/php/Zend/zend_vm_execute.h:1602
#14 0x00005592a62892c7 in ZEND_USER_OPCODE_SPEC_HANDLER () at /usr/src/php/Zend/zend_vm_execute.h:2670
#15 0x00005592a635a90a in execute_ex (ex=0x7fafba616340) at /usr/src/php/Zend/zend_vm_execute.h:53597
#16 0x00007fafba22bcca in xdebug_execute_ex (execute_data=0x7fafba616340) at /tmp/pear/temp/xdebug/src/base/base.c:779
#17 0x00007fafba402a1e in ?? () from /usr/local/lib/php/extensions/no-debug-non-zts-20190902/ioncube_loader_lin_7.4.so
#18 0x00005592a6283237 in ZEND_DO_FCALL_SPEC_RETVAL_UNUSED_HANDLER () at /usr/src/php/Zend/zend_vm_execute.h:1602
#19 0x00005592a62892c7 in ZEND_USER_OPCODE_SPEC_HANDLER () at /usr/src/php/Zend/zend_vm_execute.h:2670
#20 0x00005592a635a90a in execute_ex (ex=0x7fafba615ff0) at /usr/src/php/Zend/zend_vm_execute.h:53597
#21 0x00007fafba22bcca in xdebug_execute_ex (execute_data=0x7fafba615ff0) at /tmp/pear/temp/xdebug/src/base/base.c:779
#22 0x00007fafba402a1e in ?? () from /usr/local/lib/php/extensions/no-debug-non-zts-20190902/ioncube_loader_lin_7.4.so
Seems like it's dying somewhere within the ioncube loader extension. I can disable xdebug and get another dump if that's making it hard to parse the results. I can confirm that this is reproducible without xdebug as well.
Not sure if the core dump will contain any sensitive data so would prefer to share privately. Let me know the best way to get that to you.
Yes, that's sadly not particularly helpful. However, we might be able to figure out what's going on given the VM state and the compiled assembly.
To share the core dump, can you share it directly with Datadog support and notify me once you have? Is it also possible to somehow share the Docker container there as well? I will need the matching binaries to open the core dump.
Otherwise, if you want to discuss that more privately, feel free to reach out to me on the datadog public slack (https://chat.datadoghq.com/).
@bwoebi I've submitted the support ticket, 797198, with the core dump, a dump of the Docker image, and the Docker Compose configuration we use to start it. The environment variables in particular impact how PHP gets configured, but I'm not sure that will matter for reviewing the core dump.
If chatting via Slack will help, I can hop in there as well; just let me know. As someone who debugs problems regularly, I understand the need for as much information as possible, so let me know if I can get you anything else in this scenario.
Thanks, I've downloaded the image and successfully opened the core dump. I'll investigate tomorrow morning and get back to you if it turns out to be fruitless.
@nhughart Hm, maybe it's the dynamic encryption feature? The opcodes of the Unirgy\RapidFlow\Model\ResourceModel\Catalog\Product\AbstractProduct::_getAttributeSetFields (the currently executed function) look like they were not decrypted, or were decrypted incorrectly. (Just speculating.)
From what I can tell for sure, op_array->reserved[3] && (op_array->line_start & 0x20) evaluates to true for that specific function, which seems to trigger special handling in ionCube.
Also, I can tell that ionCube is trying to dereference a value from that function's run_time_cache at slot zero. ddtrace (on PHP 7.4 only ... if you don't need 7.4 features, you could try downgrading to PHP 7.3?) allocates a slot at startup (which turns out to be slot zero as well in the core dump) and caches there whether a function shall be traced or not. This particular implementation detail on the ddtrace side will change in the next month.
I'm not sure what exactly triggers these particular flags. I suppose you have no chance of viewing the unencrypted RapidFlow code.
@bwoebi Yeah unfortunately I don't have the ability to look at the code in their module. We're going to reach out to Unirgy and/or ionCube to see if we can get any assistance from them on this issue as well.
As for the PHP version, we need 7.4 for other portions of the Magento framework we're working with, so downgrading to 7.3 is unfortunately not an option.
Do you believe this change to the usage of the runtime cache in the ddtrace module will potentially fix the issue we're running into? Based on my reading of what you said, it sounds like one or both extensions are assuming they can use slot zero. Is this what's leading to the crash, or is it potentially just causing additional issues? If this change has the potential to fix our issue, I'd definitely be interested in testing any early builds if/when possible.
Thank you very much for the time and effort looking into this, it's definitely appreciated.
Yes, it is possible. The future version of the tracer will not make use of the run_time_cache at all. So at least this specific case will be compatible.
It seems to me that ionCube is assuming that it can use run_time_cache slot zero, while ddtrace is allocating its slot via the specific API for this use case: zend_get_op_array_extension_handle.
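To make the suspected collision concrete, here's a minimal standalone sketch in plain C (not actual Zend, ddtrace, or ionCube code; all names are made up) of why hard-coding slot zero of a per-function cache breaks as soon as another extension legitimately reserves that slot:

/*
 * Simplified standalone model of the run_time_cache collision described above.
 * This is NOT real Zend/ddtrace/ionCube code -- just an illustration of why two
 * extensions must not assume they own the same slot index.
 */
#include <stdio.h>

#define CACHE_SLOTS 4

/* Stand-in for a function's run_time_cache: an array of opaque pointers. */
typedef struct {
    const char *function_name;
    void       *run_time_cache[CACHE_SLOTS];
} fake_op_array;

/* Stand-in for the engine handing each extension its own distinct slot index. */
static int next_free_slot = 0;
static int reserve_slot(void) { return next_free_slot++; }

int main(void) {
    fake_op_array fn = { "_getAttributeSetFields", { 0 } };

    /* A tracer-like extension reserves a slot at startup (it happens to get
     * slot 0) and caches a "should this function be traced?" flag there. */
    static int trace_flag = 1;
    int tracer_slot = reserve_slot();
    fn.run_time_cache[tracer_slot] = &trace_flag;

    /* A loader-like extension that *assumes* slot 0 belongs to it now reads
     * the tracer's pointer and misinterprets it as its own data. */
    void *loader_data = fn.run_time_cache[0]; /* hard-coded slot 0: the bug */
    printf("loader reads %p from slot 0, which is really the tracer's flag (%d)\n",
           loader_data, *(int *)loader_data);

    /* The correct pattern: the loader reserves its own slot instead. */
    int loader_slot = reserve_slot();
    printf("engine-assigned slots: tracer=%d, loader=%d (no overlap)\n",
           tracer_slot, loader_slot);
    return 0;
}

In the real engine the slot index comes from zend_get_op_array_extension_handle rather than a plain counter, but the collision mechanics are the same.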
This future version of ddtrace is not multiple months away, only a few weeks. (planned at least :-))
@bwoebi Are the run time cache changes you're referring to here in the 0.74.0 release? I saw the following changelog entry:
- Fix run_time_cache initialization for closure calls with foreign scope on PHP 7.4-8.1
I did test this version locally and ran into the same crashes we saw before. This changelog entry doesn't sound like the run_time_cache usage was fully removed, though, so I'm guessing that work is still pending.
@nhughart No, this is unrelated. This is something specific to calling traced function callbacks.
The changes for this are ready on the https://github.com/DataDog/dd-trace-php/tree/bob/interceptor branch, but they are not reviewed yet (it's a ten-thousand-line diff ...). If you are curious, you can try to compile it yourself - it should work. Otherwise, the changes are scheduled to be merged within the next few weeks and then released afterwards.
If you want to alpha-test this but don't want to compile it yourself, I can provide you with a shared object; just contact me directly on the Datadog public Slack.
@bwoebi
Thanks for the update. We can likely wait for the official builds to come around. We'll keep an eye out for the updates.
@nhughart I'm noticing this issue is still open; the fix was merged a month ago - have you had a chance to retry with 0.76.2 or newer?
Since we haven't had any feedback and are fairly sure the fix was merged in the interim, I'm closing this issue.
If I'm wrong to close it and the problem persists with the latest version of the tracer, please re-open this issue with an update.