Segfaulting without tracing enabled.
After setting up opencensus-php for a project I noticed segfaults on a specific URL.
After a bit of debugging it seems to be at least partially related to call depth. I was able to reproduce this with a simple example: https://github.com/foosinn/opencensus-segfault
This throws a segfault:
docker build -f Dockerfile-broken -t test . && docker run -it -v $PWD/script.php:/script.php test
While this is not:
docker build -f Dockerfile-working -t test . && docker run -it -v $PWD/script.php:/script.php test
The only difference between the two builds is that the broken one installs the opencensus PHP extension. The script.php doesn't use tracing at all; merely installing the module changes the behaviour.
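Roughly, script.php boils down to something like this (a simplified sketch with my own naming for MAX and recurse(); the exact code is in the repo linked above): recurse up to MAX and print the depth every 1000 iterations.

<?php
// Sketch of the repro, not the exact script from the repo: recurse until MAX
// and print progress every 1000 calls so the crash depth is visible.
const MAX = 100000;

function recurse(int $depth): int
{
    if ($depth % 1000 === 0) {
        echo $depth, PHP_EOL;
    }
    if ($depth >= MAX) {
        return $depth;
    }
    return recurse($depth + 1);
}

recurse(0);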
Sidenote: in the original code it also made a difference whether I specified a third parameter to preg_match for the results. I wasn't able to create a simple test case for that, but I guess the issue is the same in both cases.
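To illustrate what I mean by the third parameter (simplified, not the actual pattern or data from the original code):

<?php
// Simplified illustration: the same preg_match call with and without the
// optional third parameter that receives the capture results.
$subject = 'foo123';
preg_match('/foo(\d+)/', $subject);            // without a results array
preg_match('/foo(\d+)/', $subject, $matches);  // with $matches passed by reference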
Thanks for your help!
At which values of MAX do the segfaults start? The Opencensus extension intercepts every PHP method call; I could imagine that this always adds a bit of overhead to the stack, in the form of extra on-stack allocations and probably also additional function calls (e.g. due to the "replaced" method call function calling through to the "original" one).
It's hitting at about 32,000 (the exact value changes a bit between runs; the script prints the depth in steps of 1,000 iterations). Note that this runs only with the extension enabled, but with no tracing configured at all.
The affected code seems to need far fewer recursions, but of course there are more variables and other things happening there, which I think would also fill up the stack.
I would guess that even if more memory is used, it should never segfault? Also, are there ways to increase the available memory?
Note that this runs only with the extension enabled, but with no tracing configured at all.
That's to be expected: with the way the extension is currently implemented, every single method call needs to be intercepted, regardless of whether it is traced or not.
I would guess that even if more memory is used, it should never segfault? Also, are there ways to increase the available memory?
The segmentation fault is probably not due to running out of heap memory, but rather out of stack memory, which is much more limited (~512 kB by default on some OSes I think, but I'm not 100% sure). Changing the stack size might be possible via an OS limit; see this very old article on a similar problem, which uses ulimit for that.
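If it helps, one way to check which stack limit the PHP process is actually running with is the posix extension (just a sketch, assuming ext-posix is installed):

<?php
// Dumps the process resource limits; the stack entries are the relevant ones
// for this kind of crash (requires ext-posix).
print_r(posix_getrlimit());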
Again, I am not an expert on this, it's just my guess.
Thanks so much, I didn't realize this was an operating system limit.
For the record, adjusting the httpd systemd service fixes this:
# /etc/systemd/system/httpd.service.d/override.conf
[Service]
LimitSTACK=16777216
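(The override is picked up after a systemctl daemon-reload and a restart of the httpd service.)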
Should this be documented somehow in case others run into it?