rcanavan
I have obviously overlooked the note _Each of these interfaces can, of course, add more event handlers in addition to the ones listed below_ when I took the list...
The production servers will get APCu 5.1.8 next week, our test servers have it already. Have any deadlock bugs been fixed between 5.1.7 and 5.1.8?
OK, so there's at least some chance that the root cause may be fixed. I also neglected to grab a core dump of the hung processes, but I'm not sure...
We've encountered another of those deadlocks, this time with APCu 5.1.8. Same backtrace and zbacktrace. Do you have any idea how I can identify which process...
APCu 5.1.9, with PHP 7.1.13: deadlock on 3 servers at approximately the same time; all processes (on one server) have essentially the same backtrace, with identical key=key@entry=... but slightly different...
No messages, aside from one `seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning ...` each second, presumably after the deadlock occurred, despite `error_reporting = E_ALL`
There's not even a single call to apcu_entry() in our entire codebase. Unlike #246, our problem is very rare (i.e. once every other month or so, with ~10 servers processing...
Is it possible to identify the process that is holding the lock (as opposed to waiting for it) with futexes, like it is via /proc/locks for flock() locks?
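For reference, one approach that might work here: glibc records the owner's kernel TID inside the lock structure itself (`__data.__owner` for a pthread mutex), so the holder can usually be read from gdb or a core dump of any waiting process. A minimal, self-contained C sketch of the idea, assuming a glibc pthread mutex; APCu's actual lock type may differ, and the field names are glibc internals that can change between versions:

```c
/* Sketch only: demonstrates that glibc's pthread_mutex_lock() records the
 * owner's kernel TID in the (glibc-internal) __data.__owner field, which is
 * one way to identify the holder of a contended lock from gdb or a core dump. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *inspect(void *arg)
{
    (void)arg;
    /* 0 means unlocked; otherwise the TID of the owning thread. */
    printf("owner TID recorded in the mutex: %d\n", lock.__data.__owner);
    return NULL;
}

int main(void)
{
    printf("locking thread TID: %ld\n", (long)syscall(SYS_gettid));
    pthread_mutex_lock(&lock);

    pthread_t t;
    pthread_create(&t, NULL, inspect, NULL);
    pthread_join(t, NULL);

    pthread_mutex_unlock(&lock);
    return 0;
}
```

The same field can be printed directly from gdb against the mutex visible in the hung frame, e.g. `p *(pthread_mutex_t *) ADDR` and then reading `__data.__owner`.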
I'll try 5.1.11 as soon as possible. However, can you fix the headline of the release on https://github.com/krakjoe/apcu/releases ? It still says 5.1.10.
> What was the process `53650` doing? As you can see, it's the blocking one:
> `futex(0x7f7d4ee8b094, FUTEX_WAIT, 53650, NULL)`

Is that actually the PID? I'm getting `184` with all...
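For what it's worth (and purely as an assumption on my side, since the exact lock type isn't clear from the trace): glibc's ordinary mutexes and rwlocks do not store the owner's TID in the futex word, so the value strace shows for `FUTEX_WAIT` is just the expected word contents; only PI/robust futexes encode the owner's TID there. A small C sketch of how such a word would be decoded if it were a PI/robust futex; the constants come from `<linux/futex.h>`, and 53650 is just the value from the trace above:

```c
/* Sketch only: decoding a futex word under the assumption that the lock is a
 * PI/robust futex.  Plain glibc locks keep state values (e.g. 0/1/2) in the
 * word instead, so a FUTEX_WAIT value is not necessarily a PID/TID. */
#include <linux/futex.h>
#include <stdio.h>

int main(void)
{
    /* value taken from: futex(0x7f7d4ee8b094, FUTEX_WAIT, 53650, NULL) */
    unsigned int word = 53650;

    printf("owner TID (only meaningful for PI/robust locks): %u\n",
           word & FUTEX_TID_MASK);
    printf("FUTEX_WAITERS set:    %s\n", (word & FUTEX_WAITERS) ? "yes" : "no");
    printf("FUTEX_OWNER_DIED set: %s\n", (word & FUTEX_OWNER_DIED) ? "yes" : "no");
    return 0;
}
```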