RuntimeError: can't start new thread (0.23.2)
Long story short
The operator pod gets killed after hitting the thread limit: the process keeps spawning new threads, and under load the pod is killed roughly every 2 hours.
Description
RuntimeError: can't start new thread
Traceback (most recent call last):
File "/usr/local/bin/kopf", line 8, in
root@s:/# ps huH p 8 | wc -l
624
root@s:/# ps -o nlwp 8
NLWP
624
root@s:/# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 08:46 ?        00:00:00 /bin/sh -c kopf run --standalone /handlers.py
root         8     1  0 08:46 ?        00:00:13 /usr/local/bin/python /usr/local/bin/kopf run --standalone /handlers.py
root       634     0  0 11:00 pts/0    00:00:00 bash
root       649   634  0 11:01 pts/0    00:00:00 ps -ef
root@serviceendpoint-6b69949674-tvbj6:/# cat /proc/sys/kernel/threads-max
6180721
root@s:/# ps -o nlwp 8
NLWP
631
root@s:/# ps -o nlwp 8
NLWP
652
import kopf
The exact command to reproduce the issue
kopf run ...
The full output of the command that failed
Environment
- Kopf version: 0.23.2
- Kubernetes version: 1.15.3
- Python version: 3.7
- OS/platform: Ubuntu 16.04
Python packages installed
Hello. Thanks for reporting.
Can you please clarify the version? Are you sure it is version 0.23.2?
I see this line:
File "/usr/local/lib/python3.7/site-packages/kopf/clients/watching.py", line 62, in streaming_aiter
yield await loop.run_in_executor(executor, streaming_next, src)
This synchronous approach was removed in #227 (line link), which was released as 0.23 (followed by 0.23.1 and 0.23.2), and replaced with natively async aiohttp-based cycles; it has remained async since 0.23.
One idea of how this happens:
Kopf uses asyncio's threaded executor for synchronous handlers (declared with def, not async def).
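For illustration, a minimal sketch of the two handler kinds (the resource and handler names here are made up, not taken from the reported operator): the def handler is dispatched to the thread pool executor, while the async def handler runs directly on the event loop.

import asyncio
import kopf

@kopf.on.event('', 'v1', 'pods')
def sync_pod_event(event, **kwargs):
    # Synchronous handler: Kopf offloads it to the asyncio thread pool executor.
    print("sync handler saw:", event.get('type'))

@kopf.on.event('', 'v1', 'pods')
async def async_pod_event(event, **kwargs):
    # Asynchronous handler: awaited directly on the event loop, no worker thread needed.
    await asyncio.sleep(0)
    print("async handler saw:", event.get('type'))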
Python's threaded executor adds one more worker thread on every use until max_workers is reached (link): not all threads at once, but one by one.
The default max_workers is os.cpu_count() * 5 (link). So, on a MacBook, it can be 8 * 5 = 40 (perhaps due to hyper-threaded CPUs). On huge K8s nodes, it can be up to 40 * 5 = 200 threads (assuming 40 regular cores), or 2 * 40 * 5 = 400 (40 hyper-threaded cores), or even more.
It keeps the worker threads running even when they are not used, and it does not reuse the idle workers for the next tasks (or maybe it does, but it still adds extra idle workers until the limit is reached).
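This growth can be reproduced with the standard library alone. A minimal standalone sketch, assuming Python 3.7 as in this report (later Python versions reuse idle workers, so the output may differ):

import concurrent.futures
import os
import threading
import time

def task():
    time.sleep(0.01)  # A short synchronous "handler".

# No max_workers given: on Python 3.7 this defaults to os.cpu_count() * 5.
executor = concurrent.futures.ThreadPoolExecutor()
print("cpu_count:", os.cpu_count())

for i in range(20):
    executor.submit(task)
    time.sleep(0.1)  # The previous worker is idle again by now.
    # On Python 3.7, each submit still spawns one more worker thread
    # (until max_workers is reached) instead of reusing the idle one.
    print(f"after submit #{i + 1}: {threading.active_count()} threads")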
Maybe, at some point, it reaches the RAM limits of the pod and dies.
This can be controlled by using an already existing but undocumented config (link):
import kopf
kopf.config.WorkersConfig.set_synchronous_tasks_threadpool_limit(100)
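For context, a minimal sketch of how this could be placed in a handlers.py; the limit of 10 and the pod handler are illustrative only, the WorkersConfig call is the same undocumented API as above:

import kopf

# Cap the asyncio thread pool used for synchronous handlers.
# Set at import time here, before the operator starts handling events.
kopf.config.WorkersConfig.set_synchronous_tasks_threadpool_limit(10)

@kopf.on.event('', 'v1', 'pods')
def on_pod_event(event, **kwargs):
    # A synchronous handler: it executes in the (now capped) thread pool.
    pass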
I was able to catch this on one of our operators, which is supposed to do nothing during that time (but contains synchronous handlers for @kopf.on.event of pods): the thread count was growing overnight when it should have stayed flat. I will try the trick above and see if it helps over the next few nights.
It did help. The thread count does not grow once the operator is started and used: it stays at 12 (10 configured for the executor, 1 for the main thread, and 1 for something of Python's own, perhaps).
So far, the issue is a matter of asyncio's thread executor setup, which can be done at the operator level as needed. I suggest that the framework itself keeps no defaults or assumptions of its own about the execution environment.
Tested with Kopf 0.25.
@amolkavitkar Can you please check if this solution helps you? (Also, please check the version.)