The maxPendingAddRequestsPerThread configuration is inconsistent with the actual behavior, resulting in a Netty direct memory OOM
BUG REPORT
Describe the bug
The maxPendingAddRequestsPerThread setting in bookkeeper.conf does not match the actual behavior, which can lead to a Netty direct memory OOM.
To Reproduce
Steps to reproduce the behavior:
- Set maxPendingAddRequestsPerThread=1000 in bookkeeper.conf.
- Simulate a RocksDB failure so that the flush is blocked for a long time.
- addEntry requests queue up in the thread pool; the number of queued requests grows to twice the configured maxPendingAddRequestsPerThread.
- The doubled backlog of queued requests causes a Netty direct memory OOM.
Expected behavior
If maxPendingAddRequestsPerThread is set to 1000, the number of queued tasks per thread should never exceed 1000; otherwise memory usage becomes unpredictable and can lead to an OOM.
By reading the source code, I found that the root cause is the localTasks logic in SingleThreadExecutor. Before each execution round, the worker thread drains all tasks from the thread pool queue into localTasks. At that point the bounded queue is effectively empty and accepts new submissions, even though the tasks in localTasks have not been executed yet. In the extreme case, the number of tasks pending in each SingleThreadExecutor can therefore reach twice the configured limit.

I assume the purpose of localTasks is to avoid lock contention and improve performance, which is fine, but we should still address the doubling of the pending task count.
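The effect can be illustrated with a minimal standalone sketch (not the actual BookKeeper code; class and variable names here are illustrative). A bounded queue of capacity N is drained into a local list before execution, which frees all N slots for new submissions while the drained tasks are still pending, so up to 2N tasks can be outstanding at once:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DrainDoublingDemo {
    public static void main(String[] args) {
        final int capacity = 1000;
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(capacity);

        // Fill the bounded queue to its limit.
        for (int i = 0; i < capacity; i++) {
            queue.offer(() -> { });
        }

        // The worker drains everything into a local list before running it,
        // mirroring the localTasks optimization: the queue is now "empty".
        List<Runnable> localTasks = new ArrayList<>();
        queue.drainTo(localTasks);

        // While localTasks is still pending (e.g. blocked on a stalled flush),
        // a second full batch of submissions is accepted.
        for (int i = 0; i < capacity; i++) {
            queue.offer(() -> { });
        }

        int totalPending = localTasks.size() + queue.size();
        System.out.println("Total pending tasks: " + totalPending); // 2000
    }
}
```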
Additional context
@merlimat Hi, please pay attention to this issue, thanks!
It seems we need to add an atomic counter that tracks tasks still in flight, i.e. tasks that have been drained into localTasks but not yet executed.
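One possible shape of such a fix is sketched below (this is not the code from the PR; class and method names are hypothetical). An AtomicInteger is incremented on submission and decremented only after a task has actually run, so the limit covers both queued and drained-but-unexecuted tasks:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedDrainExecutor {
    private final int maxPending;
    private final BlockingQueue<Runnable> queue;
    // Counts every task from submission until it finishes running,
    // including tasks already drained into the local batch.
    private final AtomicInteger pendingTasks = new AtomicInteger();

    public BoundedDrainExecutor(int maxPending) {
        this.maxPending = maxPending;
        this.queue = new ArrayBlockingQueue<>(maxPending);
    }

    public void submit(Runnable task) {
        // Reject before enqueueing if the total backlog (queued + drained
        // but not yet executed) has reached the limit.
        if (pendingTasks.incrementAndGet() > maxPending) {
            pendingTasks.decrementAndGet();
            throw new RejectedExecutionException("pending task limit reached");
        }
        queue.offer(task);
    }

    // One iteration of the worker loop: drain a batch and run it.
    public void runOneBatch() {
        List<Runnable> localTasks = new ArrayList<>();
        queue.drainTo(localTasks);
        for (Runnable task : localTasks) {
            try {
                task.run();
            } finally {
                // Release a slot only after the task actually executed.
                pendingTasks.decrementAndGet();
            }
        }
    }

    public static void main(String[] args) {
        BoundedDrainExecutor exec = new BoundedDrainExecutor(2);
        exec.submit(() -> { });
        exec.submit(() -> { });
        boolean rejected = false;
        try {
            exec.submit(() -> { });
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        System.out.println("third submit rejected: " + rejected);
        exec.runOneBatch();
        exec.submit(() -> { });
        System.out.println("submit after batch ran: ok");
    }
}
```

This keeps the lock-free batch drain intact; only the admission check changes, so the performance benefit of localTasks is preserved.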
I submitted a PR to fix this issue: https://github.com/apache/bookkeeper/pull/4488