The batch-processor has a bug
Issue description
batch-processor.lua:107: create_buffer_timer(): failed to create buffer timer: too many pending timers while logging request
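For context, the batch processor schedules its buffer flushes with OpenResty timers, so every buffered batch needs a timer slot; once the per-worker `lua_max_pending_timers` limit (default 1024) is reached, timer creation fails with exactly this error. A minimal sketch of the failure mode (the callback body and `batch` value are illustrative, not APISIX's actual code):

```lua
-- Sketch: ngx.timer.at() returns nil once the per-worker
-- lua_max_pending_timers limit is reached.
local batch = { "log line 1", "log line 2" }  -- illustrative payload

local function flush_buffer(premature, entries)
    if premature then
        return
    end
    -- send the batched log entries to the collector here
end

local ok, err = ngx.timer.at(0, flush_buffer, batch)
if not ok then
    -- err is "too many pending timers" when the limit is hit,
    -- which is what batch-processor.lua logs in this report
    ngx.log(ngx.ERR, "failed to create buffer timer: ", err)
end
```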
Environment
- apisix version (cmd: `apisix version`): 2.4
- OS (cmd: `uname -a`): CentOS 7.4
- OpenResty / Nginx version (cmd: `nginx -V` or `openresty -V`): openresty/1.19.3.1
- etcd version, if have (cmd: run `curl http://127.0.0.1:9090/v1/server_info` to get the info from the server-info API): v3.4.16
- apisix-dashboard version, if have: 2.5
- luarocks version, if the issue is about installation (cmd: `luarocks --version`): /bin/luarocks 2.3.0
Steps to reproduce
The batch-processor is used to report logs. 30 log entries are aggregated per batch, with a total length of about 10,000 bytes, at roughly 800 QPS. A plugin configuration along these lines reproduces the setup; see the sketch below.
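For illustration, the plugin section of a route using http-logger might look like this (a sketch only: the collector URL is a placeholder and field defaults may differ by version):

```yaml
plugins:
  http-logger:
    uri: "http://127.0.0.1:12001/log"  # placeholder log collector endpoint
    batch_max_size: 30       # entries aggregated per batch, as in this report
    buffer_duration: 60      # max seconds an entry may sit in the buffer
    inactive_timeout: 5      # flush if no new entries arrive for this long
```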
Actual result
Logs are not reported.
Error log
batch-processor.lua:107: create_buffer_timer(): failed to create buffer timer: too many pending timers while logging request
Expected result
Logs are reported normally.
Maybe you can increase the default timer limits with https://github.com/apache/apisix/pull/4826
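For reference, these limits map to OpenResty's `lua_max_pending_timers` (default 1024) and `lua_max_running_timers` (default 256) directives, which apply per worker. With that PR they should be tunable from `conf/config.yaml`, roughly like this (treat the exact key names and values as an assumption; check your `config-default.yaml`):

```yaml
nginx_config:
  # raises the per-worker lua_max_pending_timers directive (OpenResty default: 1024)
  max_pending_timers: 16384
  # raises the per-worker lua_max_running_timers directive (OpenResty default: 256)
  max_running_timers: 4096
```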
If QPS is greater than 50000, what should max_running_timers and max_pending_timers be set to? Will setting them too high affect APISIX's performance?
Are lua_max_pending_timers and lua_max_running_timers per nginx worker, or for all nginx workers combined?
> Are lua_max_pending_timers and lua_max_running_timers per nginx worker, or for all nginx workers combined?

They apply to each individual worker, so the whole instance can hold worker_processes times the configured number of timers.
> If QPS is greater than 50000, what should max_running_timers and max_pending_timers be set to? Will setting them too high affect APISIX's performance?

Most of the timers aren't related to the QPS, so you can try a large value. A huge timer limit doesn't affect performance: the limits are just boundaries, and no memory is preallocated for them.
If the timers are not released, that will cause a memory leak.
> If the timers are not released, that will cause a memory leak.

Using a high number of timers doesn't mean there is a timer leak. It is quite normal to raise the limits on connections, file descriptors and the like when you deploy a service.

If you think there is a real leak, please follow the instructions in https://github.com/apache/apisix/issues/4461#issuecomment-865780706. Keep APISIX running for a long time and see whether the number of timers reaches an insane figure like 100,000.
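One way to watch those counters is with OpenResty's built-in `ngx.timer.pending_count()` and `ngx.timer.running_count()`, sampled from `init_worker_by_lua_block` (a sketch only; the 60-second interval and log level are arbitrary choices):

```lua
-- Sketch: periodically log the per-worker timer counters so a
-- slow leak shows up as a steadily growing pending count.
local function report_timers(premature)
    if premature then
        return
    end
    ngx.log(ngx.WARN, "worker ", ngx.worker.id(),
            ": pending timers = ", ngx.timer.pending_count(),
            ", running timers = ", ngx.timer.running_count())
end

-- sample every 60 seconds in each worker
local ok, err = ngx.timer.every(60, report_timers)
if not ok then
    ngx.log(ngx.ERR, "failed to start timer monitor: ", err)
end
```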
> Using a high number of timers doesn't mean there is a timer leak.

We can see from our monitoring that APISIX has a memory leak. We have fixed the continuously growing timer count; let's watch and see what happens first.
This issue has been marked as stale due to 350 days of inactivity. It will be closed in 2 weeks if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the [email protected] list. Thank you for your contributions.
This issue has been closed due to lack of activity. If you think that is incorrect, or the issue requires additional review, you can revive the issue at any time.