lua-resty-limit-traffic
limit_req does not behave as expected in openresty/1.15.8.2
Hello @agentzh, I have a question for you.
I found some problems using limit_req: I set a limit of 500, but about 3000 requests per second still get a 200 response.
OpenResty version: openresty/1.15.8.2
nginx.conf:

```nginx
lua_shared_dict limit_req_store 1m;
```

limit_config:

```json
{ "conn_limit": -1, "req_total_limit": 500, "req_host_limit": -1, "org_host_limit": -1, "limit_rate_after": 0, "limit_rate": -1, "exception": {} }
```
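For completeness, the nginx.conf wiring looks roughly like this (a sketch; only the `lua_shared_dict` line is from my actual config, the `access_by_lua_block` and module name are illustrative):

```nginx
http {
    # shared memory zone used by resty.limit.req
    lua_shared_dict limit_req_store 1m;

    server {
        location / {
            # illustrative wiring; the real code lives in a Lua module
            access_by_lua_block {
                require("my_limits").req_limit_process(ngx.var.http_host)
            }
        }
    }
}
```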
lua script:

```lua
_M.req_limit_process = function(host)
    local conf = _M.conf

    if not conf.req_total_limit or conf.req_total_limit < 0 then
        return
    end

    if conf.req_total_limit and conf.req_total_limit >= 0 then
        ngx_logger(NGX_ERR, conf.req_total_limit)
        local limiter, err = limit_req.new(limit_req_store, conf.req_total_limit, 0)
        if not limiter then
            ngx_logger(NGX_ERR, "create req total limiter failed, err: ", err)
            return bdapp.exit(500, errno.UNKNOW_ERROR)
        end

        local key = "total_request"
        local delay, err = limiter:incoming(key, true)
        if not delay then
            ngx_logger(NGX_WARN, "req limit")
            return bdapp.exit(429, errno.REQ_TOTAL_LIMT_REJECT)
        end
        -- no delay in req_total_limit
    end

    -- req_host_limit is set to -1, so the following logic can be ignored
    local host_limit = conf.req_host_limit
    local elem = conf.exception[host]
    if elem then
        host_limit = elem.req_limit
    end

    if host_limit >= 0 then
        local burst = 50
        if host_limit < 100 then
            burst = 0
        end
        local limiter, err = limit_req.new(limit_req_store, host_limit, burst)
        if not limiter then
            return bdapp.exit(500, errno.UNKNOW_ERROR)
        end
        local delay, err = limiter:incoming(host, true)
        if not delay then
            return bdapp.exit(429, errno.REQ_HOST_LIMIT_REJECT)
        end
        if delay >= 0.001 then
            ngx.sleep(delay)
        end
    end
end
```
testing and result:

```
wrk -c 100 -d 100 -t 4 http://127.0.0.1/1.txt -H "Host: www.ortest.com"

Running 2m test @ http://127.0.0.1/1.txt
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    14.82ms   18.13ms  344.20ms   91.07%
    Req/Sec     2.15k    546.94     5.88k    73.02%
  853821 requests in 1.67m, 259.09MB read
  Non-2xx or 3xx responses: 502925
Requests/sec:   8530.88
Transfer/sec:      2.59MB
```

The 200-status QPS is (853821 - 502925) / 100 ≈ 3500.
After reading the source code of limit_req, I found the following problems:
- The `incoming` function cannot guarantee that the `last` time it reads from shared memory is the time of the most recent request.
- `ngx.time` may not be uniform across multiple worker processes, and there is no guarantee that `last_mtime` is monotonically increasing.
- `ngx.now` is only accurate to milliseconds, so I think that if rate > 1000, the limit_req limit becomes very inaccurate.
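To sanity-check the arithmetic itself, the leaky-bucket math that `incoming()` performs can be simulated in plain Lua with exact timestamps (a sketch based on my reading of `resty.limit.req`; the ×1000 scaling mirrors the module's internal representation, and `make_limiter` is just a local stand-in, not the module's API):

```lua
-- Plain-Lua simulation of the leaky-bucket math in resty.limit.req
-- (excess/burst scaled by 1000, timestamps in milliseconds).
local function make_limiter(rate, burst)
    local excess, last = 0, nil
    local rate_scaled, burst_scaled = rate * 1000, burst * 1000
    return function(now_ms)
        local e = 0
        if last then
            e = math.max(excess - rate_scaled * math.abs(now_ms - last) / 1000 + 1000, 0)
        end
        if e > burst_scaled then
            return false  -- rejected; state is not committed
        end
        excess, last = e, now_ms
        return true
    end
end

-- Offer 3000 evenly spaced requests over one second to a
-- rate = 500, burst = 0 limiter.
local lim = make_limiter(500, 0)
local accepted = 0
for t = 0, 2999 do
    if lim(t * 1000 / 3000) then
        accepted = accepted + 1
    end
end
print(accepted)  -- 500: with exact timestamps the rate is enforced precisely
```

With exact, monotonically increasing timestamps the math admits exactly the configured rate, which suggests the over-admission comes from the timing issues listed above (stale `last` reads, non-uniform clocks, millisecond resolution) rather than from the bucket arithmetic itself.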
First, I used a lock to solve the first problem. Testing result: with the limit set to 500, the 200-status QPS is 1500.
Second, I called `ngx.update_time` before calling `incoming` to further shrink this error. Testing result: with the limit set to 500, the 200-status QPS is 466.
Last, I set the limit to 2000; the 200-status QPS is 970, and no matter how high I set the limit afterwards, the 200-status QPS never exceeds 1000. (Without any restrictions, 10,000 QPS can be reached.)
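For reference, the lock + `ngx.update_time` workaround described above looks roughly like this (a sketch; it assumes a `my_locks` lua_shared_dict for `resty.lock`, and the surrounding names such as `bdapp.exit` follow my module above):

```lua
local resty_lock = require "resty.lock"

-- serialize limiter updates across workers (assumes
-- "lua_shared_dict my_locks 100k;" in nginx.conf)
local lock, err = resty_lock:new("my_locks")
if not lock then
    return bdapp.exit(500, errno.UNKNOW_ERROR)
end

local elapsed, err = lock:lock(key)
if not elapsed then
    return bdapp.exit(500, errno.UNKNOW_ERROR)
end

-- refresh the cached time so incoming() sees a fresh timestamp
ngx.update_time()
local delay, err = limiter:incoming(key, true)

local ok, unlock_err = lock:unlock()
if not delay then
    return bdapp.exit(429, errno.REQ_TOTAL_LIMT_REJECT)
end
```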
I am puzzled by your code snippets. Why don't you delay or `uncommit()` requests after the first call to `limiter:incoming(key, true)`? I think the problem may be here. You fire an incoming event and calculate the delay, but you do nothing after that. I think this may be what makes the timing disordered.
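What I mean is something along these lines (a sketch; whether to sleep or to `uncommit()` depends on whether you intend to actually delay the request):

```lua
local delay, err = limiter:incoming(key, true)
if not delay then
    return bdapp.exit(429, errno.REQ_TOTAL_LIMT_REJECT)
end

if delay >= 0.001 then
    -- either honor the computed delay ...
    ngx.sleep(delay)
    -- ... or, if you will forward the request immediately anyway,
    -- undo the commit so the shared state stays consistent:
    -- local ok, uncommit_err = limiter:uncommit(key)
end
```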
By the way, I tried the limit_req configuration below, and the request limit seems to work the way it should (`modules_limit_req` is my encapsulation of the `resty.limit.req` module):
```lua
local rate, burst = 1000, 500
local lim, err = modules_limit_req:new({
    shdict_name = "my_limit_req_store",
    rate = rate,
    burst = burst,
    key = ngx.var.http_host
})
if not lim then
    ngx.log(ngx.ERR,
            "failed to instantiate a resty.limit.req object: ", err)
    return ngx.exit(500)
end

local delay, err = lim.limiter:incoming(lim.key, true)
if not delay then
    if err == "rejected" then
        ngx.log(ngx.ERR, "request exceeds the rate + burst limit: ", err)
        return ngx.exit(429)
    end
    ngx.log(ngx.ERR, "failed to limit req: ", err)
    return ngx.exit(500)
end

if delay >= 0.001 then
    -- the 2nd return value holds the number of excess requests
    -- per second for the specified key. For example, with rate = 1000,
    -- an excess of 31 means the current request rate is at
    -- 1031 req/sec for the specified key.
    ngx.log(ngx.NOTICE, "limit_req delay: ", delay)
    local excess = err
    -- the request exceeds the 1000 req/sec rate but is below
    -- 1500 req/sec (rate + burst), so we intentionally delay it
    -- here a bit to conform to the 1000 req/sec rate.
    ngx.sleep(delay)
end
```
The result of the wrk benchmarking tool shows the limit works (the QPS is around 1000):
```
  4 threads and 12 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    11.95ms    1.11ms   23.16ms   73.14%
    Req/Sec    250.83      9.30    272.00    71.25%
  10000 requests in 10.01s, 7.92MB read
Requests/sec:    999.15
Transfer/sec:    809.81KB

Running 10s test @ https://172.29.4.58/
  8 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    16.00ms    4.29ms  215.72ms   97.03%
    Req/Sec    125.78     11.05    240.00    88.43%
  10080 requests in 10.10s, 7.98MB read
Requests/sec:    998.00
Transfer/sec:    808.88KB
```
And the log shows the delays:

```
2020/03/19 15:31:13 [notice] 13661#0: *391740 [lua] access_control.lua:195: main(): limit_req delay: 0.003,
2020/03/19 15:31:13 [notice] 13661#0: *391731 [lua] access_control.lua:195: main(): limit_req delay: 0.003,
2020/03/19 15:31:13 [notice] 13661#0: *391728 [lua] access_control.lua:195: main(): limit_req delay: 0.004,
2020/03/19 15:31:13 [notice] 13663#0: *391737 [lua] access_control.lua:195: main(): limit_req delay: 0.003,
```