
concurrent connections counting

Open · junos opened this issue 6 years ago · 4 comments

Here are the steps to reproduce the issue I am seeing:

  1. I installed OpenResty version 1.11.2.5 on an EC2 instance at <EC2_IP>.
  2. Based on one of your examples here https://github.com/openresty/lua-resty-limit-traffic#synopsis, I made very small changes; please see my actual config below.
  3. To test the concurrent connections counting, I installed the npm package artillery (https://www.npmjs.com/package/artillery) on my local machine with `npm install -g artillery`.
  4. Then I tested the concurrent count by running this command on my local machine: `artillery quick --count 5 -n 2 http://<EC2_IP>/`
  5. I run step #4, wait 10 seconds (or even longer; it does not seem to matter), and repeat step #4.

I expect the concurrent connections count to return to 1 after I stop making concurrent requests, but the count keeps increasing and is never reset.
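
To watch the counter independently of the log line in my config below, I figure one could also add a small status location that simply reads the key back from the shared dict (just a rough sketch, not something I have deployed; the /conn_status name is made up, and it assumes the same my_limit_conn_store dict and binary_remote_addr key as in the config):

location = /conn_status {
    content_by_lua_block {
        -- read the raw per-client counter straight from the shared dict
        -- that resty.limit.conn writes to
        local key = ngx.var.binary_remote_addr
        local conn = ngx.shared.my_limit_conn_store:get(key)
        ngx.say("concurrent connections = ", conn or 0)
    }
}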

My question is: did I do something wrong, or is this an issue in lua-resty-limit-traffic?

This is my config:

# demonstrate the usage of the resty.limit.conn module (alone!)

lua_shared_dict my_limit_conn_store 100m;

server {
    location / {
        access_by_lua_block {
            -- well, we could put the require() and new() calls in our own Lua
            -- modules to save overhead. here we put them below just for
            -- convenience.

            local limit_conn = require "resty.limit.conn"

            -- limit the requests under 200 concurrent requests (normally just
            -- incoming connections unless protocols like SPDY are used) with
            -- a burst of 100 extra concurrent requests, that is, we delay
            -- requests under 300 concurrent connections and above 200
            -- connections, and reject any new requests exceeding 300
            -- connections.
            -- also, we assume a default request time of 0.5 sec, which can be
            -- dynamically adjusted by the leaving() call in log_by_lua below.
            local lim, err = limit_conn.new("my_limit_conn_store", 200, 100, 0.5)
            if not lim then
                ngx.log(ngx.ERR,
                        "failed to instantiate a resty.limit.conn object: ", err)
                return ngx.exit(500)
            end

            -- the following call must be per-request.
            -- here we use the remote (IP) address as the limiting key
            local key = ngx.var.binary_remote_addr
            local delay, err = lim:incoming(key, true)
            if not delay then
                if err == "rejected" then
                    return ngx.exit(503)
                end
                ngx.log(ngx.ERR, "failed to limit req: ", err)
                return ngx.exit(500)
            end

            if lim:is_committed() then
                local ctx = ngx.ctx
                ctx.limit_conn = lim
                ctx.limit_conn_key = key
                ctx.limit_conn_delay = delay
            end

            -- the 2nd return value holds the current concurrency level
            -- for the specified key.
            local conn = err

            if delay >= 0.001 then
                -- the request is exceeding the 200-connection limit but is
                -- below 300 connections, so
                -- we intentionally delay it here a bit to conform to the
                -- 200 connection limit.
                -- ngx.log(ngx.WARN, "delaying")
                ngx.sleep(delay)
            end
        }

        # content handler goes here. if it is content_by_lua, then you can
        # merge the Lua code above in access_by_lua into your
        # content_by_lua's Lua handler to save a little bit of CPU time.

        log_by_lua_block {
            local ctx = ngx.ctx
            local lim = ctx.limit_conn
            if lim then
                -- if you are using an upstream module in the content phase,
                -- then you probably want to use $upstream_response_time
                -- instead of ($request_time - ctx.limit_conn_delay) below.
                local latency = tonumber(ngx.var.request_time) - ctx.limit_conn_delay
                local key = ctx.limit_conn_key
                assert(key)
                local conn, err = lim:leaving(key, latency)
                if not conn then
                    ngx.log(ngx.ERR,
                            "failed to record the connection leaving ",
                            "request: ", err)
                    return
                end
            end
            -- My code to check the concurrent connections count
            if ctx.limit_conn_key then
                ngx.log(ngx.ERR, 'concurrent connections =',
                        ngx.shared.my_limit_conn_store:get(ctx.limit_conn_key))
            end
        }
    }
}

Here is part of the result:

............
2017/10/31 20:28:52 [error] 589#0: *25 [lua] log_by_lua(default:86):20: concurrent connections =49 while logging request, client: 10.1.254.13, server: , request: "GET / HTTP/1.1", host: "10.1.17.130"
2017/10/31 20:28:52 [error] 589#0: *25 [lua] log_by_lua(default:86):20: concurrent connections =50 while logging request, client: 10.1.254.13, server: , request: "GET / HTTP/1.1", host: "10.1.17.130"
............

junos · Oct 31 '17 20:10

@junos Your example is neither self-contained nor minimal, so I cannot really run your case directly on our side and try to reproduce the problem.

My hunch is that your nginx config has internal redirects that bypass the log_by_lua* handler, which decrements the counter by calling lim:leaving(). This is a common cause of an ever-incrementing counter.
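
For illustration, here is a hypothetical minimal layout that produces exactly this kind of leak (the /old and @fallback names are made up and it reuses your my_limit_conn_store dict; it is not meant to reflect your actual config):

location /old {
    access_by_lua_block {
        local limit_conn = require "resty.limit.conn"
        local lim, err = limit_conn.new("my_limit_conn_store", 200, 100, 0.5)
        if not lim then
            return ngx.exit(500)
        end
        -- this increments the shared-dict counter for the client
        local delay, err2 = lim:incoming(ngx.var.binary_remote_addr, true)
        if not delay then
            return ngx.exit(err2 == "rejected" and 503 or 500)
        end
    }

    # the file never exists, so this always triggers an internal redirect
    # to @fallback AFTER the access phase has incremented the counter
    try_files /nonexistent @fallback;

    log_by_lua_block {
        -- lim:leaving() would normally go here, but after the internal
        -- redirect the log phase runs with @fallback's configuration,
        -- so this handler never fires for the redirected request
        -- (ngx.ctx is also reset across internal redirects)
    }
}

location @fallback {
    # no log_by_lua_block here, so the counter is never decremented
    return 200 "hello\n";
}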

agentzh · Nov 06 '17 17:11

@junos BTW, you can (temporarily) enable the nginx debugging logs in your OpenResty to confirm the internal redirects or other important details of your requests. If you are using the binary pre-built packages provided by OpenResty, then you can simply switch to the openresty-debug package temporarily and configure `error_log logs/error.log debug;` in your nginx.conf (for all the error_log directives there). If you are compiling OpenResty from source, then you should pass the `--with-debug` option to the `./configure` command and likewise use `error_log logs/error.log debug;` consistently in your nginx.conf.
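
For example, the relevant line near the top of nginx.conf would look like this (the log path is just a placeholder; debug-level output only appears when the binary itself was built with --with-debug, e.g. the openresty-debug package):

error_log logs/error.log debug;

The debug log should then record any internal redirects along with the other details of how each request is processed.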

agentzh · Nov 06 '17 17:11

@agentzh, I enabled the debug log and didn't find any internal redirects. I have no clue why this happens. On the other hand, in our production code the concurrent counting works well at moderate concurrency levels, but when the concurrency goes really high (say > 1000), the count keeps growing. Is that possible? Regarding Out-of-Sync Counter Prevention https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/conn.md#out-of-sync-counter-prevention, is there any way to reset the counter when this happens?
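
In case it matters, the blunt workaround I have in mind looks like this (just a sketch of what I would try, not an API of resty.limit.conn; it assumes the counter is stored directly under the key in the shared dict, and it is only safe when no requests for that key are in flight, since their later leaving() calls would push the value negative):

-- run somewhere outside the normal request path, e.g. a private admin location
local dict = ngx.shared.my_limit_conn_store
local key = ngx.var.binary_remote_addr   -- the same key the limiter uses
local ok, err = dict:set(key, 0)         -- force the counter back to zero
if not ok then
    ngx.log(ngx.ERR, "failed to reset the counter: ", err)
end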

junos · Nov 06 '17 21:11

@junos You should check if your nginx workers ever crashed. Crashes must always be fixed.

agentzh · Nov 06 '17 23:11