lua-resty-redis

lua tcp error

Open RyouZhang opened this issue 10 years ago • 11 comments

13561#0: *31675425 lua tcp socket read timed out, client: 100.97.177.53, server: localhost

[Translated from Chinese:] Do the lua redis connections use a connection pool? And if the number of concurrent requests is very high, is there any queueing mechanism?

RyouZhang avatar Oct 26 '15 07:10 RyouZhang

@RyouZhang Please, no Chinese here. This place is considered English only. If you really want to use Chinese, please join the openresty (Chinese) mailing list instead. Please see https://openresty.org/#Community for more details.

Regarding your questions,

  1. lua-resty-redis does enable connection pooling if you call the set_keepalive method every time you finish using the current redis object (always check the return values of this method call so that you can handle any errors properly). See the official documentation for more details. Note that the connection pool is not used by default.
  2. There's no automatic queueing support based on the size of the connection pool though this is a planned feature that will get implemented soon. In the meantime, you can consider using the lua-resty-limit-traffic library to queue your backend requests before reaching lua-resty-redis.
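For reference, a minimal sketch of the pooled usage pattern described in point 1 (the host, timeout, and pool-size values here are illustrative, and the surrounding handler structure is assumed):

```lua
-- OpenResty content handler sketch; all numeric values are examples only.
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(1000)  -- 1 s, applies to connect/send/read operations

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
    return ngx.exit(500)
end

local res, err = red:get("some_key")
if err then
    ngx.log(ngx.ERR, "failed to GET: ", err)
    return ngx.exit(500)
end

-- Put the connection back into the pool: 10 s max idle time,
-- at most 100 pooled connections per nginx worker.
local ok, err = red:set_keepalive(10000, 100)
if not ok then
    ngx.log(ngx.ERR, "failed to set keepalive: ", err)
end
```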

agentzh avatar Oct 26 '15 11:10 agentzh

Thanks. Another question: why does the lua socket get_reused_times method always return nil? I do call the set_keepalive method every time I finish.

RyouZhang avatar Oct 26 '15 11:10 RyouZhang

@RyouZhang Then your set_keepalive method call may always return a failure. Have you checked its return values?
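A quick way to check whether a connection actually came from the pool is to inspect get_reused_times right after connecting; a sketch (connection setup abbreviated, host and port are example values):

```lua
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "connect failed: ", err)
    return
end

-- 0 means a freshly created connection; a positive number means the
-- connection was reused from the pool; nil plus an error string means
-- the call itself failed.
local times, err = red:get_reused_times()
if times == nil then
    ngx.log(ngx.ERR, "get_reused_times failed: ", err)
else
    ngx.log(ngx.INFO, "connection reused ", times, " times")
end
```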

agentzh avatar Oct 26 '15 11:10 agentzh

Sometimes set_keepalive returns nil; I think it reached the max pool size.

RyouZhang avatar Oct 26 '15 12:10 RyouZhang

@RyouZhang You can get the string describing the error in the second return value (when the first one is nil). Let's stop guessing :)
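Capturing that error string might look like this (the keepalive arguments are example values):

```lua
local ok, err = red:set_keepalive(10000, 100)
if not ok then
    -- err is a human-readable string, e.g. "closed" when the
    -- underlying connection was already dead.
    ngx.log(ngx.ERR, "set_keepalive failed: ", err)
end
```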

agentzh avatar Oct 26 '15 12:10 agentzh

OK, you are right. It returns nil, and the error looks like this:

*1004613 lua tcp socket read timed out, client: 192.168.0.192, server: localhost, request: "GET /req HTTP/1.1", host: "192.168.0.192:8080"
2015/10/26 12:06:30 [error] 308#0: *1004613 [lua] gdm.lua:114: SetKeepalive(): closed, client: 192.168.0.192, server: localhost, request: "GET /req HTTP/1.1", host: "192.168.0.192:8080"

RyouZhang avatar Oct 26 '15 12:10 RyouZhang

@RyouZhang Okay, so your redis connection is already closed right before set_keepalive is called (for example, due to an earlier explicit close call, or a previous method call hitting a fatal error such as a timeout).

agentzh avatar Oct 26 '15 12:10 agentzh

@RyouZhang Maybe you are just using too small a value for the timeout threshold of your redis connections?
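Raising the threshold is a one-line change on the redis object before issuing commands; for instance (2000 ms is just an example value):

```lua
-- Applies to subsequent connect/send/read operations on this object;
-- the value is in milliseconds.
red:set_timeout(2000)
```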

agentzh avatar Oct 26 '15 12:10 agentzh

As you said, there's no automatic queueing support based on the size of the connection pool. I want to know: will it create a new connection when the pool's cache is empty? I mean ngx_tcp_sock:connect in general, not only in redis.

sylarXu avatar Mar 22 '16 09:03 sylarXu

@sylarXu Yes.

agentzh avatar Mar 22 '16 19:03 agentzh

@sylarXu Same as the standard connection pool in the nginx core:

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
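For comparison, the nginx-core pool is enabled with the keepalive directive inside an upstream block; a minimal sketch (upstream name and connection count are illustrative):

```nginx
upstream redis_backend {
    server 127.0.0.1:6379;

    # keep up to 32 idle connections to this upstream per worker process
    keepalive 32;
}
```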

agentzh avatar Mar 22 '16 19:03 agentzh