
Manage Nginx upstreams in pure Lua.

10 lua-resty-checkups issues, sorted by most recently updated

Hi, I am developing a CDN API system (a proxy server) and I need to add a load-balancer option with heartbeat and ... it's important for me to have runtime...
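For reference, a minimal sketch of how this kind of setup usually looks with lua-resty-checkups: a config table with a heartbeat interval, initialised once per worker, then used at request time. The upstream key `api_backend`, the hosts, and the field values are illustrative, not taken from the issue; double-check field names against the project's README.

```lua
-- Sketch: heartbeat-driven config, worker init, and request-time peer selection.
local checkups = require "resty.checkups.api"

local config = {
    global = {
        checkup_timer_interval = 5,   -- heartbeat period in seconds (assumed field name)
    },
    api_backend = {                   -- illustrative upstream key
        timeout = 2,
        cluster = {
            {
                servers = {
                    { host = "10.0.0.1", port = 8080 },
                    { host = "10.0.0.2", port = 8080 },
                },
            },
        },
    },
}

-- in init_worker_by_lua*: start the heartbeat checker
checkups.prepare_checker(config)
checkups.create_checker()

-- at request time: run the callback against a peer the checker considers healthy
local ok, err = checkups.ready_ok("api_backend", function(host, port)
    -- connect/proxy to host:port here; return a truthy value on success
    return true
end)
```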

![image](https://user-images.githubusercontent.com/26765476/50328095-a49d8f80-052c-11e9-9a11-c70f3ac6cedb.png) As shown in the image, after countless attempts I still cannot resolve this: 2018/12/21 14:24:30 [error] 58540#0: *280 lua entry thread aborted: runtime error: /usr/local/lib/lua/resty/checkups/api.lua:65: attempt to index field 'checkups' (a nil value) stack traceback: coroutine 0: /usr/local/lib/lua/resty/checkups/api.lua: in function 'ready_ok'
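This error usually means the module state was never initialised before `ready_ok` was called. A minimal sketch of the initialisation the README describes, assuming nginx.conf already declares the shared dicts checkups needs (e.g. `lua_shared_dict state/mutex/locks/config`) and runs this file from `init_worker_by_lua_file`; the `config` module name is a placeholder.

```lua
-- init_worker.lua: make sure checkups is initialised before any ready_ok call.
local checkups = require "resty.checkups.api"
local config   = require "config"   -- your checkups config table (placeholder name)

checkups.prepare_checker(config)    -- load the config into the module/shdict state
checkups.create_checker()           -- start the heartbeat timer
```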

```
upstream backend {
    server 0.0.0.0;
    balancer_by_lua_block {
        require "wario.balancer"
    }
}
```

As in the code block above, the update and delete APIs provided by lua-resty-checkups seem to require that a backend is declared in nginx.conf beforehand so the upstream servers can then be changed dynamically. Could this directive itself also be created through Lua? I have tried the lua-resty-checkups API and it does not seem possible, though I may be using it the wrong way. Hoping for a reply, thanks.
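For context, a sketch of the pattern this question is pointing at: keep a placeholder `upstream` block in nginx.conf and choose the actual peer at runtime in Lua. The module path `wario.balancer` is kept from the issue; the use of `select_peer` and the cluster key are assumptions, not a confirmed checkups recipe.

```lua
-- wario/balancer.lua: balancer_by_lua_block handler that picks the peer at
-- runtime instead of hard-coding servers in nginx.conf (sketch only).
local balancer = require "ngx.balancer"
local checkups = require "resty.checkups.api"

local peer, err = checkups.select_peer("api_backend")   -- assumed API usage
if not peer then
    ngx.log(ngx.ERR, "no peer available: ", err)
    return ngx.exit(500)
end

local ok, set_err = balancer.set_current_peer(peer.host, peer.port)
if not ok then
    ngx.log(ngx.ERR, "failed to set the current peer: ", set_err)
    return ngx.exit(500)
end
```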

This is mainly used for dynamic selection during balancing; at the moment several domains point to the same backend, and the same upstream has to be configured repeatedly, once per domain.
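One workaround, sketched under the assumption that the domains only differ in their hostname: map every hostname to a single checkups cluster key so the upstream itself is defined once. The table contents and the key `shared_backend` are illustrative.

```lua
-- Sketch: several domains sharing one checkups upstream definition.
local checkups = require "resty.checkups.api"

local host_to_upstream = {
    ["a.example.com"] = "shared_backend",
    ["b.example.com"] = "shared_backend",
    ["c.example.com"] = "shared_backend",
}

local skey = host_to_upstream[ngx.var.host] or "shared_backend"

local ok, err = checkups.ready_ok(skey, function(host, port)
    -- proxy the request to host:port here
    return true
end)
```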

Any plan to register this repo to [OPM - OpenResty Package Manager](https://opm.openresty.org/)?

I did a test and found requests are distributed unevenly with the consistent_hash method. The upstream test has two servers: upstream test { server 192.168.46.111:80; server 192.168.46.110:80; } The hash argument is cid. I used 10w (100,000)...
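One way to quantify the skew independently of checkups is to hash the same cid values offline and count the buckets. A sketch runnable with the `resty` CLI, using `ngx.crc32_long` modulo 2 purely as an example hash; checkups' actual consistent-hash ring will distribute differently.

```lua
-- hash_dist.lua: run with `resty hash_dist.lua`.
-- Counts how 100k synthetic cid values fall onto two nodes with a plain
-- crc32-mod hash; illustrative only, not checkups' algorithm.
local counts = { 0, 0 }
local n = 100000

for i = 1, n do
    local cid = "cid-" .. i
    local bucket = (ngx.crc32_long(cid) % 2) + 1
    counts[bucket] = counts[bucket] + 1
end

print(("node1: %d (%.1f%%)  node2: %d (%.1f%%)")
    :format(counts[1], counts[1] / n * 100,
            counts[2], counts[2] / n * 100))
```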

hello: I read through the whole codebase and found that each node's health-check status is only stored in the state dict. Apart from the get_status API, which exposes it, the status does not seem to be used when forwarding traffic. Am I missing something? try_cluster does not appear to consult the health status when selecting a node either. Thanks.
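For anyone comparing the recorded status with what try_cluster actually picks, a small sketch that dumps the checker's view from a content handler; `get_status` is the API the issue itself refers to, but the shape of its return value here is only an assumption.

```lua
-- status.lua: dump whatever the heartbeat has recorded (sketch).
local cjson    = require "cjson.safe"
local checkups = require "resty.checkups.api"

local status = checkups.get_status()
ngx.header["Content-Type"] = "application/json"
ngx.say(cjson.encode(status))
```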

hello, two questions please: 1. Why is the check timer created with a lock instead of using ngx.worker.id to create a single unique timer? Is there a particular reason for this? 2. In the lock-based approach (the current one), why is a timeout (60s by default) set when base.CHECKUP_TIMER_KEY is written into the mutex shdict? Thanks.
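For comparison, a sketch of the alternative the question suggests: start the heartbeat timer only in worker 0. This is not how checkups does it; the interval and the callback body are placeholders. A plausible reading of the current design is that the lock plus the expiring CHECKUP_TIMER_KEY lets another worker take over the timer if its owner dies, which a fixed worker-0 scheme does not give you for free.

```lua
-- Sketch: one heartbeat timer per nginx instance, pinned to worker 0.
local function heartbeat(premature)
    if premature then
        return
    end
    -- run the health checks here (placeholder)
end

local function init_worker()
    if ngx.worker.id() ~= 0 then
        return
    end
    local ok, err = ngx.timer.every(5, heartbeat)
    if not ok then
        ngx.log(ngx.ERR, "failed to create heartbeat timer: ", err)
    end
end

return { init_worker = init_worker }
```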