lua-resty-upstream-healthcheck
lua entry thread aborted: runtime error: string length overflow stack traceback
Testing with curl:
```
curl http://127.0.0.1/status
curl: (52) Empty reply from server
```
and the error log shows:
```
2016/03/02 13:50:01 [error] 18864#0: *150 lua entry thread aborted: runtime error: string length overflow
stack traceback:
coroutine 0:
    [C]: in function 'get_primary_peers'
    /etc/nginx/lualib/resty/upstream/healthcheck.lua:682: in function 'status_page'
    content_by_lua(default.conf:102):4: in function <content_by_lua(default.conf:102):1>, client: 127.0.0.1, server: xxx, request: "GET /status HTTP/1.1", host: "127.0.0.1"
```
```
cat /etc/nginx/nginx.conf

user nginx;
worker_processes 32;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
error_log /home/ceph/log/nginx/error.log;
pid /var/run/nginx.pid;
worker_rlimit_nofile 65535;

events {
    worker_connections 20000;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    vhost_traffic_status_zone;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent $request_length "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log logs/access.log main;
    access_log /home/ceph/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 30;

    upstream foo.com {
        server 127.0.0.1:801;
        server 192.168.170.1:80;
    }

    lua_shared_dict healthcheck 1m;
    lua_socket_log_errors off;

    init_worker_by_lua_block {
        local hc = require "resty.upstream.healthcheck"
        local ok, err = hc.spawn_checker({
            shm = "healthcheck",
            upstream = "foo.com",
            type = "http",
            http_req = "GET / HTTP/1.1\r\nHost: 127.0.0.1\r\n\r\n",
            interval = 2000,
            timeout = 1000,
            fall = 3,
            rise = 2,
            valid_statuses = {200, 302},
            concurrency = 10,
        })
        if not ok then
            ngx.log(ngx.ERR, "failed to spawn health checker: ", err)
            return
        end
    }

    include /etc/nginx/conf.d/*.conf;
}
```
```
cat /etc/nginx/conf.d/default.conf

server {
    listen 80 backlog=10240;
    gzip off;
    client_max_body_size 0;
    server_name xxx;

    location / {
        set $target '';
        proxy_buffering off;
        proxy_ignore_client_abort on;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://foo.com;
    }

    location = /status {
        access_log off;
        default_type text/plain;
        content_by_lua_block {
            local hc = require "resty.upstream.healthcheck"
            ngx.say("Nginx Worker PID: ", ngx.worker.pid())
            ngx.print(hc.status_page())
        }
    }
}
```
```
nginx -V
nginx version: openresty/1.9.7.3
built by gcc 4.9.2 (Debian 4.9.2-10)
built with OpenSSL 1.0.1k 8 Jan 2015
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx/nginx --with-cc-opt=-O2 --add-module=../ngx_devel_kit-0.2.19 --add-module=../echo-nginx-module-0.58 --add-module=../xss-nginx-module-0.05 --add-module=../ngx_coolkit-0.2rc3 --add-module=../set-misc-nginx-module-0.29 --add-module=../form-input-nginx-module-0.11 --add-module=../encrypted-session-nginx-module-0.04 --add-module=../srcache-nginx-module-0.30 --add-module=../ngx_lua-0.10.0 --add-module=../ngx_lua_upstream-0.04 --add-module=../headers-more-nginx-module-0.29 --add-module=../array-var-nginx-module-0.04 --add-module=../memc-nginx-module-0.16 --add-module=../redis2-nginx-module-0.12 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.14 --add-module=../rds-csv-nginx-module-0.07 --with-ld-opt='-Wl,-rpath,/etc/nginx/luajit/lib -Wl,-rpath,/etc/nginx/luajit/lib' --add-module=./openresty-1.9.7.3/bundle/nginx-module-vts --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_stub_status_module --with-http_ssl_module
```
@diluga This looks weird. Can you try reproducing the issue without the 3rd-party module nginx-module-vts? My hunch is that this module is the culprit.
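One way to narrow this down further is to bypass healthcheck.lua entirely and query the underlying lua-upstream-nginx-module API from a throwaway test location; if the same overflow appears there, the problem sits below this library. A minimal sketch, using the foo.com upstream from the config above (the /peers location name is only an example, not part of the original config):

```
-- Sketch only: bypass healthcheck.lua and call lua-upstream directly.
-- Put this in the content_by_lua_block of a throwaway location such as /peers.
local upstream = require "ngx.upstream"

local peers, err = upstream.get_primary_peers("foo.com")
if not peers then
    ngx.say("get_primary_peers failed: ", err)
    return
end

for _, peer in ipairs(peers) do
    -- peer.down is only set when the peer has been marked down
    ngx.say(peer.name, peer.down and " DOWN" or " up")
end
```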
Yes, I found my problem: I have configured another upstream named cnzone_write like this:

```
upstream cnzone_write {
    zone http_backend 64k;
    server 127.0.0.1:801 weight=3;
    .......
```

If I remove `zone http_backend 64k;` the problem is fixed, but without this directive nginx-module-vts will not work! A probe over all upstreams like the sketch below can confirm which block is the trigger.
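A rough sketch of such a probe, assuming the overflow surfaces as a Lua error that pcall can catch (the walk over get_upstreams() is my addition, not something from the original report):

```
-- Rough sketch: probe each configured upstream separately under pcall,
-- so the block carrying the "zone" directive can be identified from the
-- output instead of the whole request dying on the first overflow.
local upstream = require "ngx.upstream"

for _, name in ipairs(upstream.get_upstreams()) do
    local ok, peers, err = pcall(upstream.get_primary_peers, name)
    if not ok then
        -- when pcall fails, its second return value holds the error message
        ngx.say(name, ": error: ", peers)
    elseif not peers then
        ngx.say(name, ": ", err)
    else
        ngx.say(name, ": ", #peers, " primary peer(s)")
    end
end
```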
@diluga Then it looks like a problem in nginx-module-vts, which is not maintained by us at all, so we cannot really help here.
@diluga Sorry, my bad. The `zone` directive itself is part of the standard nginx upstream module. Will you verify whether you can reproduce the issue with the `zone` directive but without nginx-module-vts? Thanks!
Yes, I have also encountered this problem: when I enable the `zone` directive, lua-resty-upstream-healthcheck stops working. See the following:
nginx.conf

```
upstream backends {
    #zone zone_for_backends 1m;
    #server 127.0.0.1:8081 weight=4 fail_timeout=20 max_fails=25;
    server 127.0.0.1:8081 fail_timeout=5 max_fails=50;
    server 127.0.0.1:8082 fail_timeout=5 max_fails=50;
    server 192.168.33.20:8080 fail_timeout=5 max_fails=50;
    server 192.168.33.21:8080 fail_timeout=5 max_fails=50;
}

lua_shared_dict healthcheck 1m;
lua_socket_log_errors off;
init_worker_by_lua_file /Data/code/test/lua/upstream/init_worker.lua;

server {
    listen 80;
    server_name up.test.com;
    root /Data/code/test/lua/upstream/;
    access_log /Data/logs/nginx/access/up.test.com.log;
    error_log /Data/logs/nginx/error/up.test.com.log;
    lua_code_cache off;

    location = /status {
        #access_log off;
        #allow 127.0.0.1;
        #deny all;
        default_type text/plain;
        content_by_lua_block {
            local hc = require "resty.upstream.healthcheck"
            ngx.say("Nginx Worker PID:", ngx.worker.pid())
            ngx.print(hc.status_page())
        }
    }

    location / {
        proxy_pass http://backends;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
```
➜ curl up.qing.com/status
Nginx Worker PID:66391
Upstream backends (NO checkers)
    Primary Peers
        127.0.0.1:8081 up
        127.0.0.1:8082 up
        192.168.33.20:8080 up
        192.168.33.21:8080 up
    Backup Peers
```
When I enable the `zone` directive:

```
➜ curl up.qing.com/status
Upstream backends (NO checkers)
    Primary Peers
        127.0.0.1:8081 up
        up
        up
        up
    Backup Peers
```
I very much hope this gets some attention, thanks!
@gaoshangs As I've already commented above in this issue, the `zone` directive is relatively new and is not supported by this library yet.
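Until that changes, one possible stop-gap (a sketch only, assuming the failure really does surface as a Lua error thrown out of status_page(), as the traceback at the top of this issue suggests) is to wrap the call in pcall so /status at least reports the error instead of the connection being dropped:

```
-- Defensive variant of the /status handler above; sketch only.
-- It does not make the checkers work with zone-backed upstreams,
-- it only keeps the status endpoint responsive.
local hc = require "resty.upstream.healthcheck"

ngx.say("Nginx Worker PID: ", ngx.worker.pid())

local ok, page = pcall(hc.status_page)
if not ok then
    -- 'page' holds the error message when pcall fails
    ngx.say("status_page() failed: ", page)
    return
end
ngx.print(page)
```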
Got it, thank you.
@agentzh, it's been a couple years now. Has there been any change here?
This issue is biting me as well. Any movement on it? Thank you very much!