lua-resty-redis
Redis cluster client
This is implemented as a wrapper over the existing resty-redis client, adding cluster functionality. All basic functionality works except pipelining; pipelining support is in progress.
- Each cluster is identified by a "cluster id".
- The cluster representation, keyed by "cluster id", is shared across all requests within a worker process.
- Supports multiple Redis clusters.
- Connection pooling, as present in the existing resty-redis client, is available.
- Performance is nearly the same as the original resty-redis under normal conditions.
ref: https://github.com/antirez/redis-rb-cluster
Example:
```lua
local redis_cluster = require("redis_cluster")

local cluster_id = "test_cluster"

-- Subset of nodes within the cluster
local startup_nodes = {
    {"127.0.0.1", 7004},
    {"127.0.0.1", 7000},
    {"127.0.0.1", 7001}
}

local opt = {
    timeout = 100,
    keepalive_size = 100,
    keepalive_duration = 60000
}

local rc = redis_cluster:new(cluster_id, startup_nodes, opt)
rc:initialize()

local ok, err = rc:set("key1", "val1")
if not ok then
    ngx.say("Unable to set key1: ", err)
else
    ngx.say("key1 set result: ", ok)
end

local res, err = rc:get("key1")
if not res then
    ngx.say("Failed to get key1: ", err)
else
    ngx.say("key1:", res)
end

-- (same as above, slightly faster)
res, err = rc:send_cluster_command("get", "key1")
if not res then
    ngx.say("Failed to get key1 with send_cluster_command: ", err)
else
    ngx.say("key1 using send_cluster_command:", res)
end
```
Open Issues:
- During initialization (once per worker) and cluster reconfiguration, all requests that arrive within the data-refresh window (which should be quite small, a few ms) will try to update the cluster representation. This does not appear to affect correctness, but the race condition needs to be removed, as it would cause a small latency spike during cluster reconfiguration. I would like opinions on this one.
- Pipeline "asking" request during cluster reconfiguration.
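One common way to remove the thundering-herd refresh described in the first open issue is to serialize the rebuild behind lua-resty-lock, so only one request rebuilds the slot map while concurrent requests wait and then reuse the result. A hedged sketch, assuming an `nginx.conf` with `lua_shared_dict locks 1m;` and hypothetical `slot_cache_is_fresh()` / `refresh_slot_cache()` helpers (neither is part of this patch):

```lua
-- Sketch only: serialize cluster-map refresh with lua-resty-lock.
-- Assumes nginx.conf declares: lua_shared_dict locks 1m;
-- slot_cache_is_fresh() and refresh_slot_cache() are hypothetical helpers.
local resty_lock = require "resty.lock"

local function refresh_cluster_map(cluster_id)
    local lock, err = resty_lock:new("locks")
    if not lock then
        return nil, "failed to create lock: " .. (err or "unknown")
    end

    local elapsed, err = lock:lock("cluster_refresh:" .. cluster_id)
    if not elapsed then
        return nil, "failed to acquire lock: " .. (err or "unknown")
    end

    -- Another request may have refreshed the map while we waited,
    -- so re-check before doing the expensive rebuild.
    if not slot_cache_is_fresh(cluster_id) then
        refresh_slot_cache(cluster_id)
    end

    local ok, err = lock:unlock()
    if not ok then
        return nil, "failed to unlock: " .. (err or "unknown")
    end
    return true
end
```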
Please review and let me know if you have any comments.
@h4lflife Thank you for the contribution! I really appreciate it :) I'll look into your patch when I have some spare time. Been busy with $work lately, sorry :) Thanks again!
It's an old pull request, but I have a question, because I'm not sure I understand your code correctly.
You have a hard-coded limit of max. 20 clusters and 500 nodes in each cluster, right? Or is Lua allocating space for 20 and 500 entries and then allocating a bigger memory space if we exceed that?
If there are such limits, I think one instance of your code should handle one cluster, and the nodes should be stored in a dynamically sized array.
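For reference, plain Lua tables already grow on demand, so a dynamically sized node list needs no preallocated bound. A minimal sketch (the `nodes` table and its field names here are illustrative, not taken from the patch):

```lua
-- Nodes stored in a plain Lua table, which resizes itself on demand;
-- no fixed 500-entry limit is needed.
local nodes = {}

local function add_node(host, port)
    -- Appending past the current length grows the table automatically.
    nodes[#nodes + 1] = { host = host, port = port }
end

add_node("127.0.0.1", 7000)
add_node("127.0.0.1", 7001)

print(#nodes)  -- 2
```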
@agentzh @h4lflife Is there any current activity on this pull request? We need cluster support too and would love to see it move forward.
@zhduan My hunch is that it's better implemented in a separate wrapper library, in the same spirit as @pintsized's lua-resty-redis-connector
Thanks for implementing it, but I have a question: is there a connection pool in your code?
Does it support password access, like red:auth("foobared")?
@agentzh How do I configure password authentication? (originally posted in Chinese)
@wjs57y Please, no Chinese here. This place is considered English only. It is especially rude to reply to an unrelated pure English issue thread. If you really want to use Chinese, please join the openresty (Chinese) mailing list instead. Please see https://openresty.org/en/community.html Thanks for your cooperation.
Here is my error message: "Uninitialized cluster". I followed the example; what's wrong? Has anyone else hit the same problem?