Manual Pipeline Rueidis vs Pipeline go-redis v9
Hello! I have a consumer that inserts data into Redis. I run a Redis Cluster and previously used go-redis as the Redis driver. I am looking for other options to get more performance, since the consumer needs to process each message as soon as possible, so I am trying rueidis.
After deploying and load testing, I noticed some latency differences between go-redis (left) and rueidis (right).
I use these options:
clientOpts := rueidis.ClientOption{
	InitAddress:       .....,
	RingScaleEachConn: 10,
	ConnWriteTimeout:  1000 * time.Second,
	ShuffleInit:       true,
	PipelineMultiplex: 4,
}
The consumer sends a maximum of 10,000 SET commands per pipeline. Is there anything wrong with my rueidis implementation of manual pipelining with .DoMulti()? Or is there another way to make it perform like the benchmark? Thank you
The consumer sends a maximum of 10,000 SET commands per pipeline.
Do you mean 10,000 or just 10 commands at once? 10,000 is quite a lot. You may try increasing the ReadBufferEachConn and WriteBufferEachConn options.
PipelineMultiplex: 4
This will result in 2^4=16 connections to each of your redis nodes. If you have 3 nodes in the cluster, then there will be 48 connections. This may be too many connections if you don't have enough CPU cores.
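For reference, both of those knobs live on rueidis.ClientOption. A minimal sketch of what the adjustments could look like; the address and the 1 MiB buffer sizes are placeholders, not tested recommendations:
clientOpts := rueidis.ClientOption{
	InitAddress:         []string{"redis-cluster:6379"}, // placeholder address
	ReadBufferEachConn:  1 << 20,                        // per-connection read buffer, 1 MiB (assumed value)
	WriteBufferEachConn: 1 << 20,                        // per-connection write buffer, 1 MiB (assumed value)
	// PipelineMultiplex: n opens 2^n connections per node, so 4 means 16 per node.
}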
Could you give me more details on the hardware spec of your client and redis cluster?
Do you mean 10,000 or just 10 commands at once? 10,000 is quite a lot. You may try increasing the ReadBufferEachConn and WriteBufferEachConn options.
I mean there are 10,000 commands for one pub/sub message at once. What value should I use for the ReadBufferEachConn and WriteBufferEachConn options?
Could you give me more details on the hardware spec of your client and redis cluster?
The Redis cluster has 6 nodes (3 masters and 3 slave replicas). Each node has this hardware spec:
- 4 vCPU @2.2GHz
- 32 GB RAM
My consumer service (Redis client) runs as Kubernetes pods (min 3, max 10). Each pod's spec:
- CPU -> requested = 0,5; limit = 1,0
- RAM -> requested = 1 GB; limit = 2 GB
What is the best config I could use?
What value should I use for the ReadBufferEachConn and WriteBufferEachConn options?
It depends on how large your 10_000 commands are, including command length, key length, and value length.
Since 10_000 commands are too many, you may consider using client.Dedicate() to avoid head of line blocking in each pipeline:
import (
	"context"
	"sync"

	"github.com/redis/rueidis"
)

var pool = sync.Pool{
	New: func() interface{} {
		// A reusable, pre-sized command buffer shared by all goroutines.
		commands := make(rueidis.Commands, 100000)
		return &commands
	},
}

func do(client rueidis.Client, key string) error {
	cmds := pool.Get().(*rueidis.Commands)
	defer pool.Put(cmds)
	cc, cancel := client.Dedicate() // occupy one connection for this pipeline
	defer cancel()
	for i := 0; i < len(*cmds); i++ {
		(*cmds)[i] = cc.B().Set().Key(key).Value(key).ExSeconds(1).Build()
	}
	defer clear(*cmds) // zero the slice before returning it to the pool (Go 1.21+)
	for _, resp := range cc.DoMulti(context.Background(), *cmds...) {
		if err := resp.Error(); err != nil {
			return err
		}
	}
	return nil
}
CPU -> requested = 0,5; limit = 1,0
The limit of 1 CPU may be too tight for your 16*6 = 96 connections. I would recommend leaving PipelineMultiplex at its default and setting limit = 2,0 or above.
It depends on how large your 10_000 commands are, including command length, key length, and value length.
When doing a SET command, one command will look something like this:
SET {QWER:QW12}:234 208300|62800|21300|21300|1725062400|4700|1100|0|10|3|1|QWER|234|QW12|M|0|0|0|1725073001 EX 309798
SET {QWER:QW12}:234 208300|62800|21300|21300|1725062400|4700|1100|0|10|3|1|QWER|234|QW12|M|0|0|0|1725073001 EX 309798
SET {QWER:QW12}:234 208300|62800|21300|21300|1725062400|4700|1100|0|10|3|1|QWER|234|QW12|M|0|0|0|1725073001 EX 309798
... until max 10_000 commands
It will be repeated up to a maximum of 10,000 commands, but most messages rarely reach the maximum; a pipeline is usually more like 5,000 or 6,000 commands. So, what value should I use for the ReadBufferEachConn and WriteBufferEachConn options under these conditions?
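For a rough sense of scale, a back-of-envelope sketch based on the commands shown above (the byte counts are estimates only, not measurements):
// Back-of-envelope sizing, assuming each SET above is roughly 150 bytes on the wire:
const (
	approxCommandBytes  = 150
	maxCommands         = 10_000
	masters             = 3
	approxPipelineBytes = approxCommandBytes * maxCommands // ~1.5 MB worst case
	approxPerNodeBytes  = approxPipelineBytes / masters    // ~500 KB per master
)
// Per-connection buffers around that size (e.g. 1 MiB for ReadBufferEachConn
// and WriteBufferEachConn) would let a whole per-node batch fit in one buffer.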
Since 10_000 commands are too many, you may consider using client.Dedicate() to avoid head of line blocking in each pipeline:
Why is the code using sync.Pool? What is it used for?
The limit of 1 CPU may be too tight for your 16*6 = 96 connections. I would recommend leaving PipelineMultiplex at its default and setting limit = 2,0 or above.
Okay, I will set it back to 2,0 and will try the config tweaks today.
Thanks a bunch
Why is the code using sync.Pool? What is it used for?
It looks like you are doing many DoMulti calls concurrently, so reusing the command slices will be very important for performance.
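For example (a hypothetical caller built on the do() helper above; the message channel and logging are assumptions about your consumer), each concurrent pipeline borrows a slice from the pool instead of allocating a fresh 100,000-element slice per message:
// Hypothetical consumer loop: every pub/sub message runs its own pipeline
// concurrently, and all of them reuse command slices via the sync.Pool.
func consume(client rueidis.Client, messages <-chan string) {
	var wg sync.WaitGroup
	for key := range messages {
		wg.Add(1)
		go func(key string) {
			defer wg.Done()
			if err := do(client, key); err != nil { // do() from the snippet above
				log.Println("pipeline failed:", err)
			}
		}(key)
	}
	wg.Wait()
}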
After I implemented the sync.Pool with client.Dedicate(), I got the error panic: cross slot command in Dedicated is prohibited. So I think I can't use the Dedicate func, or is there something wrong with my implementation?
No, you did nothing wrong. It is an unfortunate limitation currently that you can't use DoMulti with cross-slot commands on a dedicated connection, since a DedicatedClient literally occupies a single connection to just one Redis node. The slot of the first command decides which node to connect to.
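For illustration (hypothetical keys; assuming the hash tags {a} and {b} map to different slots), this is the shape of pipeline that triggers the panic on a dedicated connection:
func crossSlotExample(client rueidis.Client) {
	cc, cancel := client.Dedicate() // pins a single connection to one node
	defer cancel()
	cmds := rueidis.Commands{
		cc.B().Set().Key("{a}:1").Value("v").Build(), // the first command's slot picks the node
		cc.B().Set().Key("{b}:1").Value("v").Build(), // different slot -> panic: cross slot command in Dedicated is prohibited
	}
	cc.DoMulti(context.Background(), cmds...)
}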
Hi @adibaulia,
v1.0.46-alpha.2 will do the client.Dedicate() internally when you pass over 2000 commands to a DoMulti(), and you will no longer get panic: cross slot command in Dedicated is prohibited. Would you like to give it a try?
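In other words (a sketch, assuming v1.0.46-alpha.2 or later), you can drop the manual Dedicate() and hand the whole batch to the regular cluster client:
func doAll(client rueidis.Client, keys []string) error {
	cmds := make(rueidis.Commands, 0, len(keys))
	for _, key := range keys {
		cmds = append(cmds, client.B().Set().Key(key).Value(key).ExSeconds(1).Build())
	}
	// Cross-slot batches are fine on the regular cluster client: DoMulti splits
	// them per node and, for batches over 2000 commands, dedicates connections
	// internally.
	for _, resp := range client.DoMulti(context.Background(), cmds...) {
		if err := resp.Error(); err != nil {
			return err
		}
	}
	return nil
}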
I got the error panic: cross slot command in Dedicated is prohibited
In this case, after I looked at my code, there was indeed something wrong with my implementation. I fixed it and it runs well now, but the performance still seems much better with go-redis, as before.
Would you like to give it a try?
So, I decided to go back to go-redis since I get lower latency when processing the pipeline. I will give it a try the next time I have the chance!
Ok, since the improvement on the super long pipeline has been merged, I will close this issue for now. Please feel free to update your result or reopen this issue if necessary.