go-redis
ERR max number of clients reached
I ran into a problem where a web service (deployed on Kubernetes with 20 pods) exhausted the redis-server client limit (maxclients=30000). The redis client pool was configured with PoolSize=100, so I expected at most 100*20=2000 connections (well below the server's limit of 30000), yet the error "ERR max number of clients reached" occurred anyway. I could not even connect to the redis-server via redis-cli, so I had to restart it; after that the errors disappeared. I have since figured out that some big keys would occasionally become hot and trigger read timeouts ("read tcp xx.xx.xx.xx:60858->xx.xx.xx.xx:6379: i/o timeout"). What confuses me now is why PoolSize=100 did not cap the connection count.
Steps to Reproduce
Environment: Redis 6.2.6 go-redis v9.0.0-beta.2 go go1.18.4 Linux fedora 5.18.13-200.fc36.x86_64
Steps to reproduce:
- docker run -p 6379:6379 redis
- set redis-server maxclients to 200
- go run main.go
package main

import (
	"context"
	"strings"
	"sync"
	"time"

	"github.com/go-redis/redis/v9"
)

func main() {
	client := redis.NewClient(&redis.Options{
		Addr:         "192.168.56.29:6379",
		MaxRetries:   3,
		PoolSize:     100,
		PoolTimeout:  2 * time.Second,
		ReadTimeout:  2 * time.Second,
		WriteTimeout: 2 * time.Second,
	})

	ctx := context.Background()
	content := strings.Repeat("a", 2*1024*1024) // 2MB value, big enough to trigger read timeouts

	var wg sync.WaitGroup
	for i := 0; i < 10000; i++ {
		wg.Add(1) // must run before the goroutine starts, not inside it
		go func() {
			defer wg.Done()
			for {
				// hammer a single hot key until the server hits maxclients
				client.Set(ctx, "temp", content, 30*time.Second).Result()
			}
		}()
	}
	wg.Wait()
}
We have had the same experience here; somehow the number of connections can exceed PoolSize.
It works if you avoid large keys that cause timeouts. See Timeouts and the section that follows it.