redis_rate

Rate limiter not working as expected

jose-zenledger opened this issue · 4 comments

Perhaps this is an implementation issue on my end. The following file shows the per-second rate not working: if you set the rate limiter to 100 per second and make 101 requests, the run should take more than a second to finish, since only 100 of the 101 requests can run in the first second and the last one has to wait for the next second.

package main

import (
	"context"
	"github.com/go-redis/redis/v8"
	"github.com/go-redis/redis_rate/v9"
	"github.com/stretchr/testify/require"
	"os"
	"sync"
	"sync/atomic"
	"testing"
	"time"
)

type Limiter interface {
	Allow(ctx context.Context) (time.Duration, error)
}

type Rediser interface {
	Eval(ctx context.Context, script string, keys []string, args ...interface{}) *redis.Cmd
	EvalSha(ctx context.Context, sha1 string, keys []string, args ...interface{}) *redis.Cmd
	ScriptExists(ctx context.Context, hashes ...string) *redis.BoolSliceCmd
	ScriptLoad(ctx context.Context, script string) *redis.StringCmd
	Del(ctx context.Context, keys ...string) *redis.IntCmd
}

func NewRedisLimiter(r Rediser, key string, perSec int) Limiter {
	return &redisLimiter{
		limiter: redis_rate.NewLimiter(r),
		key:     key,
		perSec:  perSec,
	}
}

type redisLimiter struct {
	limiter *redis_rate.Limiter
	key     string
	perSec  int
}

func (l *redisLimiter) Allow(ctx context.Context) (time.Duration, error) {
	r, err := l.limiter.Allow(ctx, l.key, redis_rate.PerSecond(l.perSec))
	if err != nil {
		return 0, err
	}

	// RetryAfter is -1 when the request is allowed; when denied it is
	// how long to wait before retrying.
	return r.RetryAfter, nil
}

func TestRedisLimiter_Allow(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	radd := os.Getenv("REDIS_ADDR") // set this in the env to host:port
	opts := redis.Options{Addr: radd}
	rc := redis.NewClient(&opts)
	defer rc.Close()

	perSecond := 100 // set the per second rate

	var val int64
	limiter := NewRedisLimiter(rc, "TestRedisLimiter_Allow", perSecond)
	runs := perSecond + 1 // one more than the per second rate (last request should be in the next second)
	wg := sync.WaitGroup{}
	wg.Add(runs)
	start := time.Now()
	for i := 0; i < runs; i++ {
		go func() {
			defer wg.Done()

			retryAfter, err := limiter.Allow(ctx)
			require.NoError(t, err)

			// When denied, sleep for the suggested duration and retry
			// until the request is allowed.
			for retryAfter > 0 {
				time.Sleep(retryAfter)
				retryAfter, err = limiter.Allow(ctx)
				require.NoError(t, err)
			}

			atomic.AddInt64(&val, 1)
		}()
	}
	wg.Wait()
	elapsed := time.Since(start)
	require.GreaterOrEqual(t, elapsed, time.Second) // the extra request should push the total time past one second
	require.Equal(t, runs, int(val))
}

Here is a docker-compose.yml that runs the test via docker compose up test (after go mod init && go mod tidy):

version: "3.8"

services:
  test:
    image: golang:1.18
    volumes:
      - .:/app
    working_dir: /app
    environment:
      - REDIS_ADDR=redis:6379
    command: sh -c "go test -coverprofile=cover.out ./... -race && go tool cover -html=cover.out -o cover.html"
    depends_on:
      redis:
        condition: service_healthy

  redis:
    image: redis:alpine
    expose:
      - "6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 1s # poll quickly so depends_on is not stuck waiting on the default 30s interval
      retries: 30

The only thing I can think of is that maybe the first second doesn't really count somehow? I tried 100 per second for 1000 total requests and got a little over 9 seconds.

jose-zenledger · Apr 26 '22 17:04

At a 100 per second rate limit I get the following results:

101 requests: 36.880584ms
201 requests: 1.014007251s
301 requests: 2.021583626s
401 requests: 3.016126084s
501 requests: 4.026494002s
601 requests: 5.023141128s
701 requests: 6.014478627s
801 requests: 7.01344642s
901 requests: 8.029704919s

Seems like the algorithm is off for the first second. Any idea what's going on?

jose-zenledger · Apr 26 '22 17:04

> Seems like the algorithm is off for the first second. Any idea what's going on?

Try with burst 1. https://github.com/go-redis/redis_rate/issues/58#issuecomment-821225658
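
For context, redis_rate.PerSecond(n) builds a limit whose burst equals the rate, so the first n requests are admitted immediately and only the overflow is spaced out. That lines up with the numbers above: with a burst of 100, the first 100 of 1000 requests go through at once and the remaining 900 drain at 100/s, i.e. roughly 9 seconds. Here is a minimal sketch of the repro's Allow method with the burst lowered to 1, using the Limit struct from redis_rate/v9:

func (l *redisLimiter) Allow(ctx context.Context) (time.Duration, error) {
	// Burst 1 removes the initial allowance of perSec requests, so
	// requests are spaced out from the very first call.
	limit := redis_rate.Limit{
		Rate:   l.perSec,
		Burst:  1,
		Period: time.Second,
	}
	r, err := l.limiter.Allow(ctx, l.key, limit)
	if err != nil {
		return 0, err
	}
	return r.RetryAfter, nil
}

With burst 1, each request after the first should be spaced at roughly Period/Rate (~10ms here), so 101 requests should indeed take just over a second.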

fracasula · Oct 21 '22 15:10

@jose-zenledger I ran into the same problem. How did you end up solving it? Thanks

To-echo · May 09 '23 15:05

I have the same issue

saeedvaziry · Aug 20 '23 19:08