
Client side caching with MGet

Open GiedriusS opened this issue 3 years ago • 9 comments

Hello! Thank you for your work on this awesome library. Pardon me for a stupid question because I'm kind of new to Redis. I have noticed that MGet() doesn't support client-side caching. Are there any particular reasons for that? If it's just a matter of implementation then would you be open to contributions for implementing client-side caching for that command?

We want to use MGet() in particular to avoid many hops between the client and the server. For example, requesting >10000 keys would be painful with a regular Get().

GiedriusS avatar Jul 12 '22 12:07 GiedriusS

Hi @GiedriusS,

I am not sure that client-side caching can work with MGET. Maybe it could.

That is to say, if it can, how would you expect the API to look and behave?

rueian avatar Jul 12 '22 12:07 rueian

At least in the use case of Thanos (https://github.com/thanos-io/thanos) there are different types of keys, and each type has a different TTL, so it should be enough to set a single TTL for a cached MGet. If you need per-key flexibility, you could do a Get() for each key separately with its own TTL. Does that sound good?

GiedriusS avatar Jul 13 '22 12:07 GiedriusS

Hi @GiedriusS,

I have verified that MGET does work with client-side caching: Redis simply treats it as tracking multiple GETs.

But there are some difficulties in implementing client-side caching support for it. In particular, we need to rewrite the MGET command so that it requests only the keys that are not already cached on the client side. We also need to send a PTTL command for each of those keys inside a Redis transaction, so that the client-side TTLs can be shortened to match the server-side TTLs. These factors could make the implementation very complicated.

I may try to implement it this weekend, but I am not sure I can succeed due to its complexity.

Another way to reduce hops between rueidis and Redis is just sending commands concurrently. They will be pipelined automatically.

BTW, I am not sure using MGET is a good idea in Thanos, since Redis Cluster doesn't allow a single MGET to touch keys in different hash slots.

rueian avatar Jul 13 '22 14:07 rueian

I see. Thanos itself hashes the keys and distributes them accordingly, so I'm not sure Redis Cluster is a necessity. Of course it brings benefits, but it's not required. Let me know if there's any progress; I can test your library and changes on a production Thanos cluster with lots of requests, or help with reviewing code.

GiedriusS avatar Jul 15 '22 12:07 GiedriusS

Hi @GiedriusS,

v0.0.62 is released. It is the first version to support client-side caching for MGET.

If you are interested in the implementation, you can check out these two places:

  1. https://github.com/rueian/rueidis/blob/518f7ce29f9191ac783af9d1d642991ceab971c3/pipe.go#L740
  2. https://github.com/rueian/rueidis/blob/518f7ce29f9191ac783af9d1d642991ceab971c3/pipe.go#L291

And please let me know if it works as you expect.
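Conceptually, once the rewritten MGET returns, the client has to stitch the server replies back together with the cache hits in the original key order. A minimal pure-Go sketch of that step (the name `mergeResults` is mine, not part of rueidis):

```go
package main

import "fmt"

// mergeResults reassembles an MGET reply: cached values come from the
// client-side cache, the rest come from the server reply for the
// rewritten MGET, and everything is returned in the original key order.
func mergeResults(keys []string, cached map[string]string, serverReply map[string]string) []string {
	out := make([]string, 0, len(keys))
	for _, k := range keys {
		if v, ok := cached[k]; ok {
			out = append(out, v)
		} else {
			out = append(out, serverReply[k])
		}
	}
	return out
}

func main() {
	keys := []string{"k1", "k2", "k3"}
	cached := map[string]string{"k2": "cached-v2"}
	server := map[string]string{"k1": "v1", "k3": "v3"}
	fmt.Println(mergeResults(keys, cached, server)) // [v1 cached-v2 v3]
}
```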

rueian avatar Jul 17 '22 16:07 rueian

Jumping into the conversation.

  1. Since MGET can be cacheable, can JSON.MGET be as well?

  2. From what I see, you've made it use pipe.DoMulti(). Is there a way to implement something similar to the rueidis.DedicatedClient, so we could do something along the lines of:
dedicatedClient, cancel := client.Dedicate()
defer cancel()

var cmds rueidis.CacheableCommands // cacheable commands

for i := 0; i < 100; i++ {
    cmds = append(
        cmds,
        dedicatedClient.B().Get().Key("k:"+strconv.Itoa(i)).Cache(),
    )
}

for _, res := range dedicatedClient.CacheDoMulti( // CacheDoMulti
    context.Background(),
    60*time.Second, // ttl
    cmds...,
) {
    // result handling...
}

In other words, allow the MULTI (batch) commands to be built from cacheable commands, not just rueidis.Commands.

NOTE: Not sure what you meant by:

Another way to reduce hops between rueidis and Redis is just sending commands concurrently. They will be pipelined automatically.

Does that mean it's already possible?


  3. And another, far-fetched one: is any client-side caching possible with FT.AGGREGATE?

Delicious-Bacon avatar Jul 17 '22 18:07 Delicious-Bacon

  1. Since MGET can be cacheable, can JSON.MGET be as well?

Yes, it will be cacheable in v0.0.63.

  3. And another, far-fetched one: is any client-side caching possible with FT.AGGREGATE?

It is not possible, because FT indexes do not send invalidation messages to clients when the indexes themselves or their documents change.
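The reason invalidation matters: client-side caching is only safe when the server tells the client which cached entries became stale. A toy sketch of that contract in pure Go (not rueidis code) shows why a reply with no invalidation signal, like an FT.AGGREGATE result, cannot be cached safely:

```go
package main

import "fmt"

// trackedCache is a toy client-side cache that only stays correct
// because it receives invalidation messages for the keys it holds.
type trackedCache struct {
	entries map[string]string
}

func (c *trackedCache) put(key, val string)   { c.entries[key] = val }
func (c *trackedCache) invalidate(key string) { delete(c.entries, key) }
func (c *trackedCache) get(key string) (string, bool) {
	v, ok := c.entries[key]
	return v, ok
}

func main() {
	c := &trackedCache{entries: map[string]string{}}
	c.put("k1", "v1")
	// The server changes k1 and pushes an invalidation message.
	// Without such a message (the FT.AGGREGATE case), the stale
	// value would be served indefinitely.
	c.invalidate("k1")
	if _, ok := c.get("k1"); !ok {
		fmt.Println("k1 evicted after invalidation")
	}
}
```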


So do the MULTI (batch) commands with cacheable commands, not just rueidis.Commands.

Batching them into one MULTI request could be very hard, especially now that we have cacheable MGET and JSON.MGET. Currently, I have no idea how to implement it.

Not sure what you meant with:

Another way to reduce hops between rueidis and Redis is just sending commands concurrently. They will be pipelined automatically.

Does that mean it's already possible?

Kind of. Rueidis pipelines concurrent non-blocking Redis commands automatically, so you can do something like this to reduce network round trips and the number of system calls:

var wg sync.WaitGroup
for i := 0; i < 100; i++ {
	wg.Add(1)
	go func() {
		defer wg.Done()
		// concurrent calls are pipelined into fewer round trips
		client.Do(ctx, client.B().Get().Key("...").Build()).ToString()
	}()
}
wg.Wait()

rueian avatar Jul 18 '22 16:07 rueian

Hi, @Delicious-Bacon

There is a new DoMultiCache in v0.0.66:

client.DoMultiCache(ctx,
    rueidis.CT(client.B().Get().Key("k1").Cache(), 1*time.Minute),
    rueidis.CT(client.B().Get().Key("k2").Cache(), 2*time.Minute))

Please note that DoCache and DoMultiCache are not available on the dedicated client. You also no longer need a dedicated client to perform DoMulti; you can now invoke DoMulti directly on a normal client.

rueian avatar Jul 27 '22 14:07 rueian

Hi, @rueian !

That sounds big! 😄

DoMulti invocation on a normal client eases things up even more. 💯

Another job well done, @rueian ! Huge improvements. 🥇

Delicious-Bacon avatar Jul 28 '22 17:07 Delicious-Bacon

:medal_sports: thanks for this feature! Works really great.

GiedriusS avatar Aug 17 '22 15:08 GiedriusS