Error in redis parser when using client tracking
Version: 4.5.1
Redis Server: redis_version:7.0.8 redis_build_id:73fe4a3beb619f6 redis_mode:cluster os:Darwin 22.3.0 x86_64
Platform: macOS 13.2
Description:
When calling client.client_tracking_on(clientid=id, prefix=['foo'], bcast=True) on a clustered Redis client, the command parser crashes.
It works with the normal (non-clustered) client. The same error occurs with and without hiredis==2.2.2.
Stack trace:
File "/Users/myuser/projects/xxx/redis_bug.py", line 20, in <module>
client1.client_tracking_on(clientid=id, prefix=['foo'], bcast=True)
File "/Users/myuser/.virtualenvs/bitstamp38/lib/python3.8/site-packages/redis/commands/core.py", line 603, in client_tracking_on
return self.client_tracking(
File "/Users/myuser/.virtualenvs/bitstamp38/lib/python3.8/site-packages/redis/commands/core.py", line 684, in client_tracking
return self.execute_command("CLIENT TRACKING", *pieces)
File "/Users/myuser/.virtualenvs/bitstamp38/lib/python3.8/site-packages/redis/cluster.py", line 1074, in execute_command
raise e
File "/Users/myuser/.virtualenvs/bitstamp38/lib/python3.8/site-packages/redis/cluster.py", line 1047, in execute_command
target_nodes = self._determine_nodes(
File "/Users/myuser/.virtualenvs/bitstamp38/lib/python3.8/site-packages/redis/cluster.py", line 875, in _determine_nodes
slot = self.determine_slot(*args)
File "/Users/myuser/.virtualenvs/bitstamp38/lib/python3.8/site-packages/redis/cluster.py", line 945, in determine_slot
keys = self._get_command_keys(*args)
File "/Users/myuser/.virtualenvs/bitstamp38/lib/python3.8/site-packages/redis/cluster.py", line 912, in _get_command_keys
return self.commands_parser.get_keys(redis_conn, *args)
File "/Users/myuser/.virtualenvs/bitstamp38/lib/python3.8/site-packages/redis/commands/parser.py", line 105, in get_keys
range(command["first_key_pos"], last_key_pos + 1, command["step_count"])
ValueError: range() arg 3 must not be zero
Code to reproduce:
import time

from redis import cluster

def handler(msg):
    print('MSG: {}'.format(msg))

if __name__ == '__main__':
    # client1 = redis.client.Redis.from_url('redis://localhost:6379/0')  # works
    client1 = cluster.RedisCluster.from_url('redis://localhost:6379/0')
    id = client1.client_id()
    pubsub = client1.pubsub()
    pubsub.subscribe(**{'__redis__:invalidate': handler})
    pubsub.run_in_thread(daemon=True, sleep_time=None)
    client1.client_tracking_on(clientid=id, prefix=['foo'], bcast=True)
    client1.get('fooASF')
    for i in range(10):
        client1.set('fooASF', '23423423ff')
        time.sleep(100)
This issue is marked stale. It will be closed in 30 days if it is not updated.
Any update on the issue?
Hi @matejsp,
Could you please clarify what the end goal was? Were you aiming to implement something similar to client-side caching in a cluster setup, or was there another objective behind this approach?
I’m currently working on a fix, and while there are several improvements that can make the behavior more robust, I also see a few potential issues with using this functionality in the current form.
If the intention is to enable client-side caching, I'd strongly recommend relying on the functionality already built into the client (in recent versions) rather than invoking the underlying commands directly, especially in cluster mode, where manually sending these commands can easily lead to inconsistent or unexpected behavior. The client handles the orchestration, routing, and state tracking that would otherwise be error-prone to implement manually.
You can enable client-side caching via the cache_config argument when creating the Cluster client. Here is a basic example:
from redis.cache import CacheConfig
from redis.cluster import RedisCluster

cluster_client = RedisCluster(
    host="localhost",
    port=6379,
    protocol=3,
    decode_responses=True,
    cache_config=CacheConfig(),
)
Let me know if this aligns with what you were aiming to achieve, or if you had something different in mind.
Yes, I was trying to implement client-side caching with a cluster setup. I managed to get it working with the non-clustered client, but not the clustered one (well, 3 years ago :D).
I want to cache only a subset of keys. I think extending is_allowed_to_cache would solve this if I could somehow pass prefix filtering through to CLIENT TRACKING: https://redis.io/docs/latest/commands/client-tracking/
Later I implemented this functionality using Redis streams: each change is published to a stream, the corresponding keys are invalidated in the local cache when the notification arrives, and a handler can do extra work on every change (like logging whether the change was propagated to every server).
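For reference, a rough sketch of that streams-based pattern might look like the following (the stream name, function names, and local_cache dict are illustrative, not taken from the original setup):

import redis

r = redis.Redis(host='localhost', port=6379)
local_cache = {}  # stand-in for the real per-process cache

def publish_invalidation(key):
    # Whichever process modifies the key also announces the change on a stream.
    r.xadd('cache:invalidations', {'key': key})

def consume_invalidations():
    last_id = '$'  # only react to entries written after startup
    while True:
        # Block until at least one new invalidation entry arrives.
        for _stream, entries in r.xread({'cache:invalidations': last_id}, block=0):
            for entry_id, fields in entries:
                local_cache.pop(fields[b'key'].decode(), None)
                last_id = entry_id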
To achieve that, you will need to extend either the CacheInterface or the DefaultCache class (or some other existing class that extends CacheInterface).
You can do the filtering by key prefix in your is_cachable method implementation.
Then, when you create the CacheConfig object, provide your class via the cache_class argument.
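A minimal sketch of that approach, assuming CacheKey exposes the affected Redis keys via its redis_keys attribute (PrefixFilteredCache and ALLOWED_PREFIXES are illustrative names):

from redis.cache import CacheConfig, CacheKey, DefaultCache
from redis.cluster import RedisCluster

class PrefixFilteredCache(DefaultCache):
    # Illustrative prefix list; only keys starting with one of these
    # prefixes are admitted into the local cache.
    ALLOWED_PREFIXES = ("foo",)

    def is_cachable(self, key: CacheKey) -> bool:
        # Keep the default command-level check, then filter by key prefix
        # (assumes keys arrive as str, as with decode_responses=True).
        return super().is_cachable(key) and all(
            redis_key.startswith(self.ALLOWED_PREFIXES)
            for redis_key in key.redis_keys
        )

cluster_client = RedisCluster(
    host="localhost",
    port=6379,
    protocol=3,
    decode_responses=True,
    cache_config=CacheConfig(cache_class=PrefixFilteredCache),
)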