aiocache
'ERR Protocol error: invalid multibulk length' when calling `clear()`
Steps to reproduce:
- Have about 5 million items in Redis.
- Call `cache.clear()` without arguments.

As a temporary workaround I had to run `redis-cli FLUSHALL` from the terminal.
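For context, here is a rough illustration (not aiocache code) of why the error appears. `_clear()` sends one `DEL` command with every key as an argument, and Redis encodes each command as a RESP "multibulk" array; Redis rejects arrays with more than 1024 * 1024 elements, which is exactly the `invalid multibulk length` error. The encoder below is a simplified sketch of the RESP wire format:

```python
def encode_resp_command(*args):
    """Encode a command as a RESP multibulk array of bulk strings (sketch)."""
    parts = [f"*{len(args)}\r\n"]  # multibulk header: number of elements
    for arg in args:
        data = str(arg)
        parts.append(f"${len(data)}\r\n{data}\r\n")
    return "".join(parts)

MAX_MULTIBULK_LENGTH = 1024 * 1024  # Redis's limit on multibulk elements

keys = [f"main:key:{i}" for i in range(5)]
cmd = encode_resp_command("DEL", *keys)
# The header here is "*6": the command name plus 5 keys. With 5 million keys
# it would be "*5000001", far above the limit, and Redis refuses to parse it.
```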
My proposed solution:
```python
# aiocache/backends/redis.py
from aioitertools.more_itertools import chunked

# Redis rejects commands with more than 1024 * 1024 array elements, see
# https://github.com/StackExchange/StackExchange.Redis/issues/201
max_multibulk_length = 1024 * 1024

@conn
async def _clear(self, namespace=None, _conn=None):
    if namespace:
        keys = await _conn.keys("{}:*".format(namespace))
        if len(keys) > max_multibulk_length:
            # Delete in chunks; the command name itself counts as one element.
            async for keys_chunk in chunked(keys, max_multibulk_length - 1):
                await _conn.delete(*keys_chunk)
        else:
            await _conn.delete(*keys)
    else:
        await _conn.flushdb()
    return True
```
Or, the same code simplified (without the redundant if/else):
```python
# aiocache/backends/redis.py
from aioitertools.more_itertools import chunked

max_multibulk_length = 1024 * 1024  # https://github.com/StackExchange/StackExchange.Redis/issues/201

@conn
async def _clear(self, namespace=None, _conn=None):
    if namespace:
        keys = await _conn.keys("{}:*".format(namespace))
        async for keys_chunk in chunked(keys, max_multibulk_length - 1):
            await _conn.delete(*keys_chunk)
    else:
        await _conn.flushdb()
    return True
```
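The batching itself can be checked without Redis. This is a minimal synchronous sketch using only the stdlib; `batched` below stands in for `aioitertools.more_itertools.chunked`, which yields the same batches asynchronously (integers stand in for key names to keep the example light):

```python
from itertools import islice

def batched(iterable, size):
    """Yield successive lists of at most `size` items (sync stand-in for chunked)."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

MAX_MULTIBULK_LENGTH = 1024 * 1024

# 2.5 million "keys" split into batches that each stay under the protocol limit,
# leaving one element free for the DEL command name itself.
sizes = [len(batch) for batch in batched(range(2_500_000), MAX_MULTIBULK_LENGTH - 1)]
# → three DEL calls instead of one oversized command
```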
If the solution looks OK, you can add it to the code, or I can create a pull request.
Hey @AIGeneratedUsername, thanks for reporting this! I've opened an issue in aioredis because I think it's worth fixing there directly. Let's wait a few days to see if there is a positive answer; if not, we can patch it in aiocache directly :).
Sounds like this should be resolved since migrating to the new redis library. Please test on master (or 0.12 once released) and reopen if it's still an issue.