Being able to run Lua scripts inside a pipeline (MULTI/EXEC)
Describe the bug Right now we have a limitation on running a Lua script inside a pipeline; see the steps to reproduce below. The limitation comes from MULTI opening a transaction context: within that same transaction, ScheduleInternal is called once at the EXEC level (as a result of calling Invoke), and then again when executing the script itself, which requires transaction protection. The issue arises because the code today does not support nested calls, and this breaks the postcondition of ScheduleInternal, which expects a "clean" transaction to work with. Prior to issue #457 this caused a crash; the resolution of that issue simply blocks any pipeline containing EVAL from running. We need to resolve this so that:
- EVAL inside MULTI is not blocked.
- We do not crash, because this is allowed.
- Transaction handling supports "nested" calls.
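One way to satisfy the last point is to make the scheduling path re-entrant. The sketch below is hypothetical Python (Dragonfly's actual ScheduleInternal is C++ and works differently): it uses a depth counter so that only the outermost call opens and closes the transaction context, while nested calls run inside the already-open context.

```python
# Hypothetical sketch, NOT Dragonfly's real code: a re-entrancy depth
# counter lets Schedule() be called again from inside a scheduled body
# (EXEC scheduling the script, the script scheduling its own execution).
class Scheduler:
    def __init__(self):
        self.depth = 0
        self.log = []

    def schedule(self, name, body):
        if self.depth == 0:
            self.log.append("begin")       # outermost call opens the context
        self.depth += 1
        try:
            self.log.append(f"run:{name}")
            body()                         # may recursively call schedule()
        finally:
            self.depth -= 1
            if self.depth == 0:
                self.log.append("commit")  # only the outermost call closes it


s = Scheduler()
# EXEC schedules the script, and the script schedules again (the nested call):
s.schedule("exec", lambda: s.schedule("eval", lambda: None))
print(s.log)  # ['begin', 'run:exec', 'run:eval', 'commit']
```

With this pattern the nested call sees an already-open transaction instead of violating a "clean transaction" postcondition.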
To Reproduce
1. multi
2. EVAL "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}" 2 key1 key2 first second
3. At this point we may already get an error (if the version includes the fix for #457), or the failure happens at the next step.
4. exec
5. Without the fix for #457: a crash. With it: an error that we are in an error state.
It is expected that we would be able to complete the above successfully.
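The steps above as a redis-cli session (replies are sketched from the description above; the exact error text depends on the build):

```
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> EVAL "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}" 2 key1 key2 first second
QUEUED    <- or an error, on versions that include the fix for #457
127.0.0.1:6379> EXEC
          <- crash without the fix for #457; with it, an error state error
```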
Environment (please complete the following information):
- OS: ubuntu 20.04
- Kernel: Linux dfly2 5.15.0-52-generic #58-Ubuntu SMP Thu Oct 13 08:03:55 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
- Bare Metal
- Dragonfly Version: 0.10.0
Reproducible Code Snippet
```python
#!/usr/bin/env python3
import asyncio

import aioredis

DB_INDEX = 1


async def run_pipeline_mode(pool, messages):
    try:
        conn = aioredis.Redis(connection_pool=pool)
        pipe = conn.pipeline()
        for key, val in messages.items():
            pipe.set(key, val)
            # pipe.get(key)
        pipe.eval("return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}", 2,
                  'key1', 'key2', 'first', 'second')
        result = await pipe.execute()
        print(f"got result from the pipeline of {result} with len = {len(result)}")
        # The pipeline holds len(messages) SET commands plus one EVAL.
        if len(result) != len(messages) + 1:
            return False, f"number of results from pipe {len(result)} != expected {len(messages) + 1}"
        elif False in result:
            return False, "expected all results to succeed, but some failed"
        else:
            return True, "all commands processed successfully"
    except Exception as e:
        print(f"failed to run command: error: {e}")
        return False, str(e)


def test_pipeline_support():
    def generate(count):
        for i in range(count):
            yield f"key{i}", f"value={i}"

    messages = {a: b for a, b in generate(1)}
    loop = asyncio.new_event_loop()
    async_pool = aioredis.ConnectionPool(host="localhost", port=6379,
                                         db=DB_INDEX, decode_responses=True,
                                         max_connections=16)
    success, message = loop.run_until_complete(
        run_pipeline_mode(async_pool, messages))
    # assert success, message
    return success


if __name__ == "__main__":
    print("starting the test")
    state = test_pipeline_support()
    print(f"finished - {state}")
```
A new branch was created for this issue - issue-467. In this branch we have:
- A unit test in dragonfly_test.cc called MultiIssueNumber467. It currently crashes the unit test as a result of this bug; once the issue is fixed, it would no longer crash.
- A Python test under fail_on_multi.py, which can be run (after starting dragonfly) and will produce an error as long as this is not fixed. Please note that you would need to run "pip install -r requirements.txt" first.
Note - do not sync branch issue-467 with main, as main will have a workaround for this issue that causes it not to crash!
Fixed in 1.9.0