[Bug]: Using Ray with compiled DAG throws the "The compiled graph can't have more than 10 in-flight executions" error
### Your current environment

The output of `python collect_env.py` was not provided.
### 🐛 Describe the bug

I'm trying to run DeepSeek R1 using vLLM with Ray compiled graphs (aDAG). As soon as I send the first request, I get the following error:
```
  File "/home/ray/anaconda3/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py", line 825, in run_engine_loop
    result = task.result()
             ^^^^^^^^^^^^^
  File "/home/ray/anaconda3/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py", line 748, in engine_step
    request_outputs = await self.engine.step_async(virtual_engine)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/anaconda3/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py", line 353, in step_async
    outputs = await self.model_executor.execute_model_async(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/anaconda3/lib/python3.12/site-packages/vllm/executor/ray_distributed_executor.py", line 575, in execute_model_async
    dag_future = await self.forward_dag.execute_async(serialized_data)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/dag/compiled_dag_node.py", line 2186, in execute_async
    self._raise_if_too_many_inflight_executions()
  File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/dag/compiled_dag_node.py", line 1918, in _raise_if_too_many_inflight_executions
    raise ray.exceptions.RayCgraphCapacityExceeded(
ray.exceptions.RayCgraphCapacityExceeded: System error: The compiled graph can't have more than 10 in-flight executions, and you currently have 10 in-flight executions. Retrieve an output using ray.get before submitting more requests or increase `_max_inflight_executions`. `dag.experimental_compile(_max_inflight_executions=...)`
```
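For context on what the error means: Ray's compiled graph counts executions that have been submitted but whose outputs have not yet been retrieved with `ray.get`, and refuses new submissions once that count reaches the cap (10 by default, adjustable via `dag.experimental_compile(_max_inflight_executions=...)` per the message above). Below is an illustrative Python sketch of that backpressure check, not Ray's actual implementation; the class and exception names are hypothetical.

```python
# Illustrative sketch of compiled-graph backpressure (hypothetical names,
# NOT Ray's real code): submissions minus retrievals must stay below the cap.

class CapacityExceeded(Exception):
    """Stand-in for ray.exceptions.RayCgraphCapacityExceeded."""


class CompiledDagSketch:
    def __init__(self, max_inflight_executions=10):
        self.max_inflight = max_inflight_executions
        self.submitted = 0   # executions handed to the graph
        self.retrieved = 0   # outputs fetched back by the caller

    def execute(self, _data=None):
        # The check that raises in _raise_if_too_many_inflight_executions.
        inflight = self.submitted - self.retrieved
        if inflight >= self.max_inflight:
            raise CapacityExceeded(
                f"The compiled graph can't have more than "
                f"{self.max_inflight} in-flight executions"
            )
        self.submitted += 1
        return self.submitted - 1  # toy "future" id

    def get(self, _ref):
        # Retrieving an output frees one in-flight slot.
        self.retrieved += 1
```

The practical implications match the error message: either drain results with `ray.get` before submitting more work, or raise the cap at compile time.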
The Ray version I'm using is 2.41.0.
Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
I'm running into the same error with 2.42.0.
Hi, this is a known issue in Ray 2.41 that we are fixing; the fix will ship in the next Ray release.
As a short-term workaround, please downgrade to Ray 2.40.
This has been fixed in https://github.com/vllm-project/vllm/pull/13994
Hi, I've been experiencing the exact same issue with Ray 2.46, any idea why this could be?
Hmm, the issue was fixed in Ray 2.43, and I haven't heard of it occurring since. Could you provide a repro script and the exact stack trace? @oelhammouchi
Hi, did you manage to fix this? @oelhammouchi
I have the same problem with vLLM 0.10.1.1 + Ray 2.49/2.51, running with pipeline parallelism (pp = 16).