
[Usage]: How to use pipeline parallelism in offline inference?

Open yingtongxiong opened this issue 9 months ago • 9 comments

Your current environment

Hi, I want to know how to use pipeline parallelism in offline inference. Can anyone give a concrete example of how to use it? Looking forward to the reply.

How would you like to use vllm

I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.

Before submitting a new issue...

  • [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.

yingtongxiong avatar Feb 18 '25 03:02 yingtongxiong

Minimal code snippet:

from vllm import LLM
llm = LLM(
    model=YOUR_MODEL_PATH,
    pipeline_parallel_size=2,
)
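To actually run generation with that object, a minimal follow-up sketch (the prompts and sampling settings below are just placeholders):

from vllm import SamplingParams

sampling_params = SamplingParams(temperature=0.8, max_tokens=64)
# generate() takes a list of prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)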

jeejeelee avatar Feb 18 '25 04:02 jeejeelee

Minimal code snippet:

from vllm import LLM
llm = LLM(
    model=YOUR_MODEL_PATH,
    pipeline_parallel_size=2,
)

@jeejeelee Thank you. You mean this? I didn't see pipeline_parallel_size among the LLM.__init__ parameters.

yingtongxiong avatar Feb 18 '25 04:02 yingtongxiong

See: https://github.com/vllm-project/vllm/blob/main/vllm/engine/arg_utils.py#L112
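That file defines EngineArgs, and pipeline_parallel_size is one of its fields; as far as I can tell the LLM constructor forwards extra keyword arguments into EngineArgs, which is why passing it to LLM(...) works even though it is not listed in LLM.__init__ itself. A small sketch of the same setting spelled out through EngineArgs (assuming EngineArgs is re-exported at the top level of the vllm package):

from vllm import EngineArgs

# pipeline_parallel_size lives on EngineArgs, not on LLM.__init__ directly;
# LLM(...) passes unrecognized keyword arguments through to EngineArgs.
args = EngineArgs(model=YOUR_MODEL_PATH, pipeline_parallel_size=2)
print(args.pipeline_parallel_size)  # 2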

jeejeelee avatar Feb 18 '25 04:02 jeejeelee

See: https://github.com/vllm-project/vllm/blob/main/vllm/engine/arg_utils.py#L112

OK, thank you very much, I will give it a try.

yingtongxiong avatar Feb 18 '25 04:02 yingtongxiong

@jeejeelee I have tried to use PP through the LLM API, but I hit this error: NotImplementedError: Pipeline parallelism is only supported through AsyncLLMEngine as performance will be severely degraded otherwise. So it seems PP needs the async engine; do you know how to use the async engine?

yingtongxiong avatar Feb 18 '25 07:02 yingtongxiong

Try:

vllm serve  YOUR_MODEL_PATH --pipeline-parallel-size 2
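If you specifically want to stay inside a Python script, the error message above points at AsyncLLMEngine; a minimal sketch of driving it with asyncio (assuming the import paths below and that 2 GPUs are visible):

import asyncio

from vllm import SamplingParams
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine

async def main():
    # Same settings as before; pipeline_parallel_size is an EngineArgs field.
    engine = AsyncLLMEngine.from_engine_args(
        AsyncEngineArgs(model=YOUR_MODEL_PATH, pipeline_parallel_size=2)
    )
    params = SamplingParams(max_tokens=64)

    # generate() is an async generator that streams partial RequestOutputs;
    # the last item yielded for a request is the finished result.
    final = None
    async for out in engine.generate("Hello, my name is", params, request_id="req-0"):
        final = out
    print(final.outputs[0].text)

asyncio.run(main())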

jeejeelee avatar Feb 18 '25 08:02 jeejeelee

Thank you. I am new to vLLM; is this online inference?

yingtongxiong avatar Feb 18 '25 08:02 yingtongxiong

Thank you. I am new to vLLM; is this online inference?

Yes
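The server exposes an OpenAI-compatible HTTP API, so you can send batches of prompts to it from any script; a sketch, assuming the default http://localhost:8000/v1 endpoint and the openai Python client package:

from openai import OpenAI

# vLLM's server does not check the API key unless one is configured; any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# The served model name defaults to the value passed to `vllm serve`.
completion = client.completions.create(
    model=YOUR_MODEL_PATH,
    prompt="Hello, my name is",
    max_tokens=32,
)
print(completion.choices[0].text)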

jeejeelee avatar Feb 18 '25 10:02 jeejeelee

So, what about offline inference? Can the async engine be used for offline inference?

yingtongxiong avatar Feb 18 '25 10:02 yingtongxiong

So, what about offline inference? Can the async engine be used for offline inference?

I don't think it can.

jeejeelee avatar Feb 19 '25 06:02 jeejeelee

@jeejeelee Okay, thank you.

yingtongxiong avatar Feb 19 '25 06:02 yingtongxiong

@jeejeelee Hi, when I set tp_size > 8 in offline inference, I get NCCL errors. Does vLLM support tp_size > 8 in a multi-node environment?
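For context, the suggestion I have usually seen for spanning nodes is to keep tensor parallelism within a node and use pipeline parallelism across nodes rather than letting TP span nodes, which (per the earlier comments here) means the server/async path rather than the offline LLM class. A sketch of that layout, assuming 2 nodes with 8 GPUs each and a Ray cluster already started on both:

# Sketch only: TP within each node, PP across the two nodes.
vllm serve YOUR_MODEL_PATH \
    --tensor-parallel-size 8 \
    --pipeline-parallel-size 2 \
    --distributed-executor-backend ray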

yingtongxiong avatar Feb 20 '25 02:02 yingtongxiong

Hi, any progress? I'm new to vLLM and also want to run offline inference on a model using pipeline parallelism.

cbx6664 avatar Mar 04 '25 08:03 cbx6664

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

github-actions[bot] avatar Jun 03 '25 02:06 github-actions[bot]

This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!

github-actions[bot] avatar Jul 03 '25 02:07 github-actions[bot]