
Inference performance issue when deploying on 4x RTX 4090

Open lxb0425 opened this issue 7 months ago • 0 comments

Your current environment

python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --max-model-len 8192 --served-model-name chat-v2.0 --model /workspace/chat-v2.0 --enforce-eager --tensor-parallel-size 4

I deployed a fine-tuned 72B INT4 model on 4x RTX 4090, and responses are very slow, taking over ten seconds. What could be the cause? On a single A100 the response time is acceptable. I also found that the model runs on just 2x 4090, and is faster than with 4 cards, but then it stops responding for a period of time.
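To quantify "over ten seconds", a small client-side latency probe against the OpenAI-compatible endpoint started by the command above can help. This is a minimal sketch; the base URL, prompt, and token budget are assumptions for illustration, while the model name `chat-v2.0` matches the `--served-model-name` flag above.

```python
import json
import time
import urllib.request

def build_request(base_url="http://localhost:8000",
                  model="chat-v2.0",
                  prompt="Hello",
                  max_tokens=64):
    """Build a /v1/completions request for the vLLM OpenAI-compatible server.

    base_url, prompt, and max_tokens are illustrative defaults, not values
    taken from the issue.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def measure_latency(base_url="http://localhost:8000"):
    """Time one completion request and report seconds and generated tokens."""
    req = build_request(base_url=base_url)
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    elapsed = time.perf_counter() - start
    tokens = body.get("usage", {}).get("completion_tokens", 0)
    return elapsed, tokens
```

Running `measure_latency()` a few times with the 4-card and 2-card configurations would show whether the slowdown is per-request latency or throughput collapse under load.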


How would you like to use vllm

I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.

lxb0425 · Jul 10 '24 07:07