Could you post the vLLM launch arguments and the OpenAI request script? I'll try to reproduce the issue.
1. Generally no preprocessing is needed; vLLM currently calls the transformers `image_processor` to do it, see the processing logic [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_vl/image_processing_qwen2_vl.py#L169); 2. You can align your request parameters, see: #1125
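For reference, a minimal sketch of calling that preprocessing path directly (the checkpoint name and image file here are placeholders, not from this thread):

```
from PIL import Image
from transformers import AutoProcessor

# Load the same processor vLLM invokes under the hood (checkpoint name is an example).
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

# The Qwen2-VL image processor "smart-resizes" the image into 28x28 patches
# within [min_pixels, max_pixels] and returns the flattened patches.
image = Image.open("demo.jpg")
inputs = processor.image_processor(images=[image], return_tensors="pt")
print(inputs["pixel_values"].shape)  # (num_patches, patch_dim)
print(inputs["image_grid_thw"])      # temporal/height/width grid per image
```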
> Hello [@wulipc](https://github.com/wulipc), I tried modifying the parameters following the link you provided, but the results are still unsatisfactory.
>
> ### 1. Modified the default parameter values in vllm/vllm/entrypoints/openai/protocol.py as shown in the screenshot; "max_new_tokens" and "do_sample" were not found:
>
> [screenshot of the modified defaults]
>
> ### 2. Started the service:
>
> * python -m vllm.entrypoints.openai.api_server --model /data/dyc/model/Qwen2.5-VL-7B-Instruct/ --dtype bfloat16 --tensor-parallel-size 1 --limit-mm-per-prompt image=2,video=1 --max-model-len...
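For completeness, here is a hedged sketch of a request script that would exercise this server, so results don't depend on the protocol.py defaults patched above (the port, image URL, and sampling values are assumptions, not taken from this thread):

```
from openai import OpenAI

# Assumes the server started in step 2 is listening on the default port 8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="/data/dyc/model/Qwen2.5-VL-7B-Instruct/",  # must match --model
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/demo.jpg"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }],
    # Pass sampling parameters explicitly instead of relying on server defaults.
    temperature=0.01,
    top_p=0.001,
    max_tokens=512,
)
print(response.choices[0].message.content)
```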
@linchen111 If your issue has been resolved, please close the issue.
Hi, thanks for your interest in the Qwen model! This warning appears during the vLLM `profile_run`. In the original code, we added +1 to the video's `num_frames` in the dummy_data...
> +1, same problem

This issue has been fixed in the latest version of vLLM. You can try updating it.
@vefalun If your issue has been resolved, please close the issue.
> `{'image': 16384, 'video': 114688}` Why does this image correspond to a context length of 16384? [@wulipc](https://github.com/wulipc)

The `image_processor.max_pixels` is set to 12845056, which equals 16384 x 28 x 28;...
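The arithmetic behind that figure, as a quick check (28 is the patch size used by the Qwen2-VL image processor):

```
# max_pixels caps the resized image area; dividing by the 28x28 patch
# area gives the maximum number of image patches, i.e. the 16384 figure.
max_pixels = 12845056
patch_size = 28
print(max_pixels // (patch_size * patch_size))  # 16384
```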
Could you provide me with a minimal reproducible code? I will carefully take a look at the issue.
@ZhouJYu I tested locally and didn't encounter any errors; the results came out normally. Have you tried running the cases mentioned in the README? ``` This image shows a two-wheeled motor vehicle parked outdoors on tile-paved ground, with a soft suitcase beside the motorcycle. The motorcycle is parked very close, touching a yellow pavement marking. The ground is painted with the white number "106", indicating it serves as a parking-spot marker. There are some green flowers and shrubs around; the spot sees frequent daily activity and is likely a residential area. Part of a building is visible in the background, possibly a residential tower. ```