Ziyang Chen

2 comments by Ziyang Chen

I ran into the same problem. The model I used is InternVL3.5-8B, and my vLLM version is 0.8.5.post1. The inference code is as follows (`seed` and `batch_messages` were defined elsewhere in my script; placeholders are shown here):

```python
import torch
from vllm import LLM, SamplingParams

seed = 42  # placeholder; the original script defines its own seed
batch_messages = [[{"role": "user", "content": "Describe the image."}]]  # placeholder batch

model = LLM(
    model="OpenGVLab/InternVL3_5-8B",
    trust_remote_code=True,  # makes no difference whether this line is present
    tensor_parallel_size=torch.cuda.device_count(),
    gpu_memory_utilization=0.9,
    seed=seed,
    limit_mm_per_prompt={"image": 6},
)
sampling_params = SamplingParams(temperature=0.0, max_tokens=8192, seed=seed)
responses = model.chat(batch_messages, sampling_params, use_tqdm=False)
```

@pixas Yes, it works fine after upgrading vLLM.
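
(For anyone hitting the same issue: a minimal sanity check after upgrading, e.g. via `pip install --upgrade vllm`. The thread does not state which version fixes it, so the version comparison below is an assumption, not a confirmed threshold.)

```python
# Quick check that the upgraded vLLM is the one actually being imported.
import vllm

print(vllm.__version__)  # should print something newer than 0.8.5.post1
```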