
After fine-tuning Qwen2.5-VL with swift and merging the model, inference results from swift and transformers are inconsistent

Open an1018 opened this issue 8 months ago • 3 comments

I fine-tuned a multimodal LLM with swift and merged the model. Running inference on the merged model with swift and with transformers gives different results. What could be causing this?

The min/max pixels and hyperparameter settings are the same; the swift prompt includes it, while the transformers prompt does not.

  1. swift code
import torch
from swift.llm import (get_model_tokenizer, get_template, PtEngine,
                       RequestConfig, InferRequest)

# Load the merged checkpoint with the same dtype/attention backend as the
# transformers snippet below.
model = 'checkpoint-540-merged'
model, tokenizer = get_model_tokenizer(model, torch.bfloat16, attn_impl='flash_attn')
template_type = model.model_meta.template
template = get_template(template_type, tokenizer, default_system=system_prompt)
engine = PtEngine.from_model_template(model, template, max_batch_size=1)
request_config = RequestConfig(max_tokens=1024, temperature=0.1, top_p=0.001,
                               top_k=1, repetition_penalty=1.05)
infer_requests = [
    InferRequest(messages=[{'role': 'user', 'content': prompt_use}],
                 images=data["images"]),
]
resp_list = engine.infer(infer_requests, request_config)
  2. transformers code
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_path = self.qwen_model_path  # should point at checkpoint-540-merged
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map=self.DEVICE,
)

# Same pixel limits as the swift run.
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(model_path, min_pixels=min_pixels, max_pixels=max_pixels)

messages = [
    {"role": "system", "content": system_prompt},
    {
        "role": "user",
        "content": content,
    }
]


# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, add_vision_id=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=2048, temperature=0.1, top_p=0.001, top_k=1, repetition_penalty=1.05)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
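Two things worth comparing between the pipelines are the prompt text that actually gets rendered (system prompt, vision placeholders, image resizing) and the generation defaults bundled with the merged checkpoint. Below is a minimal diagnostic sketch, not a definitive check; it assumes both pipelines point at the same checkpoint-540-merged directory and reuses the messages and inputs objects built above, printing what the transformers side feeds the model so it can be compared by eye with the swift side.

from transformers import GenerationConfig

# 1) The rendered chat template: system prompt, vision placeholders, generation prompt.
print(processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))

# 2) The tokenized prompt decoded back: exactly what reaches the model after the
#    processor expands image tokens according to min/max pixels.
print(processor.batch_decode(inputs.input_ids)[0])

# 3) The checkpoint's generation_config supplies defaults (do_sample, temperature,
#    top_p, ...) for anything not passed explicitly to model.generate().
print(GenerationConfig.from_pretrained(self.qwen_model_path))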

an1018 avatar Apr 27 '25 06:04 an1018

Try setting temperature=0 / do_sample=False.
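For reference, a sketch of that change on both sides (assuming the same merged checkpoint on both sides; note that the original snippets also use different length limits, max_tokens=1024 vs max_new_tokens=2048, which only matters for long outputs):

# ms-swift side: temperature=0 makes PtEngine decode greedily.
request_config = RequestConfig(max_tokens=1024, temperature=0, repetition_penalty=1.05)

# transformers side: drop temperature/top_p/top_k and disable sampling explicitly,
# so generation_config defaults cannot re-enable it.
generated_ids = model.generate(**inputs, max_new_tokens=2048, do_sample=False,
                               repetition_penalty=1.05)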

slin000111 avatar Apr 28 '25 09:04 slin000111

The results are still different. Where else might the settings differ?

an1018 avatar Apr 29 '25 06:04 an1018

"The results are still different. Where else might the settings differ?"

With temperature=0 set in the swift code, and temperature=0.1 removed and do_sample=False set in the transformers code, I tested the merged model and the two inference results were identical. Check whether self.qwen_model_path is actually the checkpoint-540-merged from above.
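If the outputs still differ under greedy decoding, one quick way to rule out a wrong path is to fingerprint the weights each pipeline actually loaded. This is a sketch with hypothetical names (swift_model and hf_model stand in for the two loaded models; they are not from this thread):

import hashlib
import torch

def param_fingerprint(model: torch.nn.Module) -> str:
    # Hash every parameter in a deterministic (name-sorted) order.
    h = hashlib.sha256()
    for name, p in sorted(model.named_parameters()):
        h.update(name.encode())
        h.update(p.detach().to(torch.float32).cpu().numpy().tobytes())
    return h.hexdigest()

# Identical fingerprints mean both pipelines loaded the same merged weights;
# different fingerprints mean self.qwen_model_path is not checkpoint-540-merged.
print(param_fingerprint(swift_model))
print(param_fingerprint(hf_model))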

slin000111 avatar Apr 29 '25 07:04 slin000111

This issue has been inactive for over 3 months and will be automatically closed in 7 days. If this issue is still relevant, please reply to this message.

github-actions[bot] avatar Jul 29 '25 00:07 github-actions[bot]

I'm running into the same problem. Was it ever resolved?

Asunatan avatar Nov 17 '25 02:11 Asunatan