
failed to convert Qwen1.5-32B-Chat

[Open] AGI-player opened this issue 1 year ago · 14 comments

I ran the following command:

python convert_checkpoint.py --model_dir /Qwen1.5-32B-Chat/ --dtype bfloat16 --output_dir /Qwen1.5-32B/trt_ckpts/bf16/1-gpu/

Error: [screenshot of the traceback]

The config.json for Qwen1.5-32B-Chat may be causing this problem: [screenshot of config.json]
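A quick way to see what the converter is choking on is to compare the attention head counts in the checkpoint's config. A minimal sketch, assuming the standard Hugging Face Qwen2 config fields (the path is just the one from my command above):

import json

# Load the Hugging Face config shipped with the checkpoint.
with open("/Qwen1.5-32B-Chat/config.json") as f:
    cfg = json.load(f)

# For MHA models these two values are equal; for a GQA model such as
# the 32B, num_key_value_heads is smaller than num_attention_heads.
print("num_attention_heads:", cfg.get("num_attention_heads"))
print("num_key_value_heads:", cfg.get("num_key_value_heads"))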

AGI-player · Apr 28 '24

@AGI-player Thanks for your feedback. This is a known issue and we will fix it soon.

jershi425 · May 03 '24

ok~

AGI-player · May 06 '24

I got this error: [screenshot of the traceback]

Hukongtao · May 20 '24

It should be caused by the difference between GQA (used by the 32B model) and MHA (used by the other sizes).
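To make that concrete, here is a rough sketch of why code written for MHA mis-sizes the fused QKV weight under GQA. The numbers are what I'd expect from the 32B config and are illustrative only, not taken from the TensorRT-LLM sources:

# Head counts as I'd expect them in the Qwen1.5-32B config (illustrative).
hidden_size = 5120
num_attention_heads = 40
num_key_value_heads = 8
head_dim = hidden_size // num_attention_heads  # 128

# MHA assumption: Q, K and V each have num_attention_heads heads,
# so the fused QKV weight has 3 * 40 * 128 = 15360 output rows.
mha_qkv_rows = 3 * num_attention_heads * head_dim

# GQA reality: K and V only have num_key_value_heads heads, so the
# fused weight has (40 + 2 * 8) * 128 = 7168 output rows instead.
gqa_qkv_rows = (num_attention_heads + 2 * num_key_value_heads) * head_dim

print(mha_qkv_rows, gqa_qkv_rows)  # 15360 7168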

AGI-player · May 20 '24

[screenshot of the modified code] With this modification, the code runs normally.

Hukongtao · May 20 '24

Nice, I'll try it.

AGI-player · May 21 '24

I commented out those three lines, but I still get the same error: [screenshot]

shilijunConnan · May 21 '24

Did you set "--qwen_type qwen2"?

Hukongtao · May 21 '24

I did set it, using:

python3 convert_checkpoint.py --model_dir /workspace/model/model/Qwen1.5-32B-Chat/ --output_dir /workspace/model/model/Qwen-32B-trt --dtype float16 --qwen_type qwen2

but it doesn't seem to work.

shilijunConnan · May 21 '24

python3 convert_checkpoint.py \
    --model_dir         ./Qwen1.5-32B-Chat-GPTQ-Int4/ \
    --output_dir        ./tllm_checkpoint_1gpu_gptq/ \
    --dtype float16 \
    --use_weight_only \
    --weight_only_precision int4_gptq \
    --per_group \
    --load_model_on_cpu \
    --qwen_type qwen2

This works for me
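(For anyone following along: after conversion you still need to build the engine from the checkpoint, e.g. with trtllm-build --checkpoint_dir ./tllm_checkpoint_1gpu_gptq/ --output_dir ./engine_1gpu_gptq/ as in the standard TensorRT-LLM workflow; the engine output path here is just an example, and extra flags may be needed depending on your version.)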

Hukongtao · May 21 '24

Thanks, it works for me with Qwen1.5-32B-Chat-GPTQ-Int4 too.

shilijunConnan · May 21 '24

This issue arises because conversion of the non-quantized version of Qwen1.5 is not implemented in "tensorrt_llm/models/qwen/convert.py" or "tensorrt_llm/models/qwen/model.py".
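Until that lands, the heart of a local patch is making the fused QKV handling GQA-aware. A minimal sketch of the idea, with made-up helper and variable names rather than the actual convert.py code:

import torch

def split_fused_qkv(qkv_weight: torch.Tensor, num_heads: int,
                    num_kv_heads: int, head_dim: int):
    # The fused rows are laid out as [Q | K | V]. Under MHA the three
    # chunks are equal, but under GQA the K and V chunks are smaller,
    # so an equal three-way split reads past the K/V boundaries.
    q_size = num_heads * head_dim
    kv_size = num_kv_heads * head_dim
    q, k, v = torch.split(qkv_weight, [q_size, kv_size, kv_size], dim=0)
    return q, k, v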

KnightLancelot · May 23 '24

Is this issue fixed now?

Fred-cell · May 24 '24

No.

KnightLancelot · May 27 '24

@jershi425 Is this issue fixed in the latest version?

AGI-player · Jun 21 '24

[Automated reply from QQ Mail] Received~

shilijunConnan · Jun 21 '24