TensorRT-LLM
failed to convert Qwen1.5-32B-Chat
I ran the following command:
python convert_checkpoint.py --model_dir /Qwen1.5-32B-Chat/ --dtype bfloat16 --output_dir /Qwen1.5-32B/trt_ckpts/bf16/1-gpu/
error:
The config.json of Qwen1.5-32B-Chat may be causing this problem.
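(For reference, a minimal sketch for inspecting the config.json fields in question; it only assumes the standard Hugging Face Qwen2 field names and the model path from the command above.)

```python
# Minimal sketch: print the attention-related fields of the checkpoint's
# config.json so they can be compared with the smaller Qwen1.5 models.
# Assumes standard Hugging Face Qwen2 field names; path matches the command above.
import json

with open("/Qwen1.5-32B-Chat/config.json") as f:
    cfg = json.load(f)

for key in ("num_attention_heads", "num_key_value_heads",
            "hidden_size", "num_hidden_layers", "intermediate_size"):
    print(key, "=", cfg.get(key))
```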
@AGI-player Thanks for your feedback. This is a known issue and we will fix it soon.
ok~
I got this error
It is likely caused by the difference between GQA (32B) and MHA (the other sizes).
With this modification, the code can run normally
Nice, I'll try it.
I commented out these three lines, but I still get the same error.
Did you set "--qwen_type qwen2"?
I set it, using:
python3 convert_checkpoint.py --model_dir /workspace/model/model/Qwen1.5-32B-Chat/ --output_dir /workspace/model/model/Qwen-32B-trt --dtype float16 --qwen_type qwen2
and it doesn't seem to work.
python3 convert_checkpoint.py \
--model_dir ./Qwen1.5-32B-Chat-GPTQ-Int4/ \
--output_dir ./tllm_checkpoint_1gpu_gptq/ \
--dtype float16 \
--use_weight_only \
--weight_only_precision int4_gptq \
--per_group \
--load_model_on_cpu \
--qwen_type qwen2
This works for me
Thanks, it works for me with Qwen1.5-32B-Chat-GPTQ-Int4 too.
This issue arises because conversion of the non-quantized version of Qwen1.5 is not implemented in "tensorrt_llm/models/qwen/convert.py" or "tensorrt_llm/models/qwen/model.py".
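(To make the GQA point above concrete, here is a purely illustrative sketch of the shape bookkeeping a GQA-aware converter has to do; it is not the actual TensorRT-LLM code, and the function name and toy numbers are made up for the example.)

```python
# Illustrative sketch only -- NOT the actual TensorRT-LLM converter code.
# Shows why conversion code written for MHA breaks on Qwen1.5-32B, where
# num_key_value_heads < num_attention_heads (GQA).
import numpy as np

def fuse_qkv(q_w, k_w, v_w, n_head, n_kv_head, head_dim):
    """Fuse separate q/k/v projection weights into one QKV weight.

    MHA (n_kv_head == n_head): fused rows = 3 * n_head * head_dim
    GQA (n_kv_head <  n_head): fused rows = (n_head + 2 * n_kv_head) * head_dim
    Code that hard-codes the MHA formula produces wrong shapes for the 32B model.
    """
    assert q_w.shape[0] == n_head * head_dim
    assert k_w.shape[0] == n_kv_head * head_dim and v_w.shape[0] == n_kv_head * head_dim
    # Concatenate along the output dimension: [Q | K | V].
    return np.concatenate([q_w, k_w, v_w], axis=0)

# Toy shapes (hypothetical numbers, just to show the difference):
hidden, n_head, n_kv_head = 1024, 16, 4
head_dim = hidden // n_head
q = np.zeros((n_head * head_dim, hidden), dtype=np.float16)
k = np.zeros((n_kv_head * head_dim, hidden), dtype=np.float16)
v = np.zeros((n_kv_head * head_dim, hidden), dtype=np.float16)
print(fuse_qkv(q, k, v, n_head, n_kv_head, head_dim).shape)  # (1536, 1024), not (3072, 1024)
```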
is this issue fixed now?
no
@jershi425 is this issue fixed in the latest version?
This is an automatic reply from QQ Mail. Received~