
ImportError: cannot import name 'create_config_from_hugging_face' from 'tensorrt_llm.models.llama.convert' (/usr/local/lib/python3.10/dist-packages/tensorrt_llm/models/llama/convert.py)

Open sleepwalker2017 opened this issue 1 year ago • 8 comments

python convert_checkpoint.py --model_dir ${BASE_LLAMA_MODEL} \
                            --output_dir ./tllm_checkpoint_1gpu_lora_rank \
                            --dtype float16 \
                            --hf_lora_dir ${LORA_1} \
                            --max_lora_rank 32 \
                            --lora_target_modules "attn_q" "attn_k" "attn_v"

This is my command. I'm using TensorRT-LLM version 0.9.0.dev2024030500.

root@bms-airtrunk-d-g18v3-app-10-192-82-3:/usr/local/lib/python3.10/dist-packages/tensorrt_llm#
grep -rn "create_config_from_hugging_face"

grep finds nothing.
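A quick way to confirm whether the installed wheel matches the checked-out examples is to print both versions side by side. This is a sketch; it assumes you run it from the repository root and that the example's requirements.txt pins a tensorrt_llm version:

```shell
# Version of the installed tensorrt_llm package
python3 -c "import tensorrt_llm; print(tensorrt_llm.__version__)"

# Version expected by the cloned examples
grep -i tensorrt examples/llama/requirements.txt
```

If the two disagree, the ImportError above is expected: the example script imports symbols that only exist in the matching package version.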

sleepwalker2017 avatar Mar 13 '24 03:03 sleepwalker2017

Traceback (most recent call last):
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 14, in <module>
    from tensorrt_llm.models.llama.convert import (create_config_from_hugging_face,
ImportError: cannot import name 'create_config_from_hugging_face' from 'tensorrt_llm.models.llama.convert' (/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py)

yyrg020610 avatar Mar 13 '24 10:03 yyrg020610

cloned code: /data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py
from pypi: /var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py

They are from different versions of tensorrt-llm.

Try

cp -r /data/yyrg_projects/TensorRT-LLM/tensorrt_llm/* /var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/

tp-nan avatar Mar 14 '24 02:03 tp-nan

You can check the example version here and compare it with the version of tensorrt_llm package.

byshiue avatar Mar 14 '24 09:03 byshiue

Traceback (most recent call last):
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 521, in <module>
    main()
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 513, in main
    convert_and_save_hf(args)
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 433, in convert_and_save_hf
    llama = from_hugging_face(
  File "/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py", line 1191, in from_hugging_face
    weights = load_weights_from_hf(config=config,
  File "/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py", line 1304, in load_weights_from_hf
    weights = convert_hf_llama(
  File "/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py", line 590, in convert_hf_llama
    q_weight = get_weight(model_params, prefix + 'self_attn.q_proj', dtype)
  File "/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py", line 399, in get_weight
    return config[prefix + '.weight'].detach().cpu()
NotImplementedError: Cannot copy out of meta tensor; no data!

How can I deal with this, everyone?
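For context on that last error: tensors on PyTorch's meta device carry only shape and dtype metadata, with no backing storage, so any attempt to materialize them (e.g. `.cpu()`) fails with exactly this message. A minimal reproduction, independent of TensorRT-LLM:

```python
import torch

# A meta tensor has shape/dtype metadata but no actual data.
t = torch.empty(2, 3, device="meta")
print(t.shape)  # torch.Size([2, 3])

# Copying it to CPU needs real storage, which it does not have.
try:
    t.cpu()
except NotImplementedError as e:
    print(e)  # Cannot copy out of meta tensor; no data!
```

In this thread it suggests the Hugging Face weights were never actually loaded into memory before conversion.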

yyrg020610 avatar Mar 14 '24 10:03 yyrg020610

> cloned code: /data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py
> from PyPI: /var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py
>
> They are from different versions of tensorrt-llm. Try
>
> cp -r /data/yyrg_projects/TensorRT-LLM/tensorrt_llm/* /var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/

Thanks a lot, that solved the problem. Continuing with the next steps, I ran into another issue:

Traceback (most recent call last):
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 521, in <module>
    main()
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 513, in main
    convert_and_save_hf(args)
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 433, in convert_and_save_hf
    llama = from_hugging_face(
  File "/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py", line 1191, in from_hugging_face
    weights = load_weights_from_hf(config=config,
  File "/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py", line 1304, in load_weights_from_hf
    weights = convert_hf_llama(
  File "/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py", line 590, in convert_hf_llama
    q_weight = get_weight(model_params, prefix + 'self_attn.q_proj', dtype)
  File "/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py", line 399, in get_weight
    return config[prefix + '.weight'].detach().cpu()
NotImplementedError: Cannot copy out of meta tensor; no data!

yyrg020610 avatar Mar 14 '24 10:03 yyrg020610

> You can check the example version here and compare it with the version of the tensorrt_llm package.

Hi, I checked it; it's 0.9.0.dev2024030500.

sleepwalker2017 avatar Mar 15 '24 10:03 sleepwalker2017

Please install the requirements.txt in the llama example first:

pip install -r examples/llama/requirements.txt
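If the package and examples still disagree after installing the requirements, reinstalling a tensorrt_llm wheel that matches the checked-out examples usually resolves it. The index URL below is the one NVIDIA documents for pre-release wheels, and the pinned version is the one reported in this thread; treat both as assumptions to adapt:

```shell
# Install the requirements pinned by the llama example
pip install -r examples/llama/requirements.txt

# Or reinstall tensorrt_llm itself to match the examples' version
pip install --pre --extra-index-url https://pypi.nvidia.com tensorrt_llm==0.9.0.dev2024030500
```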

QiJune avatar Mar 19 '24 03:03 QiJune

> They are from different versions of tensorrt-llm. Try
>
> cp -r /data/yyrg_projects/TensorRT-LLM/tensorrt_llm/* /var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/

> Thanks a lot, that solved the problem. Continuing with the next steps, I ran into another issue: Traceback (most recent call last): ... NotImplementedError: Cannot copy out of meta tensor; no data!

try --load_model_on_cpu
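Applied to the command from the first comment, that flag would look like this (the environment variables and output directory are the original poster's, and the flag itself is the one suggested above):

```shell
python convert_checkpoint.py --model_dir ${BASE_LLAMA_MODEL} \
                             --output_dir ./tllm_checkpoint_1gpu_lora_rank \
                             --dtype float16 \
                             --hf_lora_dir ${LORA_1} \
                             --max_lora_rank 32 \
                             --lora_target_modules "attn_q" "attn_k" "attn_v" \
                             --load_model_on_cpu
```

Forcing the Hugging Face model onto the CPU avoids the meta-device placement that triggers the "Cannot copy out of meta tensor" error.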

sugar5727 avatar Apr 24 '24 07:04 sugar5727

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 15 days.

github-actions[bot] avatar Jun 08 '24 01:06 github-actions[bot]

It looks like the issue is resolved. Closing this issue.

byshiue avatar Jun 12 '24 03:06 byshiue