TensorRT-LLM
ImportError: cannot import name 'create_config_from_hugging_face' from 'tensorrt_llm.models.llama.convert' (/usr/local/lib/python3.10/dist-packages/tensorrt_llm/models/llama/convert.py)
python convert_checkpoint.py --model_dir ${BASE_LLAMA_MODEL} \
--output_dir ./tllm_checkpoint_1gpu_lora_rank \
--dtype float16 \
--hf_lora_dir ${LORA_1} \
--max_lora_rank 32 \
--lora_target_modules "attn_q" "attn_k" "attn_v"
This is my command. The reported version is: [TensorRT-LLM] TensorRT-LLM version: 0.9.0.dev2024030500
root@bms-airtrunk-d-g18v3-app-10-192-82-3:/usr/local/lib/python3.10/dist-packages/tensorrt_llm#
grep -rn "create_config_from_hugging_face"
grep finds nothing.
Traceback (most recent call last):
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 14, in <module>
ImportError: cannot import name 'create_config_from_hugging_face' from 'tensorrt_llm.models.llama.convert' (/usr/local/lib/python3.10/dist-packages/tensorrt_llm/models/llama/convert.py)
cloned code: /data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py
from PyPI: /var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py
They are from different versions of tensorrt-llm.
try
cp -r /data/yyrg_projects/TensorRT-LLM/tensorrt_llm/* /var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/
You can check the example version here and compare it with the version of tensorrt_llm package.
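One way to do that comparison (a sketch using the paths from this thread; the exact pin format inside requirements.txt can differ between releases):

from pathlib import Path
import tensorrt_llm

print("installed:", tensorrt_llm.__version__)

# The cloned examples record the tensorrt_llm version they target,
# typically as a pin in examples/llama/requirements.txt.
req = Path("/data/yyrg_projects/TensorRT-LLM/examples/llama/requirements.txt")
for line in req.read_text().splitlines():
    if "tensorrt_llm" in line:
        print("examples expect:", line.strip())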
Traceback (most recent call last):
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 521, in <module>
How do I deal with this?
Thanks a lot, that solved the problem. Continuing with the next steps, I ran into another issue:
Traceback (most recent call last):
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 521, in <module>
You can check the example version here and compare it with the version of tensorrt_llm package.
Hi, I checked it; it's 0.9.0.dev2024030500.
Please install the requirements.txt in the llama example first:
pip install -r examples/llama/requirements.txt
Traceback (most recent call last):
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 521, in <module>
    main()
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 513, in main
    convert_and_save_hf(args)
  File "/data/yyrg_projects/TensorRT-LLM/examples/llama/convert_checkpoint.py", line 433, in convert_and_save_hf
    llama = from_hugging_face(
  File "/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py", line 1191, in from_hugging_face
    weights = load_weights_from_hf(config=config,
  File "/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py", line 1304, in load_weights_from_hf
    weights = convert_hf_llama(
  File "/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py", line 590, in convert_hf_llama
    q_weight = get_weight(model_params, prefix + 'self_attn.q_proj', dtype)
  File "/var/lib/g02u1/.local/lib/python3.10/site-packages/tensorrt_llm/models/llama/convert.py", line 399, in get_weight
    return config[prefix + '.weight'].detach().cpu()
NotImplementedError: Cannot copy out of meta tensor; no data!
try --load_model_on_cpu
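For context on why that flag should help (a minimal sketch, independent of TensorRT-LLM's code): PyTorch meta tensors carry only shape and dtype, with no underlying storage, so any attempt to copy one out fails with exactly this error. As the name suggests, --load_model_on_cpu makes the script materialize the Hugging Face weights on CPU, so get_weight() has real data to copy:

import torch

w = torch.empty(4, 4, device="meta")  # metadata only: shape/dtype, no storage
print(w.shape, w.dtype)               # works, metadata is present

try:
    w.detach().cpu()                  # there is no data to copy out of a meta tensor
except NotImplementedError as e:
    print(e)  # "Cannot copy out of meta tensor; no data!" (wording may vary by torch version)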
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 15 days.
It looks like the issue is resolved. Closing this issue.