
Error when deploying text-generation-webui: Can't find config.json at 'loras\['chinese-alpaca-lora-13b']'


There is no config.json file in `loras` to begin with, so why is this error reported??

```
(llama.cpp-master) d:\tools\text-generation-webui>python server.py --model llama_13b_hf --lora chinese-alpaca-lora-13b --gpu-memory 6
Gradio HTTP request redirected to localhost :)
Loading llama_13b_hf...
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 3/3 [03:36<00:00, 72.13s/it]
Loaded the model in 218.80 seconds.
Applying the following LoRAs to llama_13b_hf: chinese-alpaca-lora-13b
---------['chinese-alpaca-lora-13b']-----------
Traceback (most recent call last):
  File "D:\python3\llama.cpp-master\lib\site-packages\peft\utils\config.py", line 99, in from_pretrained
    config_file = hf_hub_download(pretrained_model_name_or_path, CONFIG_NAME)
  File "D:\python3\llama.cpp-master\lib\site-packages\huggingface_hub\utils\_validators.py", line 112, in _inner_fn
    validate_repo_id(arg_value)
  File "D:\python3\llama.cpp-master\lib\site-packages\huggingface_hub\utils\_validators.py", line 157, in validate_repo_id
    raise HFValidationError(f"Repo id must be a string, not {type(repo_id)}: '{repo_id}'.")
huggingface_hub.utils._validators.HFValidationError: Repo id must be a string, not <class 'pathlib.WindowsPath'>: 'loras\['chinese-alpaca-lora-13b']'.
```

During handling of the above exception, another exception occurred:

```
Traceback (most recent call last):
  File "d:\tools\text-generation-webui\server.py", line 919, in <module>
    add_lora_to_model([shared.args.lora])
  File "d:\tools\text-generation-webui\modules\LoRA.py", line 44, in add_lora_to_model
    shared.model = PeftModel.from_pretrained(shared.model, Path(f"{shared.args.lora_dir}/{lora_names}"), **params)
  File "D:\python3\llama.cpp-master\lib\site-packages\peft\peft_model.py", line 135, in from_pretrained
    config = PEFT_TYPE_TO_CONFIG_MAPPING[PeftConfig.from_pretrained(model_id).peft_type].from_pretrained(model_id)
  File "D:\python3\llama.cpp-master\lib\site-packages\peft\utils\config.py", line 101, in from_pretrained
    raise ValueError(f"Can't find config.json at '{pretrained_model_name_or_path}'")
ValueError: Can't find config.json at 'loras\['chinese-alpaca-lora-13b']'
```
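The cause is visible in the `modules/LoRA.py` frame: `server.py` wraps the LoRA name in a list, and the f-string then interpolates the whole list into the path, yielding the literal directory `loras/['chinese-alpaca-lora-13b']`, which of course contains no config.json. A minimal standalone sketch of the mistake and a fix (variable names here are illustrative, not the webui's actual code):

```python
from pathlib import Path

lora_dir = "loras"
lora_names = ["chinese-alpaca-lora-13b"]  # a list, as passed by server.py

# Buggy: interpolating the whole list into the f-string embeds the
# list's repr, including brackets and quotes, into the path itself.
bad_path = Path(f"{lora_dir}/{lora_names}")
print(bad_path)  # e.g. loras/['chinese-alpaca-lora-13b']

# Fixed: build one path per LoRA name instead.
good_paths = [Path(lora_dir) / name for name in lora_names]
print(good_paths[0])  # e.g. loras/chinese-alpaca-lora-13b
```

Since the broken path points at a directory that does not exist, peft falls back to treating it as a Hub repo id, which is why the secondary error comes from `validate_repo_id`.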

qingying6 avatar Apr 20 '23 04:04 qingying6

Same question here.

lucoo01 avatar Apr 20 '23 09:04 lucoo01

Which version of peft are you using?

iMountTai avatar Apr 22 '23 14:04 iMountTai

It worked for me later, after I merged the LoRA into the model and ran it again.

lucoo01 avatar Apr 23 '23 01:04 lucoo01

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] avatar May 01 '23 00:05 github-actions[bot]

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.

github-actions[bot] avatar May 10 '23 00:05 github-actions[bot]