FastChat
An issue when re-running a command
root@c68c31f45482:/workspace/zt/code/FastChat# python3 -m fastchat.model.apply_delta --base-model-path ../../model/Llama-2-7b-hf --target-model-path ../Sequence-Scheduling/ckpts/vicuna-7b --delta-path lmsys/vicuna-7b-delta-v1.1
Loading the delta weights from lmsys/vicuna-7b-delta-v1.1
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message
Downloading shards: 0%| | 0/2 [00:00<?, ?it/s]
I just want to apply the Vicuna delta weights to the original LLaMA model. The first time I executed the command described in the tutorial, it ran normally. But then I noticed I had set a wrong output path, so I cancelled the command and ran it again. Unfortunately, it now shows the output above, and I have waited more than 30 minutes for the download to make progress. What is wrong here, and how can I fix it? (I have a VPN, so it's not a network connection issue.)
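Not an official fix, but one plausible cause is that cancelling the first run left a stale lock file or a partial download in the Hugging Face cache, which can make the second run appear stuck at 0%. A hedged sketch for inspecting and cleaning the cache (the path below is the library's default location; adjust it if you have set `HF_HOME` or `TRANSFORMERS_CACHE`):

```shell
# Default Hugging Face hub cache location (assumption: no custom HF_HOME)
CACHE="${HF_HOME:-$HOME/.cache/huggingface}/hub"

# List cached repos; an interrupted download may leave *.incomplete blobs behind
ls "$CACHE" 2>/dev/null

# Remove stale lock files and incomplete blobs from the cancelled run,
# then re-run the apply_delta command so it starts the download cleanly
find "$CACHE" -name "*.lock" -delete 2>/dev/null
find "$CACHE" -name "*.incomplete" -delete 2>/dev/null
echo "cache cleaned"
```

If the cache looks fine, it may also be worth deleting just the `models--lmsys--vicuna-7b-delta-v1.1` folder inside that cache directory and letting the download restart from scratch.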