
Why does converting the checkpoints fail?

Open · tianke0711 opened this issue on Sep 13, 2023 · 3 comments

I used a Colab T4 to fine-tune the model. First, I needed to convert the checkpoints with the following command:

!python /content/convert_llama_weights_to_hf.py --input_dir /content/llama/llama-2-7b --model_size 7B --output_dir /content/llama/models_hf/7B

But it failed:

2023-09-13 13:49:57.146475: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. If you see this, DO NOT PANIC! This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Fetching all parameters from the checkpoint at /content/llama/llama-2-7b.
^C

tianke0711 · Sep 13 '23
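The log cuts off while "Fetching all parameters from the checkpoint", which often points to the conversion process being killed for lack of memory rather than a bug in the script: holding the 7B weights in fp16 takes roughly 13–14 GB of CPU RAM (7e9 parameters × 2 bytes), which is more than a free-tier Colab VM typically provides. A minimal sketch to check headroom before running the conversion (assumes psutil, which is normally preinstalled in Colab):

```python
# Minimal sketch: check free CPU RAM before converting.
# If less than ~14 GB is available, converting a 7B checkpoint
# is likely to be OOM-killed partway through.
import psutil

available_gb = psutil.virtual_memory().available / 1e9
print(f"Available RAM: {available_gb:.1f} GB")
if available_gb < 14:
    print("Warning: probably not enough RAM to convert a 7B checkpoint.")
```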

Have you tried this?

jeffxtang · Oct 11 '23

same problem

jiaochunyu · Dec 19 '23

Hi! Please check the getting-the-meta-llama-models section for more info!

wukaixingxp · May 31 '24
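One alternative worth noting, if you have Hugging Face Hub access, is to download the already-converted HF-format checkpoints directly, which skips the local conversion step entirely. A minimal sketch, assuming you have been granted access to the gated meta-llama/Llama-2-7b-hf repo and are logged in with a Hugging Face token:

```python
# Minimal sketch: load the HF-format Llama 2 7B weights straight from the Hub,
# avoiding convert_llama_weights_to_hf.py.
# Assumes access to the gated meta-llama/Llama-2-7b-hf repo and a prior
# `huggingface-cli login` (or an HF_TOKEN set in the environment).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```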