Why does converting the checkpoints fail?
I am using a Colab T4 to fine-tune the model. First, I need to run the following command to convert the checkpoints:
!python /content/convert_llama_weights_to_hf.py \
    --input_dir /content/llama/llama-2-7b --model_size 7B --output_dir /content/llama/models_hf/7B
But it fails:
2023-09-13 13:49:57.146475: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. If you see this, DO NOT PANIC! This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Fetching all parameters from the checkpoint at /content/llama/llama-2-7b.
^C
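As a hedged reading of that log, the `^C` right after "Fetching all parameters" is usually not a manual interrupt: on Colab it is what the output shows when the runtime kills a process for exhausting system RAM, and holding the 7B fp16 checkpoint in CPU memory during conversion takes roughly 14 GB, more than the free T4 tier provides. One possible workaround, assuming you have been granted access to the gated meta-llama/Llama-2-7b-hf repo on the Hugging Face Hub, is to skip the local conversion entirely and load the already-converted weights, e.g.:

# Hedged sketch, not the thread's confirmed fix: load the weights that were
# already converted to the HF format instead of converting them locally.
# Assumes access to the gated meta-llama/Llama-2-7b-hf repo and that
# `huggingface-cli login` (or an HF_TOKEN env var) has been set up.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 so the 7B model fits in the T4's 16 GB VRAM
    device_map="auto",          # needs `pip install accelerate`
)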
Have you tried this?
Same problem here.
Hi! Please check the getting-the-meta-llama-models section for more info!
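For completeness, once the conversion itself succeeds (for example on a runtime with more system RAM), the converted checkpoint should load like any local Hugging Face model. A minimal sketch, assuming the --output_dir from the command above:

# Hedged sketch: load the locally converted checkpoint for fine-tuning.
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "/content/llama/models_hf/7B"  # the --output_dir used earlier

tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path)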