About "HTTPError: 404 Client Error" and "OSError: meta-llama/Llama-2-7b does not appear to have a file named config.json".
I encountered these errors while downloading Llama-2-7b from Hugging Face.
I have full permission to use the Llama-2 models and have also run huggingface-cli login.
I have the same problem; I hope someone knows a solution.
Could you check if the solutions in #394 or #593 help?
I downloaded the weights via the Meta request form and hit the same issue: those weights first need to be converted to the Hugging Face format. I converted them with the following code, adapted from snippets commented out in the llama-recipes quickstart example.
!pip install llama-recipes transformers datasets accelerate sentencepiece protobuf==3.20 py7zr scipy peft bitsandbytes fire torch_tb_profiler ipywidgets
# Locate the conversion script that ships with the transformers package
import os, transformers
TRANSFORM = os.path.join(os.path.dirname(transformers.__file__), 'models', 'llama', 'convert_llama_weights_to_hf.py')
model_dir = './models'
model_size = '7Bf'
hf_model_dir = './hf_models/llama-2-7B-chat'
!python $TRANSFORM --input_dir $model_dir --model_size $model_size --output_dir $hf_model_dir
model_dir is the path to the directory the weights were downloaded to; it should contain tokenizer.model and the checkpoint directories such as llama-2-7B. model_size selects which weights to convert, and the script looks for them in the subfolder of model_dir named after model_size, so I renamed the directories to the keywords the script accepts: for example, llama-2-7B-chat became 7Bf and llama-2-7B became 7B. That got the code working in my case, using hf_model_dir as the model_id.
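As a quick way to confirm the conversion actually produced a usable checkpoint before passing hf_model_dir to from_pretrained(), here is a small sketch; the helper name missing_hf_files is my own (not part of transformers), and it only checks the two files the original error messages complain about:

```python
import os

# Hypothetical helper: list the expected files that are missing from a
# converted Hugging Face checkpoint directory. The error "does not appear
# to have a file named config.json" means this check would fail.
def missing_hf_files(hf_model_dir):
    expected = ['config.json', 'tokenizer.model']
    return [f for f in expected
            if not os.path.exists(os.path.join(hf_model_dir, f))]
```

If missing_hf_files('./hf_models/llama-2-7B-chat') returns an empty list after conversion, loading with AutoModelForCausalLM.from_pretrained(hf_model_dir) should no longer raise the config.json error.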
A few weeks ago this ran fine for me without any conversion to the HF format, but now I am hitting the missing config.json error, which is why I find it so strange.