OSError
python gen_model_answer_baseline.py --model-path /data/transformers/vicuna-7b-v1.3 --model-id vicuna-7b-v1.3-0
python gen_model_answer_medusa.py --model-path /data/transformers/medusa_vicuna-7b-v1.3 --model-id medusa-vicuna-7b-v1.3-0
My vicuna-7b-v1.3 download comes from: https://huggingface.co/FasterDecoding/medusa-vicuna-7b-v1.3/tree/main
My medusa-vicuna-7b-v1.3 download comes from: https://huggingface.co/FasterDecoding/medusa-vicuna-7b-v1.3/tree/main
I used these commands to load the local models, and then an error was reported. How can I fix it?
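As a first sanity check, here is a minimal sketch (assuming the transformers library and the local path from the commands above) that tries to load the directory directly; with local_files_only=True, an OSError here means required files (config.json, tokenizer files, weight shards) are missing from the download rather than the Medusa scripts being at fault:

```python
from transformers import AutoConfig, AutoTokenizer

# Local path taken from the commands above -- adjust to your setup.
model_path = "/data/transformers/vicuna-7b-v1.3"

# local_files_only=True makes transformers read from disk only, so any
# OSError raised here comes from the directory contents, not the network.
config = AutoConfig.from_pretrained(model_path, local_files_only=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, local_files_only=True)
print(config.model_type, type(tokenizer).__name__)
```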
Thanks for your interest! It seems to be a network issue and may be due to the GFW. Could you please check if that's the case?
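One way to test that hypothesis (a sketch, again assuming the local path from the commands above): put transformers into offline mode before loading. If the OSError goes away, the original failure was a network request to huggingface.co; if it persists, the problem is local.

```python
import os

# Standard Hugging Face environment variables; set them before importing
# transformers so no request to huggingface.co is ever attempted.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "/data/transformers/vicuna-7b-v1.3",  # local path from the commands above
    local_files_only=True,
)
print("Loaded offline:", model.config.model_type)
```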
Thank you for your reply! I have fixed that problem. Can you take a look at the question I asked here? https://github.com/FasterDecoding/Medusa/issues/45
Sorry, I haven't tried llama-chat yet, but you may find our new training environment https://github.com/ctlllll/axolotl helpful. You can refer to the example configs and start training with a command like accelerate launch -m axolotl.cli.train examples/medusa/your_config.yml.