Error serving baize-lora-7B
I'm trying to host my own arena, but I want to do so with models that have been fine-tuned using QLoRA. To check whether LoRA models work at all, I first followed the instructions for serving with the web GUI and picked the project-baize/baize-lora-7B model. That repo only contains the adapter weights, though, so when I run `python -m fastchat.serve.model_worker --model-path project-baize/baize-lora-7B`, it raises the following error:
OSError: project-baize/baize-lora-7B does not appear to have a file named config.json. Checkout 'https://huggingface.co/project-baize/baize-lora-7B/main' for available files.
Has anyone encountered this, or does anyone know how to fix it?
I think you should use apply_lora.py to merge the adapter project-baize/baize-lora-7B into the base llama-7b model, then serve the merged checkpoint.
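For reference, here is a minimal merge sketch using the Hugging Face peft API directly, which is essentially what the merge script does. The local paths (`./llama-7b`, `./baize-merged-7b`) are placeholders, and it assumes you have the base llama-7b weights downloaded:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model (path is a placeholder for your local llama-7b copy).
base = AutoModelForCausalLM.from_pretrained(
    "./llama-7b", torch_dtype=torch.float16
)

# Load the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base, "project-baize/baize-lora-7B")

# Fold the adapter weights into the base weights, producing a plain model.
model = model.merge_and_unload()

# Save a full checkpoint (including config.json) that FastChat can serve.
model.save_pretrained("./baize-merged-7b")
tokenizer = AutoTokenizer.from_pretrained("./llama-7b")
tokenizer.save_pretrained("./baize-merged-7b")
```

After this, point `--model-path` at the merged directory instead of the adapter repo.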
Please use the merged weights directly: https://huggingface.co/project-baize/baize-v2-7b
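With a merged checkpoint like that one, the original command should work unchanged:

```
python -m fastchat.serve.model_worker --model-path project-baize/baize-v2-7b
```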