vllm
[Doc]: Failed to download lora adapter using the path from documentation
📚 The doc issue
https://docs.vllm.ai/en/latest/models/lora.html describes the steps to load a LoRA adapter:
```shell
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-2-7b-hf \
    --enable-lora \
    --lora-modules sql-lora=~/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/
```
There are two issues with this command:
- The model path is incorrect: the snapshot commit id `snapshots/0dfa347e8877a4d4ed19ee56c140fa518470028c` must be appended to the cache directory.
- `~` is not expanded automatically, so the adapter fails to load. At the moment, relative paths are not supported either.
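To avoid hard-coding the commit id at all, the snapshot directory can be resolved programmatically from the Hugging Face hub cache layout (`models--<org>--<name>/snapshots/<commit>`). A minimal sketch, where `resolve_snapshot` is a hypothetical helper (not part of vLLM or `huggingface_hub`):

```python
from pathlib import Path

def resolve_snapshot(cache_entry: str) -> str:
    """Hypothetical helper: given a hub cache entry such as
    ~/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test,
    return the absolute path of its most recent snapshot directory."""
    snapshots = Path(cache_entry).expanduser() / "snapshots"
    # Usually there is exactly one snapshot; pick the newest otherwise.
    latest = max(snapshots.iterdir(), key=lambda p: p.stat().st_mtime)
    return str(latest.resolve())
```

The returned path (absolute, with the commit id appended) is exactly what `--lora-modules` currently needs.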
Screenshots
- Path documented
- Update the path with the appended snapshot commit id
- Update to an absolute path
Suggest a potential alternative/fix
- Append the commit id: `snapshots/0dfa347e8877a4d4ed19ee56c140fa518470028c`
- Change `~` to `$HOME`
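For context on why `$HOME` works where `~` does not: bash only tilde-expands inside a word like `name=~/path` when `name` is a valid variable name, and `sql-lora` (with its hyphen) is not, so the literal `~` is handed to vLLM. A small demonstration, assuming a bash-like shell:

```shell
# "sql-lora" is not a valid shell variable name, so no tilde
# expansion happens inside the word; the literal "~" is passed on:
echo sql-lora=~/.cache/huggingface/hub
# $HOME goes through ordinary parameter expansion, so an absolute
# path reaches vLLM:
echo sql-lora=$HOME/.cache/huggingface/hub
```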
I will submit a PR for the short-term fix and a separate PR to support `~` expansion and dynamic loading from the model registry.