
[Doc]: Failed to download lora adapter using the path from documentation

Open Jeffwan opened this issue 1 year ago • 1 comment

📚 The doc issue

https://docs.vllm.ai/en/latest/models/lora.html describes the steps to load a LoRA adapter:

python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-2-7b-hf \
    --enable-lora \
    --lora-modules sql-lora=~/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/

There are two issues:

  1. The model path is incorrect: we should append snapshots/0dfa347e8877a4d4ed19ee56c140fa518470028c


  2. `~` is not expanded automatically, so loading the model fails; at the moment, relative paths are not supported.
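The second issue can be reproduced in plain Python: the shell only expands `~` when it begins an unquoted word, so in a `name=path` argument like `--lora-modules sql-lora=~/...` the server receives the literal `~` and has to expand it itself. A minimal sketch of what the loader would need to do:

```python
import os

# The documented command passes the adapter path inside a key=value
# argument, so the shell does not expand the leading "~" and the
# server sees the literal two-character prefix.

def expand(path: str) -> str:
    """Expand a leading '~' against $HOME, as the loader would need to."""
    return os.path.expanduser(path)

raw = "~/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test"
print(expand(raw))  # resolves against $HOME; no literal '~' remains
```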

Screenshots

  1. Path as documented (screenshot)

  2. Path updated with the appended snapshot commit id (screenshot)

  3. Path updated to an absolute path (screenshot)

Suggest a potential alternative/fix

  1. Append the commit id: snapshots/0dfa347e8877a4d4ed19ee56c140fa518470028c
  2. Change ~ to $HOME
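Putting the two fixes together, the documented command would look like this (the commit id is the one observed in this issue; a freshly downloaded copy of the adapter may resolve to a different snapshot directory):

```shell
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-2-7b-hf \
    --enable-lora \
    --lora-modules sql-lora=$HOME/.cache/huggingface/hub/models--yard1--llama-2-7b-sql-lora-test/snapshots/0dfa347e8877a4d4ed19ee56c140fa518470028c/
```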

Jeffwan avatar Jul 08 '24 23:07 Jeffwan

I will submit a PR for a short-term fix and a separate PR to support `~` expansion and dynamic loading from the model registry.

Jeffwan avatar Jul 08 '24 23:07 Jeffwan