Exception occurred when using lora-grpo
[tokenizer.py:281] No tokenizer found in /simon-stub-path, using base model tokenizer instead. (Exception: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: '/simon-stub-path'.) Is it a bug?
Hi, this is not a bug; it's expected behavior. The LoRA tensors are loaded directly from GPU memory, so the path has no practical significance. A dummy ("stub") value is filled in simply to prevent the path from triggering a non-None assertion inside vLLM. The tokenizer is taken from the base model, so there is no need to load one from the LoRA path.
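The pattern described above can be sketched with a minimal, self-contained example. This is not vLLM's actual code; the class and field names here are hypothetical stand-ins for a request object that asserts its path is non-None:

```python
from dataclasses import dataclass

# Hypothetical stand-in for a LoRA request object that asserts a
# non-None path (mirroring the pattern described above, not vLLM's
# real implementation).
@dataclass
class LoraRequestStub:
    name: str
    path: str  # must be non-None even when weights come from GPU memory

    def __post_init__(self):
        assert self.path is not None, "lora path must not be None"

# The real LoRA tensors live in GPU memory; the path is never read,
# so a dummy value is enough to satisfy the assertion.
STUB_PATH = "/simon-stub-path"
req = LoraRequestStub(name="my-lora", path=STUB_PATH)
print(req.path)
```

Since the weights are already resident in memory, the only role of `path` is to pass the validity check, which is why an arbitrary stub string works.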
Got it! Thanks for your reply!
Why does the log show "Downloading Model to directory: /mnt/workspace/.cache/modelscope/simon-stub-path"?