FastChat
Unable to launch the OpenAI API [Vicuna-7B]. Error log: Using pad_token, but it is not set yet.
Could you suggest some ways to debug this? By the way, this model can be run for inference successfully with the FastChat CLI.
Error Log:
2023-06-13 15:19:44 | INFO | model_worker | Loading the model vicuna-7b on worker 005f53 ...
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards:  50%|█████     | 1/2 [00:08<00:08, 8.92s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:12<00:00, 5.54s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:12<00:00, 6.05s/it]
2023-06-13 15:19:57 | ERROR | stderr | Using pad_token, but it is not set yet.
2023-06-13 15:20:03 | INFO | model_worker | Register to controller
2023-06-13 15:20:03 | ERROR | stderr | ╭─────────────────────── Traceback (most recent call last) ───────────────────────╮
2023-06-13 15:20:03 | ERROR | stderr | │ /mnt/lustre/duanhaodong/anaconda3/envs/mm2/lib/python3.8/runpy.py:194 in _run_module_as_main
2023-06-13 15:20:03 | ERROR | stderr | │
2023-06-13 15:20:03 | ERROR | stderr | │   191 │   main_globals = sys.modules["__main__"].__dict__
2023-06-13 15:20:03 | ERROR | stderr | │   192 │   if alter_argv:
2023-06-13 15:20:03 | ERROR | stderr | │   193 │   │   sys.argv[0] = mod_spec.origin
2023-06-13 15:20:03 | ERROR | stderr | │ ❱ 194 │   return _run_code(code, main_globals, None,
2023-06-13 15:20:03 | ERROR | stderr | │   195 │   │   │   │   │    "__main__", mod_spec)
2023-06-13 15:20:03 | ERROR | stderr | │   196 │
2023-06-13 15:20:03 | ERROR | stderr | │   197 def run_module(mod_name, init_globals=None,
2023-06-13 15:20:03 | ERROR | stderr | │
2023-06-13 15:20:03 | ERROR | stderr | │ /mnt/lustre/duanhaodong/anaconda3/envs/mm2/lib/python3.8/runpy.py:87 in _run_code
2023-06-13 15:20:03 | ERROR | stderr | │
2023-06-13 15:20:03 | ERROR | stderr | │    84 │   │   │   │   │   loader = loader,
2023-06-13 15:20:03 | ERROR | stderr | │    85 │   │   │   │   │   package = pkg_name,
2023-06-13 15:20:03 | ERROR | stderr | │    86 │   │   │   │   │   spec = mod_spec)
2023-06-13 15:20:03 | ERROR | stderr | │ ❱  87 │   exec(code, run_globals)
2023-06-13 15:20:03 | ERROR | stderr | │    88 │   return run_globals
2023-06-13 15:20:03 | ERROR | stderr | │    89 │
2023-06-13 15:20:03 | ERROR | stderr | │    90 def _run_module_code(code, init_globals=None,
2023-06-13 15:20:03 | ERROR | stderr | │
2023-06-13 15:20:03 | ERROR | stderr | │ /mnt/lustre/duanhaodong/anaconda3/envs/mm2/lib/python3.8/site-packages/fastchat/serve/model_worker.py:414 in
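For reference, the pad_token line is a warning emitted by the Hugging Face tokenizer, separate from the traceback above. A minimal sketch for inspecting and setting it, assuming a placeholder local checkpoint path:

```python
from transformers import AutoTokenizer

# Placeholder path: point this at the local vicuna-7b checkpoint.
tokenizer = AutoTokenizer.from_pretrained("/path/to/vicuna-7b", use_fast=False)

if tokenizer.pad_token is None:
    # LLaMA-style tokenizers ship without a pad token; reusing an existing special token
    # is a common workaround and silences the "Using pad_token, but it is not set yet" warning.
    tokenizer.pad_token = tokenizer.unk_token or tokenizer.eos_token

print("pad_token:", tokenizer.pad_token)
```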
@kennymckormick Please download the v1.1 weights here. The old weights had no eos_token.
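If it helps, a quick way to check whether a local copy of the weights actually carries an eos_token (the checkpoint path below is a placeholder):

```python
from transformers import AutoTokenizer

# Placeholder path: point this at your local Vicuna-7B v1.1 directory.
tok = AutoTokenizer.from_pretrained("/path/to/vicuna-7b-v1.1", use_fast=False)
print("eos_token:", tok.eos_token)
print("pad_token:", tok.pad_token)
# If eos_token prints None, the weights are an old conversion; re-download or re-apply the delta.
```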
I have the same issue even though I downloaded the v1.1 weights.
The v1.1 weights have changed several times; you can delete your local copy and download it again.
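A minimal sketch of re-fetching a clean copy, assuming the published lmsys delta repo and placeholder local paths (the apply_delta flags follow the FastChat README and may differ between versions):

```python
from huggingface_hub import snapshot_download

# Re-download the published v1.1 delta weights
# (the full model is distributed as a delta over LLaMA-7B).
delta_dir = snapshot_download(repo_id="lmsys/vicuna-7b-delta-v1.1")
print("delta downloaded to:", delta_dir)

# Then re-apply the delta to a LLaMA-7B base (paths are placeholders):
#   python3 -m fastchat.model.apply_delta \
#       --base-model-path /path/to/llama-7b \
#       --target-model-path /path/to/vicuna-7b-v1.1 \
#       --delta-path lmsys/vicuna-7b-delta-v1.1
```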
@Haifengtao @kennymckormick I encountered the same issue. Could you please tell me how you resolved it? Thank you! :)
I'm going to try the latest v1.1 weights and will update the results once they're ready. One more thing: when trying the v1.3 weights, I still get an error in register_to_controller, where the POST request returns a 503 error.
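In case it helps with the 503: a quick sketch to confirm the worker machine can reach the controller at all (the IP and port are placeholders, and /list_models is assumed here to be the controller's model-listing endpoint):

```python
import requests

# Placeholder address: use the controller's real IP and port (21001 is the default).
controller = "http://10.0.0.1:21001"
resp = requests.post(controller + "/list_models", timeout=5)
print(resp.status_code, resp.json())
# A connection error or non-200 status here suggests the worker's controller address is wrong
# or the port is blocked, which would explain a 503 during register_to_controller.
```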
@kennymckormick Thank you for your prompt reply. After replacing all localhost addresses with my real IP, I could successfully run the server and the controller and connect to them with HTTP requests (via Postman). However, I am still encountering some issues when using the OpenAI API.
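For the OpenAI API side, a minimal client sketch against FastChat's OpenAI-compatible server, assuming the 0.x openai package, a placeholder IP, and the default API server port 8000:

```python
import openai

openai.api_key = "EMPTY"                      # the FastChat server does not check the key
openai.api_base = "http://10.0.0.1:8000/v1"   # placeholder IP; use the API server's real address

completion = openai.ChatCompletion.create(
    model="vicuna-7b",                         # must match the model name the worker registered
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```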
@Jacob-yen what is the issue you see?