FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Addresses https://github.com/lm-sys/FastChat/issues/847. Test procedure: 1. Start the controller, model worker 1, model worker 2, and the web server; the model list does show 2 models. 2. Kill model worker 2 and check the dropdown list again, and we...
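A minimal sketch of the check behind that test procedure, assuming the controller exposes the /refresh_all_workers and /list_models endpoints used by gradio_web_server.py and listens on its default port (both assumptions; adjust for your setup):

```python
import requests

CONTROLLER_URL = "http://localhost:21001"  # assumed default controller address

def get_model_list(controller_url: str) -> list:
    # Ask the controller to drop dead workers, then fetch the surviving model names.
    requests.post(controller_url + "/refresh_all_workers")
    ret = requests.post(controller_url + "/list_models")
    return sorted(ret.json()["models"])

# After killing model worker 2, this should report only one model.
print(get_model_list(CONTROLLER_URL))
```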
See the documentation in [docs/github_action.md](https://github.com/lm-sys/FastChat/pull/832/commits/7a585596fd9c9a76df8173867edcdb12507e5a07#diff-bcdbae1b5d7b0ce752120aac1de47145638b6ab6edbb4290ad7ac08aa14c543b) for details
I want the model to output more text. I see that the maximum length is limited to 2048 tokens, so how do I get past this limit? It bothers me very much,...
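For context, the 2048-token figure is the context window of the underlying LLaMA-based model, which a request cannot exceed; the number of newly generated tokens, however, is a per-request parameter. A rough sketch, assuming FastChat's OpenAI-compatible server is running on localhost:8000 and serving a model registered as vicuna-13b-v1.1 (both assumptions):

```python
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed local endpoint
    json={
        "model": "vicuna-13b-v1.1",               # assumed model name
        "messages": [{"role": "user", "content": "Write a long story about a space probe."}],
        # Raise the cap on generated tokens; prompt + output must still fit the 2048-token context.
        "max_tokens": 1024,
        "temperature": 0.7,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```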
I've been using a modified AutoGPT that can define a custom openai_base_url (the repo is https://github.com/DGdev91/Auto-GPT). However, when I set the base URL to localhost:8000, which is the same as...
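For reference, a minimal sketch of pointing the openai Python client (0.x API) at a local FastChat OpenAI-compatible server; the port, path, and model name below are assumptions and must match how the server and worker were started:

```python
import openai

openai.api_base = "http://localhost:8000/v1"  # assumed local FastChat endpoint
openai.api_key = "EMPTY"                      # the local server does not validate the key

completion = openai.ChatCompletion.create(
    model="vicuna-13b-v1.1",                  # must match a model name known to the controller
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```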
I am trying to run the API with CPU only, but I am getting an error: `chatgpt-fastchat-model-worker-1 | 2023-05-05 10:11:46 | ERROR | stderr | │ 1145 │ │ return self._apply(convert) │ chatgpt-fastchat-model-worker-1...`
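The `self._apply(convert)` frame is torch moving weights to a device, which fails when no GPU is available. A hedged sketch of loading the same weights CPU-only with plain transformers (this is not FastChat's internal loader, and the model path is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/path/to/merged-vicuna-weights"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,   # fp16 is generally not usable on CPU
    low_cpu_mem_usage=True,
)
model = model.to("cpu")          # avoid any .to("cuda") / .half() calls on a CPU-only host
```

Some FastChat entry points also accept a CPU device flag (e.g. --device cpu); check the README for the version you are running.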
Downloaded the latest source code and ran the command `python -m fastchat.serve.cli --model-path "lmsys/vicuna-13b-delta-v1.1" --load-8bit`; an error occurs: init_kwargs {'torch_dtype': torch.float16} 0it [00:00, ?it/s] ╭─ Traceback (most recent call last) ─╮...
TypeError: forward() got an unexpected keyword argument 'position_ids'. The huggingface/transformers install is the latest main branch from GitHub.
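This kind of signature mismatch usually comes from running an unreleased transformers build whose LlamaModel.forward differs from what FastChat expects. A small, purely illustrative sketch for recording the environment when reporting it (or before trying a released transformers version):

```python
import torch
import transformers
import fastchat

# An unreleased `main` build of transformers can change model forward() signatures;
# include these versions when reporting the error.
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
print("fastchat:", getattr(fastchat, "__version__", "unknown"))
```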
I notice we do plan to support reload mode: https://github.com/lm-sys/FastChat/blob/a94fd259a97128f7f4483ddb760690f467888d84/fastchat/serve/gradio_web_server.py#L533-L535. It seems this is not implemented yet: https://github.com/lm-sys/FastChat/blob/a94fd259a97128f7f4483ddb760690f467888d84/fastchat/serve/gradio_web_server.py#L506-L522. I will add a PR to support this feature.
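A rough sketch of what that reload branch could look like, assuming a get_model_list helper like the one in gradio_web_server.py and a Gradio 3.x Dropdown for model selection (names and signatures are assumptions, not the final PR):

```python
import gradio as gr
import requests

def get_model_list(controller_url: str) -> list:
    # Same idea as the helper in gradio_web_server.py: refresh workers, then list models.
    requests.post(controller_url + "/refresh_all_workers")
    return sorted(requests.post(controller_url + "/list_models").json()["models"])

def load_demo_reload_models(controller_url: str):
    # Hypothetical handler for --model-list-mode=reload: re-query the controller on every
    # page load so models from dead workers disappear from the model dropdown.
    models = get_model_list(controller_url)
    selected = models[0] if models else ""
    return gr.Dropdown.update(choices=models, value=selected, visible=True)
```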