
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

Results: 766 FastChat issues

Currently the speed is extremely slow. Is there a way to provide multi-processing across all available cores (nproc) for mapping, just like the `datasets` library does?
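A minimal sketch of the kind of parallel mapping being asked for, using only the standard library. The `tokenize` function is a hypothetical stand-in for the real per-example work; Hugging Face `datasets` exposes the same idea via `Dataset.map(..., num_proc=N)`:

```python
from multiprocessing import Pool, cpu_count


def tokenize(example):
    # Hypothetical stand-in for the real per-example mapping work.
    return example.upper()


def parallel_map(fn, data, num_proc=None):
    """Map fn over data using multiple processes, similar in spirit
    to datasets.Dataset.map(num_proc=...). Defaults to all cores."""
    with Pool(processes=num_proc or cpu_count()) as pool:
        return pool.map(fn, data)


if __name__ == "__main__":
    print(parallel_map(tokenize, ["a", "b", "c"]))
```

Note that `fn` must be picklable (a top-level function, not a lambda) for `multiprocessing` to distribute it to worker processes.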

When launching the model worker(s) with `python3 -m fastchat.serve.model_worker --model-name 'vicuna-7b-v1.1' --model-path /path/to/vicuna/weights`, it fails with the following error: `2023-05-30 17:19:25 | ERROR | stderr | ConnectionError: HTTPConnectionPool(host='localhost', port=21001): Max retries...
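A `Max retries` connection error to port 21001 usually means the controller isn't running yet: the worker tries to register with it at startup. A sketch of the expected launch order, assuming the default controller port:

```shell
# 1. Start the controller first (it listens on port 21001 by default).
python3 -m fastchat.serve.controller

# 2. In a second terminal, start the model worker; it registers itself
#    with the controller at http://localhost:21001 on startup.
python3 -m fastchat.serve.model_worker \
    --model-name 'vicuna-7b-v1.1' \
    --model-path /path/to/vicuna/weights
```

If the controller is already up, check that nothing (firewall, proxy settings) blocks local connections to port 21001.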

Please, when an error occurs, append the message (ex: NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE. (error_code: 4)) instead of replacing the existing content with it...

enhancement

First, thank you for this great library. It is working well and the basics are working. We are now trying to use it with LangChain and replace the OpenAI API with...

- I am trying to generate with zero-shot prompts on a local GPU setup.
- But with the same parameters as shown in the demo portal (i.e. temp=0.7, top_p=1.0),...
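For context on what `temp` and `top_p` control during generation, here is a minimal, self-contained nucleus-sampling sketch in plain Python. This is an illustration of the standard technique, not FastChat's actual implementation:

```python
import math
import random


def top_p_sample(logits, top_p=1.0, temperature=0.7):
    """Sample a token index from logits using temperature scaling
    followed by nucleus (top-p) filtering."""
    # Temperature-scaled softmax: lower temperature sharpens the
    # distribution, higher temperature flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Keep the smallest set of tokens whose cumulative probability
    # reaches top_p (top_p=1.0 keeps every token).
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # Renormalize over the kept tokens and sample one of them.
    kept_probs = [probs[i] for i in kept]
    r = random.random() * sum(kept_probs)
    acc = 0.0
    for i, p in zip(kept, kept_probs):
        acc += p
        if r <= acc:
            return i
    return kept[-1]
```

With `top_p=1.0` no tokens are filtered, so output variability comes entirely from the temperature; identical settings can still produce different text than the demo portal because of different random seeds, prompts, or model weights.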

According to the discussion on transformers, there's a fix for FastChat: https://github.com/huggingface/transformers/issues/17756#issuecomment-1573319214

```patch
diff --git a/fastchat/model/model_adapter.py b/fastchat/model/model_adapter.py
index facfbee..c1b6d35 100644
--- a/fastchat/model/model_adapter.py
+++ b/fastchat/model/model_adapter.py
@@ -43,7 +43,7 @@ class BaseAdapter:
...
```

lmsys.org states that FastChat-T5 supports a context size of 4K. How do I get it to work? I get an error as soon as I go above 2K.

documentation

It seems that when I have different workers serving different models, I still only see one of them. For example, here I have a worker on port 21002 and one worker...

ERROR: [Errno 99] error while attempting to bind on address ('::1', 21001, 0, 0): cannot assign requested address
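`[Errno 99]` here means the process tried to bind the IPv6 loopback address `::1`, which fails on hosts without IPv6 loopback configured. A small stdlib-only diagnostic sketch; the common workaround is to pass an explicit IPv4 host instead (the exact flag and fix depend on your setup):

```python
import socket


def can_bind(host, port=0):
    """Return True if a TCP socket can be bound to host.
    port=0 asks the OS for any free port."""
    family = socket.AF_INET6 if ":" in host else socket.AF_INET
    try:
        with socket.socket(family, socket.SOCK_STREAM) as s:
            s.bind((host, port))
        return True
    except OSError:
        return False


if __name__ == "__main__":
    # If "::1" is not bindable, point the server at IPv4 loopback,
    # e.g.: python3 -m fastchat.serve.controller --host 127.0.0.1
    print("::1 bindable:", can_bind("::1"))
    print("127.0.0.1 bindable:", can_bind("127.0.0.1"))
```

Binding to `0.0.0.0` (all IPv4 interfaces) is another common option when the service must be reachable from other machines.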

## Why are these changes needed?

This PR adds support for the Baichuan 7B model: [GitHub](https://github.com/baichuan-inc/baichuan-7B), [HuggingFace](https://huggingface.co/baichuan-inc/baichuan-7B)

## Related issue number (if applicable)

## Checks

- [x] I've run `format.sh` to...