FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
We have an API-based LLM with a custom protocol and want to get it ranked among other models on the WebUI leaderboard. Are there any other steps after we...
2023-08-24 11:10:06 | ERROR | stderr | Process model_worker(27196):
2023-08-24 11:10:06 | ERROR | stderr | Traceback (most recent call last):
2023-08-24 11:10:06 | ERROR | stderr |   File "D:\env\miniconda3\envs\langchain-ChatGLM\lib\multiprocessing\process.py",...
## Why are these changes needed? Logprobs support with the OpenAI API was on the to-do list in the docs. It is now supported by both completions and chat completions...
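To illustrate what this adds, here is a sketch of the logprobs request shapes, following the OpenAI API spec that FastChat's `openai_api_server` mirrors. The model name and default endpoint address (`http://localhost:8000/v1`) are assumptions, not taken from the PR.

```python
import json

# Legacy completions: `logprobs` is an integer asking for the
# top-N alternative tokens with their log-probabilities.
completion_request = {
    "model": "vicuna-7b-v1.5",       # assumed model name
    "prompt": "Once upon a time",
    "max_tokens": 16,
    "logprobs": 5,
}

# Chat completions: `logprobs` is a boolean, and `top_logprobs`
# controls how many alternatives are returned per token.
chat_request = {
    "model": "vicuna-7b-v1.5",
    "messages": [{"role": "user", "content": "Hello!"}],
    "logprobs": True,
    "top_logprobs": 5,
}

print(json.dumps(chat_request, indent=2))
```

Either payload would be POSTed to the server's `/v1/completions` or `/v1/chat/completions` route respectively.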
Using the Python `openai` package to call the FastChat API + baichuan2-13B, I received the "TypeError: string indices must be integers" error. The full error message: """ Traceback (most recent...
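This TypeError usually means the client indexed a string as if it were a dict, e.g. when the server returned an error message instead of a JSON object. A minimal sketch of a defensive parse, with illustrative names (`extract_text` is hypothetical, not a FastChat function):

```python
def extract_text(response):
    """Return the first completion's text, or raise a readable error."""
    if isinstance(response, str):
        # An error string came back instead of a parsed JSON object;
        # indexing it like response["choices"] would raise this TypeError.
        raise RuntimeError(f"API returned an error string: {response!r}")
    if "error" in response:
        raise RuntimeError(f"API error: {response['error']}")
    return response["choices"][0]["message"]["content"]

ok = {"choices": [{"message": {"content": "Hi there"}}]}
print(extract_text(ok))  # -> Hi there
```

Checking the response type before indexing turns the opaque TypeError into a message that shows what the server actually sent back.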
## Why are these changes needed? Add support for glm-4 ## Related issue number (if applicable) Closes #3395 ## Checks - [X] I've run `format.sh` to lint the changes in...
model: https://huggingface.co/THUDM/chatglm3-6b-32k. Running FastChat: after adding some debug info, I think this code is not working.
Currently GLM-4-0520 is available on the leaderboard and performs really well. However, Zhipu AI also has other variants available, which are 10x, 100x, and 1000x cheaper. It would be...
https://www.anthropic.com/news/claude-3-5-sonnet
I am looking for a means to use GPTCache in FastChat to speed up LLM processing. Any pointers?