FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
When comparing a reasoning model with a non-reasoning model, disable streaming and post both responses to the user at the same time, after both models are done, to...
Is there any documentation for which system prompts are used by which models in the arena? I'm interested in deepseek-r1 for example - I searched this repo and couldn't find...
Are there plans to support DeepSeek-R1-Distill-Qwen models? FastChat currently has problems loading answers from DeepSeek-R1-Distill-Qwen series models.
That tool doesn't produce many logs, but here's what it gives
openchat_3.5 seems to be using the default conversation template instead of the openchat_3.5-specific template. Log: fastchat-model-worker-1 |INFO 11-09 19:37:00 async_llm_engine.py:371] Received request a0943f1021f24c3e94e312724ec364dd: prompt: "A chat between a curious human...
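This kind of mismatch usually happens when the model path doesn't match any registered template pattern, so the worker silently falls back to a default. As a minimal sketch of that substring-matching behavior (the registry names below are illustrative, not FastChat's actual table):

```python
# Hedged sketch: a conversation template is chosen by matching substrings
# of the model path against registered pattern names; an unmatched path
# (e.g. a renamed openchat_3.5 checkpoint) falls back to the default.
# Pattern/template names here are illustrative assumptions.

DEFAULT_TEMPLATE = "one_shot"  # hypothetical fallback name

# hypothetical registry: substring pattern -> template name
TEMPLATE_PATTERNS = {
    "openchat_3.5": "openchat_3.5",
    "vicuna": "vicuna_v1.1",
    "manticore": "manticore",
}

def match_template(model_path: str) -> str:
    """Return the first template whose pattern appears in the
    lower-cased model path, else the default template."""
    path = model_path.lower()
    for pattern, template in TEMPLATE_PATTERNS.items():
        if pattern in path:
            return template
    return DEFAULT_TEMPLATE

print(match_template("openchat/openchat_3.5"))          # openchat_3.5
print(match_template("my-finetuned-chat-model"))        # one_shot (no match)
```

If a renamed checkpoint falls back to the default this way, passing an explicit `--conv-template` flag to the worker sidesteps the substring matching entirely.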
My startup command is

```
python -m fastchat.serve.vllm_worker --model-path TheBloke/Nous-Capybara-34B-AWQ --trust-remote-code --tensor-parallel-size 2 --quantization awq --max-model-len 8192 --conv-template manticore
```

But I got the following output

```
"A chat between...
```
I'm currently using the OS model [functionary](https://github.com/MeetKai/functionary), which supports `functions` in a manner similar to how GPT operates through the OpenAI API. I've successfully deployed the model worker and proceeded...
`GitLab link/issue/MR:` https://gitlab.com/gitlab-org/gitlab `Type your query:` Where is the code for this endpoint `GET /projects/:id/repository/commits/:sha` ? Gives 