
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

766 FastChat issues

When using model_worker with transformers to run the Gemma 2 9B model, it does not work correctly: with the conversation template applied to Gemma 2, the model continues to generate responses until model_worker...
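A quick way to see which conversation template FastChat matches to a model path, and whether stop tokens are set, is sketched below; the `/mnt/gemma2` path is taken from a later report in this list and the expectation that missing `stop_token_ids` causes unbounded generation is an assumption, not a confirmed diagnosis.

```python
# Sketch: inspect the conversation template FastChat selects for a model path.
from fastchat.model import get_conversation_template

conv = get_conversation_template("/mnt/gemma2")  # hypothetical local path
print(conv.name)            # template name FastChat matched
print(conv.stop_str)        # stop string, if any
print(conv.stop_token_ids)  # empty stop ids may let generation run on
```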

When I tried to use fastchat.serve.cli, the error was: `root@4034937c8c66:/mnt/fastchat/FastChat-main# CUDA_VISIBLE_DEVICES=3 python3 -m fastchat.serve.cli --model /mnt/gemma2` followed by `Loading checkpoint shards: 100% 12/12 [01:08`...

Hi, I have been passing the adapter weights path for the Mistral 7B v0.3 model to the gen_model_answer.py script as the model path. I obtained satisfactory results from it, but I...
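If gen_model_answer.py expects a full model directory rather than a bare adapter, one option is to merge the LoRA weights first. This is a sketch under that assumption; the paths and the base model ID are placeholders.

```python
# Sketch: merge a LoRA adapter into its base model before evaluation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.3")
merged = PeftModel.from_pretrained(base, "/path/to/adapter").merge_and_unload()
merged.save_pretrained("/path/to/merged-model")
AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.3").save_pretrained("/path/to/merged-model")
```

The merged directory can then be passed to gen_model_answer.py as the model path.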

When the Llama 3.1 70B model is loaded in FastChat, the `/token_check` endpoint reports a context length of 1M instead of the expected 128K. ```json { "prompts": [ { "fits":...
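A reproduction against a locally running openai_api_server might look like the sketch below; the exact route (`/api/v1/token_check`), the model name, and the response field names are assumptions based on the snippet above.

```python
# Sketch: query the token-check endpoint and inspect the reported context length.
import requests

payload = {
    "prompts": [
        {"model": "llama-3.1-70b", "prompt": "Hello", "max_tokens": 16}
    ]
}
resp = requests.post("http://localhost:8000/api/v1/token_check", json=payload)
print(resp.json())  # expected to report fits / token count / context length per prompt
```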

This is needed to load Llama 3.1-8B on an RTX 3090; otherwise we run out of memory. ## Why are these changes needed? ## Related issue number (if applicable) ##...

Hi, upon reading the Vicuna blog post, I see it stated that: "Our training recipe builds on top of Stanford's Alpaca with the following improvements. - Multi-turn conversations: We...
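The multi-turn format in question can be inspected directly from FastChat's shipped conversation templates; a small sketch follows, with made-up messages.

```python
# Sketch: build a multi-turn Vicuna prompt from FastChat's conversation template.
from fastchat.conversation import get_conv_template

conv = get_conv_template("vicuna_v1.1")
conv.append_message(conv.roles[0], "What is FastChat?")
conv.append_message(conv.roles[1], "An open platform for serving LLMs.")
conv.append_message(conv.roles[0], "How is Vicuna trained?")
conv.append_message(conv.roles[1], None)  # None marks the slot the model should fill
print(conv.get_prompt())
```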

While using the LangChain integration with FastChat, I tried out OpenAI's function calling API with Vicuna 7B v1.3, but I am getting an AttributeError. How can I get structured...
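As a sanity check outside LangChain, the OpenAI-compatible server can be exercised directly with the openai client; the base URL, API key, and model name below are placeholders, and whether the served model actually emits OpenAI-style function calls is exactly what is in question.

```python
# Sketch: talk to FastChat's OpenAI-compatible server with the openai client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="vicuna-7b-v1.3",
    messages=[{"role": "user", "content": "Return a JSON object with a 'city' field for Paris."}],
)
print(resp.choices[0].message.content)
```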

When I use FastChat to fine-tune Llama 2, everything is OK. But when I want to fine-tune Mistral, it shows "transformer layer not found". I know the main reason is...
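One common source of a "transformer layer not found" style failure is a wrap policy or patch hard-coded to Llama's `LlamaDecoderLayer`. The sketch below assumes the error comes from the HF Trainer's FSDP auto-wrap setting, which may not be the cause here.

```python
# Sketch: point the HF Trainer's FSDP wrap policy at Mistral's decoder layer class.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    fsdp="full_shard auto_wrap",
    fsdp_transformer_layer_cls_to_wrap="MistralDecoderLayer",
)
```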

Are there plans to update openai_api_server.py to be compatible with the latest API, i.e. to accept the tools and tool_choice inputs and return tool_calls in the output?
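For reference, the fields being asked about follow the current OpenAI Chat Completions schema; the sketch below shows that request shape against a hypothetical local endpoint. Whether FastChat's server accepts these fields is precisely the open question, and the base URL and model name are placeholders.

```python
# Sketch of the tools / tool_choice request shape from the current OpenAI API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="vicuna-7b-v1.5",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
        },
    }],
    tool_choice="auto",
)
print(resp.choices[0].message.tool_calls)
```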

"Cannot read properties of undefined (reading 'originalname')