FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
I'd like to ask questions with multiple paragraphs. According to the model itself: ```txt USER: how can I type new line here? ASSISTANT: To create a new line in this text...
Hi, I have a question about this line: https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py#L42 In the case of Vicuna / multi-round conversations, a '' is added at the end of each response. However, I am wondering if...
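For reference, the separator in question can be inspected directly from the template object. A minimal sketch, assuming the main-branch `fastchat.conversation` API (`get_conv_template`) and the `"vicuna_v1.1"` template name:

```python
# Sketch only: shows where the trailing separator is appended after each
# completed assistant response when the multi-round prompt is assembled.
from fastchat.conversation import get_conv_template

conv = get_conv_template("vicuna_v1.1")
conv.append_message(conv.roles[0], "What is the capital of France?")
conv.append_message(conv.roles[1], "Paris.")            # a completed assistant turn
conv.append_message(conv.roles[0], "And of Germany?")
conv.append_message(conv.roles[1], None)                # the turn the model will fill in

print(repr(conv.sep2))     # the string appended after each finished response
print(conv.get_prompt())   # note it appears right after "Paris."
```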
Hi, I got the following error when loading the llama-7B model (converted to Hugging Face format); my server has 256 GB of RAM. Is there any option to reduce the RAM consumption? Thanks....
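If the failure happens while the weights are being loaded, one common mitigation (outside FastChat itself) is to load the checkpoint in half precision with `low_cpu_mem_usage`, which avoids materialising a full fp32 copy of the model in host RAM. A minimal sketch with the Hugging Face transformers API; the model path is a placeholder:

```python
# Sketch only: load a converted LLaMA checkpoint with a smaller host-RAM footprint.
# The path below is a placeholder for your local llama-7b directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/path/to/llama-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,   # keep weights in fp16 instead of fp32
    low_cpu_mem_usage=True,      # stream shards in rather than building a full copy first
)
```

If 8-bit quantization is acceptable, the FastChat serving commands also offer a `--load-8bit` option, which cuts memory use further at some quality cost.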
`python3 -m fastchat.serve.test_message --model-name vicuna-7b` Models: ['vicuna-7b'] worker_addr: http://localhost:21002 A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the...
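When `test_message` misbehaves, a frequent cause is that the controller and model worker are not both running, or that the worker registered under a different name than the one passed to `--model-name`. One way to check is sketched below; the `/list_models` path, the default port 21001, and the `"models"` response key are assumptions about the controller's API.

```python
# Sketch only: ask the controller which model names are currently registered,
# so --model-name can be matched against them. Endpoint path, default port,
# and response key are assumptions.
import requests

controller_addr = "http://localhost:21001"
resp = requests.post(controller_addr + "/list_models")
print(resp.json().get("models", []))   # e.g. ['vicuna-7b']
```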
It appears that the API can't be used in a Jupyter notebook: RuntimeError Traceback (most recent call last) Cell In[5], line 6 2 from fastchat import client 4 client.set_baseurl('http://127.0.0.1:8000/') When...
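If the RuntimeError is the usual "asyncio.run() cannot be called from a running event loop" problem (Jupyter already runs an event loop), a common workaround is the third-party `nest_asyncio` package, which lets loops nest. A minimal sketch; whether this matches the traceback above is an assumption:

```python
# Sketch only: allow nested event loops inside Jupyter before using the client.
# Requires: pip install nest_asyncio
import nest_asyncio
nest_asyncio.apply()

from fastchat import client
client.set_baseurl("http://127.0.0.1:8000/")
# ...continue with the same client calls that previously raised RuntimeError
```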
As the title says: when I try to run on GPU, I get this error.
This PR adds support for a subset of OpenAI API features, including completion, create embeddings, and chat completion. With these changes, users will be able to leverage the local LLM...
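Once such a server is running, the idea is that the stock `openai` Python client can be pointed at it. A minimal usage sketch, assuming the 0.x `openai` package interface, a server listening on port 8000, and routes under `/v1`; all of these should be checked against the PR itself:

```python
# Sketch only: talk to a local OpenAI-compatible FastChat server with the
# official openai client (0.x interface). Base URL and /v1 prefix are assumptions.
import openai

openai.api_key = "EMPTY"                       # the local server is not expected to check keys
openai.api_base = "http://127.0.0.1:8000/v1"

completion = openai.ChatCompletion.create(
    model="vicuna-7b",
    messages=[{"role": "user", "content": "Give me a one-sentence summary of FastChat."}],
)
print(completion.choices[0].message.content)
```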
Is this trained on data up to 2021, like ChatGPT, or on newer data? I want to use it to write code for the latest release of pymc.
Hi, may I ask how long it takes to train Vicuna-7B and Vicuna-13B? In addition, what is the price of the GPUs you are using? Thank you!
Hello, several people have reported this issue with the v1.1 model; it wasn't a bug in v1.0. Do you plan to fix it in the future?