
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

766 FastChat issues

Has anyone evaluated the performance on programming tasks? And how can one fine-tune for a programming language, such as Java, C, or others?

question

Based on the Bloomberg GPT paper (https://arxiv.org/pdf/2303.17564v1.pdf), which mentions that sequence lengths of more than 2048 can be used during inference via ALiBi (https://paperswithcode.com/method/alibi): can the LLaMA model be...

question
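For background on the ALiBi method the question links to: instead of positional embeddings, ALiBi adds a per-head linear bias to attention scores proportional to the key-query distance, which is why it can extrapolate beyond the training length. A minimal plain-Python sketch of the bias computation (illustrative only, not FastChat or LLaMA code; the causal mask is folded in as `-inf`):

```python
def alibi_bias(seq_len, num_heads):
    """bias[h][i][j] = -slope_h * (i - j) for j <= i (causal positions),
    -inf otherwise. Slopes form a geometric sequence as in the ALiBi paper."""
    start = 2 ** (-8.0 / num_heads)          # e.g. 1/2 for 8 heads
    slopes = [start ** (h + 1) for h in range(num_heads)]
    return [[[-slopes[h] * (i - j) if j <= i else float("-inf")
              for j in range(seq_len)]
             for i in range(seq_len)]
            for h in range(num_heads)]
```

Because the bias depends only on relative distance, nothing ties it to a fixed maximum sequence length at training time.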

hugging-gpt is nice, but what about hugging-vicuna?

enhancement

Can the lm-sys developers shed light on why the training was done with evaluation turned off (`--evaluation_strategy "no"`)? The reason I ask is that the cost to train on 4-8x A100 is...

question
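For reference, the flag in question is a standard Hugging Face `TrainingArguments` option, so re-enabling periodic evaluation is a matter of changing a few flags. A hypothetical sketch (paths and values are illustrative, not the repo's documented command verbatim):

```shell
# Hypothetical sketch: re-enable periodic evaluation via standard
# Hugging Face TrainingArguments flags (paths/values are placeholders).
torchrun --nproc_per_node=4 fastchat/train/train.py \
    --model_name_or_path /path/to/llama \
    --data_path /path/to/data.json \
    --evaluation_strategy "steps" \
    --eval_steps 200 \
    --per_device_eval_batch_size 4
```

Setting `--evaluation_strategy "no"` skips all eval passes, which saves GPU hours during an expensive multi-A100 run; the trade-off is that you cannot watch for overfitting during training.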

Thanks a lot for the great contribution! Can the Gradio serving expose more parameters, such as `top_p`? How should the code be modified to do that? Thanks again!

good first issue
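As background on what `top_p` controls: nucleus sampling keeps only the smallest set of highest-probability tokens whose cumulative probability reaches `top_p`, then renormalizes. A minimal pure-Python sketch of the general idea (not FastChat's exact implementation):

```python
def top_p_filter(probs, top_p):
    """Nucleus (top-p) filtering: keep the smallest set of tokens whose
    cumulative probability reaches top_p; zero out and renormalize the rest."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = set(), 0.0
    for i in order:
        kept.add(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0 for i in range(len(probs))]
```

Exposing it in a Gradio UI then amounts to adding a slider input for `top_p` and threading the value through to wherever the sampling distribution is built.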

So far, all my attempts with different models, sizes, and datasets have led to the same issue: the evaluation loss keeps increasing. See my log: ![image](https://user-images.githubusercontent.com/738834/231302840-4d58ff2e-1022-440e-9484-6ae39e708897.png)

question

Is there any possibility of integrating Vicuna with LangChain? I believe that integrating these two powerful tools would greatly enhance their capabilities and provide users with...

enhancement
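One common integration path is to serve the model behind an OpenAI-compatible HTTP endpoint, which LangChain (or any OpenAI-style client) can then point at. A hypothetical sketch of building such a request body — the URL and model name here are assumptions, not confirmed FastChat defaults:

```python
import json

# Hypothetical local endpoint; adjust host/port to your deployment.
BASE_URL = "http://localhost:8000/v1/chat/completions"

def build_request(prompt, model="vicuna-13b", temperature=0.7):
    """Build the JSON body for an OpenAI-style chat completion call.
    Any client that speaks this schema (LangChain included) can use it."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

body = json.dumps(build_request("Summarize LangChain in one sentence."))
```

The appeal of this route is that no Vicuna-specific client code is needed on the LangChain side: the local server just has to accept the standard chat-completions schema.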

When entering structured text (such as JSON), I sometimes receive the following error message: "Too many requests in 1 hour. Try again later." Why does this situation...

failed-prompt

RuntimeError: probability tensor contains either `inf`, `nan` or element < 0. Whatever I input, it raises this RuntimeError. Human: what can you do? Assistant: │ 101 │ `token = int(torch.argmax(last_token_logits))`...

bug
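That error typically means the sampling distribution handed to `torch.multinomial` contains `inf`/`nan` entries or negative mass (often from fp16 overflow or an extreme temperature). A defensive sketch of the kind of guard one could add before sampling — illustrative plain Python, not the repository's actual fix:

```python
import math

def probs_are_valid(probs):
    """True if every probability is finite and non-negative --
    the condition a multinomial sampler needs to hold before sampling."""
    return all(math.isfinite(p) and p >= 0.0 for p in probs)

def greedy_fallback(probs):
    """Pick the index of the highest finite probability (argmax),
    ignoring inf/nan entries, so generation can continue instead of
    raising a RuntimeError."""
    finite = [p if math.isfinite(p) else float("-inf") for p in probs]
    return max(range(len(finite)), key=finite.__getitem__)
```

In a real serving loop, `probs_are_valid` would gate the sampling call, with `greedy_fallback` mirroring the `torch.argmax(last_token_logits)` line visible in the traceback.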

Is there any plan to increase the input length limit for the model? Right now, I believe there is a limit of 2048 tokens. Are you planning to increase...
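Until the window itself grows, the usual workaround is to truncate the conversation so that the prompt plus the expected reply fits within the 2048-token budget. A minimal sketch (illustrative only; FastChat's own truncation logic may differ, and the `reserve` value is an assumption):

```python
def fit_context(token_ids, max_len=2048, reserve=256):
    """Keep only the most recent tokens so that prompt + generation fit
    in the model's context window. `reserve` leaves room for the reply."""
    budget = max_len - reserve
    return token_ids[-budget:] if len(token_ids) > budget else token_ids
```

Dropping the oldest turns first preserves the most recent context, which is usually what the next reply depends on.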