FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
### What The knowledge cutoff format is "yyyy/m" or "yyyy/mm" rather than always "yyyy/mm", so ordering the models by this column in descending order shows "2023/9" before "2023/12" instead...
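A minimal sketch of one way the sort order could be fixed, assuming the leaderboard keeps the cutoff as a raw string; `cutoff_sort_key` is a hypothetical helper, not code from the repo:

```python
def cutoff_sort_key(cutoff: str):
    """Parse a "yyyy/m" or "yyyy/mm" knowledge cutoff into a (year, month)
    tuple so that "2023/12" sorts after "2023/9"."""
    if not cutoff or "/" not in cutoff:
        # Unknown cutoffs sort last when ordering in descending order.
        return (0, 0)
    year, month = cutoff.split("/", 1)
    return (int(year), int(month))

# Descending sort now yields 2023/12 before 2023/9.
cutoffs = ["2023/9", "2023/12", "2022/8"]
print(sorted(cutoffs, key=cutoff_sort_key, reverse=True))
# ['2023/12', '2023/9', '2022/8']
```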
I want to add vision chat battle + direct vision chat support. GPT-4 Vision and Gemini Vision are multimodal models; other multimodal models can be added along with them.
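For context on what a direct vision chat request looks like, here is a rough sketch using the OpenAI-style multimodal message format; the model name and image URL are placeholders, and FastChat's internal message representation may differ:

```python
from openai import OpenAI  # assumes the openai Python package is installed

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder vision model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```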
## Why are these changes needed? * I go slightly insane looking at the slightly different link colors * link underlines are very busy * wider window - easier to...
## Why are these changes needed? 1. Add CSAM and NSFW moderation filter. Check the README for how to run. Notably, the NSFW endpoint should be the full endpoint now,...
## Why are these changes needed? Certain advanced models, such as GPT/Yi, have the capability to generate LaTeX formulas enclosed within `\[...\]`, but this format is not supported yet. This...
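As an illustration of the kind of preprocessing such a change involves (not the PR's actual implementation), a hedged sketch that rewrites `\[...\]` and `\(...\)` delimiters into the `$$...$$` / `$...$` forms common Markdown-with-KaTeX renderers accept; `convert_latex_delimiters` is a hypothetical helper name:

```python
import re

def convert_latex_delimiters(text: str) -> str:
    """Rewrite \\[...\\] display math and \\(...\\) inline math into
    $$...$$ and $...$ so downstream Markdown rendering picks them up."""
    text = re.sub(r"\\\[(.*?)\\\]", r"$$\1$$", text, flags=re.DOTALL)
    text = re.sub(r"\\\((.*?)\\\)", r"$\1$", text, flags=re.DOTALL)
    return text

print(convert_latex_delimiters(r"The identity \[ e^{i\pi} + 1 = 0 \] is famous."))
# The identity $$ e^{i\pi} + 1 = 0 $$ is famous.
```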
## Why are these changes needed? Reorganize utils into individual modules. Cleanup tech debt. ## Related issue number (if applicable) ## Checks - [x] I've run `format.sh` to lint the...
When attempting to execute the `FastChat\scripts\train_vicuna_7b.sh` script, it raises an exception with the following error message:
```
  File "/usr/local/lib/python3.10/dist-packages/transformer_engine/pytorch/transformer.py", line 16, in
    from flash_attn.flash_attn_interface import flash_attn_unpadded_func
ImportError: cannot import name...
```
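For reference, flash-attn 2.x renamed `flash_attn_unpadded_func` to `flash_attn_varlen_func`, which is a common cause of this import error; since the failing import is inside `transformer_engine`, the practical remedy is usually installing a flash-attn 1.x build or a `transformer_engine` release that supports flash-attn 2. The snippet below only illustrates the rename and is an assumption about the installed versions, not FastChat's official fix:

```python
# Illustration of the flash-attn 1.x -> 2.x rename, assuming flash-attn is installed.
try:
    # flash-attn 1.x name (what transformer_engine expects here)
    from flash_attn.flash_attn_interface import flash_attn_unpadded_func
except ImportError:
    # flash-attn 2.x name
    from flash_attn.flash_attn_interface import (
        flash_attn_varlen_func as flash_attn_unpadded_func,
    )
```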
## Why are these changes needed? ## Related issue number (if applicable) ## Checks - [x] I've run `format.sh` to lint the changes in this PR. - [x] I've included...
## Why are these changes needed? Description: [ipex-llm](https://github.com/intel-analytics/ipex-llm) is a library for running LLM on Intel CPU/XPU (from Laptop to GPU to Cloud) using INT4/FP4/INT8/FP8 with very low latency (for...
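As context for what the integration enables, a minimal usage sketch under the assumption that `ipex_llm` is installed and exposes a Hugging Face-compatible `AutoModelForCausalLM`; the model path and keyword arguments are illustrative and may differ across ipex-llm releases:

```python
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "lmsys/vicuna-7b-v1.5"  # placeholder model

# Load the model with weights quantized to INT4 at load time.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```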
3000-character paper