FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
For example, how do I generate training samples step by step from the original file? In addition, what is the content of sharegpt.json, as in the following: python3 -m fastchat.data.sample --in sharegpt.json --out...
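As a hedged sketch of what the question is asking about: the ShareGPT-style file is conventionally a JSON list of conversation records, and `fastchat.data.sample` draws a subset of them. The record fields and filenames below are illustrative assumptions, not the exact schema used by the repo:

```python
import json
import random

# Hypothetical ShareGPT-style records: each entry has an id and a list of
# conversation turns, where "from" names the speaker and "value" is the text.
records = [
    {
        "id": f"conv-{i}",
        "conversations": [
            {"from": "human", "value": f"Question {i}?"},
            {"from": "gpt", "value": f"Answer {i}."},
        ],
    }
    for i in range(100)
]

random.seed(0)
sampled = random.sample(records, k=10)  # draw 10 conversations at random

# Analogous to the --out file of fastchat.data.sample.
with open("sharegpt_sampled.json", "w") as f:
    json.dump(sampled, f, indent=2)
```

The resulting file keeps the same per-record structure as the input, just with fewer conversations, which is what a downstream training script would then consume.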
What is the reason for this bug?
… Still working on the "generate_stream" portion. Removed extraneous buttons that could best be used elsewhere. Added scripts to make everything easier.
Hi, I have added the following notebooks to demonstrate the functionality of the FastChat package: - [tutorial.ipynb](tutorial.ipynb): an end-to-end tutorial on how to use FastChat to interact with a chatbot. -...
We provide a ChatGPT-compatible RESTful server API for chat completion. We will add the client API in a separate PR. Then, in theory, we can integrate FastChat with all community tools...
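To illustrate what "ChatGPT-compatible" means in practice, here is a hedged sketch of a client request against such a server. The URL, port, and model name are assumptions for illustration; the payload shape follows the OpenAI chat-completion convention:

```python
import json
import urllib.request

# Hypothetical local endpoint; an OpenAI-compatible server conventionally
# exposes a path like /v1/chat/completions.
URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "vicuna-13b",  # assumed model name served by the endpoint
    "messages": [
        {"role": "user", "content": "Hello, who are you?"},
    ],
    "temperature": 0.7,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the server is actually running:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Because the request/response schema matches the OpenAI convention, existing community tools that speak that API should be able to point at this server with only a base-URL change.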
When I start vicuna-13B today, after it prints the answer, it starts answering itself each time. Is that normal? Or is this a new feature? The output with three '#' in...
Can anyone provide me with steps on how to do that? I am not very familiar with ML/AI documentation, so please give some context and...
How can I convert the output ('--target') of 'python3 -m fastchat.model.apply_delta' to llama.cpp format?
Issue: "Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0"
My GPU setup is 8× Tesla V100 32 GB; the software environment is Python 3.10 + CUDA 11.6 + torch 2.0.0 + transformers 4.28.0.dev0. I run the fine-tuning code: torchrun --nproc_per_node=8 --master_port=20001 fastchat/train/train_mem.py \ --model_name_or_path /llama-13b \ --data_path alpaca-data-conversation.json \ --bf16 True...
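The two reports above point at the same root cause: bfloat16 requires an Ampere-generation GPU (compute capability 8.0 or newer), while the V100 is Volta (compute capability 7.0), so `--bf16 True` cannot work there. A minimal sketch of the flag selection, kept as pure Python so it runs without a GPU; in practice one would query `torch.cuda.is_bf16_supported()` instead of passing the capability in:

```python
# Sketch: decide which mixed-precision flag to pass to the training script.
# Volta GPUs such as the V100 lack bfloat16 support; Ampere (compute
# capability >= 8.0) has it. The flag strings mirror the torchrun command
# shown in the issue and are assumptions about the script's arguments.

def pick_precision_flag(compute_capability: tuple) -> str:
    """Return a mixed-precision flag suited to this GPU generation."""
    major, _minor = compute_capability
    if major >= 8:            # Ampere or newer: bf16 is available
        return "--bf16 True"
    return "--fp16 True"      # e.g. V100 is (7, 0): fall back to fp16

print(pick_precision_flag((7, 0)))  # --fp16 True
print(pick_precision_flag((8, 0)))  # --bf16 True
```

So for the 8× V100 setup, replacing `--bf16 True` with `--fp16 True` is the usual workaround, since no software upgrade can add bfloat16 hardware support to Volta.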