Zhihong Chen

31 comments by Zhihong Chen

Hi, I think ROCO is too small to perform pre-training on; try combining it with MedICaT. As for the pre-trained models, you can find them in the README, where I have already open-sourced them.

Hi @Tong010618, Thanks for your attention. You need to specify the path to the weights. E.g., for Phoenix, run:

```
python -m llmzoo.deploy.cli --model-name FreedomIntelligence/phoenix-inst-chat-7b
```

Best, Zhihong

Hi @ananwjq, Thanks for reporting the error! We have fixed this error. Please pull the latest version. :-) Best, Zhihong

Hi @REIGN12 , Thanks for your attention! We will release the pre-training data within one week, including both instruction and conversation data. Please stay tuned. :-) Best, Zhihong

Hi @REIGN12, We have released the data for training `Phoenix` and `Chimera`. Thanks for your attention! Best, Zhihong

Hi @BlinkDL , First, thanks for your attention. Also, thanks for your contribution to the open-source community! We have been following the RWKV project for a long time. Sure! We...

Hi @timiil and @anden007, Thanks for your attention. Now, you can deploy a web application. See [here](https://github.com/FreedomIntelligence/LLMZoo#-deployment). Best, Zhihong

Hi @ninisy, Thanks for your attention! We have updated the experimental settings in the updated [technical report](https://github.com/FreedomIntelligence/LLMZoo/blob/main/assets/llmzoo.pdf). We'll release the training code soon. Best, Zhihong

Hi @ninisy, Thanks for your attention! We have uploaded the training code (see [here](https://github.com/FreedomIntelligence/LLMZoo#-training-by-yourself)). Now you can train `Phoenix`. :-) Best, Zhihong

Hi @dream-desktop, Thanks for your attention. Currently, 6 GB is not sufficient for running `Phoenix`. We will work on a quantized version. Alternatively, you can try `Phoenix` using the CPU: ``` python...