
About ChatGPT's three training steps

Open leizhu1989 opened this issue 2 years ago • 19 comments

📚 The doc issue

Hello authors! I'm not sure how the examples correspond to the training steps; maybe my understanding is wrong. In /applications/ChatGPT/examples/, as far as I can tell, 'Train with dummy prompt data' is the first step of ChatGPT and 'Train the reward model' is the second step, but I don't know where the third step (RLHF using the pretrained language model together with the reward model) is, and what is the 'Train with real prompt data' step about?

leizhu1989 avatar Feb 17 '23 04:02 leizhu1989

Same question here. I'd also like to know how to implement ChatGPT's three-step training with ColossalAI.

zhouzhou12 avatar Feb 21 '23 07:02 zhouzhou12

I think 'Train with dummy prompt data' is the 3rd step of ChatGPT.

cloudfool avatar Feb 21 '23 14:02 cloudfool

I have the same question; any guidance would be appreciated.

Muzzypepper avatar Feb 22 '23 02:02 Muzzypepper

I think train_prompts.py is the first step, training the SFT model; train_reward_models.py is the second step, training the RM; and train_dummy.py uses PPO training, where initial_model uses the model from the first step and critic_model uses the model from the second step, so that is the third step, RLHF. As for train_prompts.py, it also uses PPOTrainer; its initial_model and critic_model can use the original pretrained model. I'm not sure whether this is right.

Muzzypepper avatar Feb 22 '23 03:02 Muzzypepper

I think train_prompts.py is the first step, training the SFT model; train_reward_models.py is the second step, training the RM; and train_dummy.py uses PPO training, where initial_model uses the model from the first step and critic_model uses the model from the second step, so that is the third step, RLHF. As for train_prompts.py, it also uses PPOTrainer; its initial_model and critic_model can use the original pretrained model. I'm not sure whether this is right.

Thank you for your reply.

leizhu1989 avatar Feb 22 '23 03:02 leizhu1989
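For context on the second step discussed above, a reward model is typically trained with a pairwise ranking loss over (chosen, rejected) response pairs. Here is a minimal sketch of that loss; this is my own illustration, not code from this repository:

```python
import torch
import torch.nn.functional as F

# Sketch of the pairwise ranking loss commonly used for reward models
# (step 2): push the score of the chosen response above the rejected one.
def pairwise_rm_loss(chosen_reward: torch.Tensor,
                     rejected_reward: torch.Tensor) -> torch.Tensor:
    # loss = -log(sigmoid(r_chosen - r_rejected)), averaged over the batch
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy scores: the chosen responses already outscore the rejected ones
loss = pairwise_rm_loss(torch.tensor([2.0, 1.5]), torch.tensor([0.5, 1.0]))
```

The loss shrinks as the gap between chosen and rejected scores grows, which is exactly the preference the RM is supposed to learn.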

I think train_prompts.py is the last step. As for the first step, it doesn't seem to be provided in the code; the model is simply introduced as a pretrained model in a later step. We can train it ourselves in a fine-tuning way.

yaoing avatar Feb 22 '23 06:02 yaoing

I think train_prompts.py is the last step. As for the first step, it doesn't seem to be provided in the code; the model is simply introduced as a pretrained model in a later step. We can train it ourselves in a fine-tuning way.

Looking at the paper, the first and second steps use prompt data, and the last step does not seem to require prompt data; I'm not sure either. Also, do you know how to use the trained model for inference or deployment?

Muzzypepper avatar Feb 22 '23 06:02 Muzzypepper

train_dummy.py is copied from train_prompts.py, with only one line of code added to generate dummy data.

As we can see from the figure in the paper, the third step uses the prompt data and the GPT-3 model to generate some results, then uses reinforcement learning to learn how to choose better responses. So I think the third step actually does prompt training as well.

As for model training, I am also exploring it; there is a lack of data at the moment.

yaoing avatar Feb 22 '23 06:02 yaoing
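Since "dummy prompt data" keeps coming up: it can be as simple as batches of random token ids, which lets the PPO stage run end-to-end without a real prompt dataset. A toy sketch of my own (the vocab_size and seq_len values are illustrative, not this repo's settings):

```python
import torch

# Generate random token ids as stand-in prompts for smoke-testing the
# PPO (step 3) training loop without a real dataset.
def make_dummy_prompts(batch_size: int = 8, seq_len: int = 16,
                       vocab_size: int = 50257) -> torch.Tensor:
    return torch.randint(0, vocab_size, (batch_size, seq_len))

prompts = make_dummy_prompts()  # shape (8, 16), dtype int64
```

Swapping these random ids for tokenized real prompts is what turns the dummy run into the actual step-3 training.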

Looking at the paper, the first and second steps use prompt data, and the last step does not seem to require prompt data; I'm not sure either. Also, do you know how to use the trained model for inference or deployment?

I think inference works like GPT-2: it predicts tokens one by one. Load the final trained model and you can run inference the same way as GPT-2.

leizhu1989 avatar Feb 22 '23 07:02 leizhu1989
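To illustrate the token-by-token inference described above, here is a minimal greedy decoding loop. This is a generic sketch with a toy stand-in model, not ColossalAI's inference code; any model returning (batch, seq_len, vocab) logits would slot in:

```python
import torch

# Greedy autoregressive decoding: feed the sequence, take the argmax of the
# last position's logits, append it, and repeat.
@torch.no_grad()
def greedy_generate(model, input_ids: torch.Tensor,
                    max_new_tokens: int) -> torch.Tensor:
    for _ in range(max_new_tokens):
        logits = model(input_ids)                          # (B, T, V)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_id], dim=1)
    return input_ids

# Toy "model": always assigns probability mass to (token + 1) mod vocab.
class ToyLM(torch.nn.Module):
    def __init__(self, vocab: int = 10):
        super().__init__()
        self.vocab = vocab

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        batch, seq = ids.shape
        logits = torch.zeros(batch, seq, self.vocab)
        logits.scatter_(2, ((ids + 1) % self.vocab).unsqueeze(-1), 1.0)
        return logits

out = greedy_generate(ToyLM(), torch.tensor([[0, 1]]), max_new_tokens=3)
# out: [[0, 1, 2, 3, 4]]
```

A real GPT-2-style model would replace ToyLM; the decoding loop itself stays the same.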

As for model training, I am also exploring it; there is a lack of data at the moment.

OK, my QQ: 805650606

leizhu1989 avatar Feb 22 '23 07:02 leizhu1989

Thanks for your reply!

Muzzypepper avatar Feb 22 '23 08:02 Muzzypepper

I think train_prompts.py is the last step. As for the first step, it doesn't seem to be provided in the code; the model is simply introduced as a pretrained model in a later step. We can train it ourselves in a fine-tuning way.

Since we now need to do the fine-tuning (1st) step ourselves, do you know of any fine-tuning code that could be integrated into this project?

cloudfool avatar Feb 22 '23 09:02 cloudfool

I think train_prompts.py is the last step. As for the first step, it doesn't seem to be provided in the code; the model is simply introduced as a pretrained model in a later step. We can train it ourselves in a fine-tuning way.

Since we now need to do the fine-tuning (1st) step ourselves, do you know of any fine-tuning code that could be integrated into this project?

Training with the Transformers framework is relatively simple; there are plenty of tutorials on the web for fine-tuning, or you can refer to the official documentation.

yaoing avatar Feb 22 '23 11:02 yaoing
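As a framework-agnostic illustration of that fine-tuning (SFT) step, here is a toy sketch: plain next-token cross-entropy on a batch of token ids. The tiny embedding+linear "model" and all sizes are stand-ins for a real pretrained GPT-style model, not anything from this repo:

```python
import torch

torch.manual_seed(0)
vocab, dim = 100, 32  # illustrative sizes

# Toy stand-in for a pretrained causal LM: token embedding + output head.
model = torch.nn.Sequential(torch.nn.Embedding(vocab, dim),
                            torch.nn.Linear(dim, vocab))
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

batch = torch.randint(0, vocab, (4, 12))   # toy (prompt+response) token ids
losses = []
for _ in range(30):                        # a few SFT steps on one batch
    logits = model(batch[:, :-1])          # predict each next token
    loss = loss_fn(logits.reshape(-1, vocab), batch[:, 1:].reshape(-1))
    losses.append(loss.item())
    opt.zero_grad()
    loss.backward()
    opt.step()
```

With a real model you would instead load pretrained weights (e.g. via Transformers) and iterate over a dataset of demonstrations, but the objective is the same next-token cross-entropy shown here.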

Thank you for your feedback, and sorry about the late reply. In /applications/ChatGPT/examples/ we have 3 examples: train_dummy -> shows the vanilla way to start training step 3; train_prompts -> uses prompts to train in training step 3; train_reward_model -> trains the RM in training step 2. Because training step 1 is a simple supervised fine-tuning process, as for many other models, we don't implement it here.

ht-zhou avatar Feb 24 '23 03:02 ht-zhou

Thank you for your feedback, and sorry about the late reply. In /applications/ChatGPT/examples/ we have 3 examples: train_dummy -> shows the vanilla way to start training step 3; train_prompts -> uses prompts to train in training step 3; train_reward_model -> trains the RM in training step 2. Because training step 1 is a simple supervised fine-tuning process, as for many other models, we don't implement it here.

Thanks! Could you please add vanilla inference code for ChatGPT?

cloudfool avatar Feb 24 '23 05:02 cloudfool

Could you show this simple SFT code?

wqw547243068 avatar Mar 05 '23 15:03 wqw547243068

I have the same problem too. Could you show this simple SFT code?

graciechen avatar Mar 07 '23 02:03 graciechen

Hi @graciechen @wqw547243068 @cloudfool We have updated a lot. Please check the latest code and docs: https://github.com/hpcaitech/ColossalAI/tree/main/applications/Chat/examples This issue was closed due to inactivity. Thanks.

binmakeswell avatar Apr 18 '23 11:04 binmakeswell