Online-RLHF

Questions About Chosen Data Selection Strategies

Open nantenT opened this issue 1 year ago • 2 comments

Hi, amazing work, and thank you for making it open source!

  1. After reviewing your code, I noticed multiple preference strategies are included when selecting DPO preference pairs. Have you compared these strategies, and if so, which one tends to perform better?

  2. When incorporating the chosen preference data into the original model via SFT: if the distribution of the original model's outputs is completely inconsistent with the chosen data and of lower quality, would you recommend training on preference pairs built from the OOD chosen data plus generated responses, or only on preference pairs generated by the original model itself?

Thanks in advance for your insights!

nantenT avatar Dec 09 '24 08:12 nantenT

Hi, thanks for your interest in our work.

For 1, we have found that the max-min pair strategy performs best in our experiments.
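
For anyone reading along, a minimal sketch of what the max-min strategy amounts to (the function and variable names here are illustrative, not the repo's actual API): among the n responses sampled for a prompt, the highest-reward response becomes the chosen one and the lowest-reward response the rejected one.

```python
# Illustrative max-min pair selection (assumed names, not the repo's code):
# take the highest- and lowest-reward responses as chosen/rejected.
def max_min_pair(responses, rewards):
    best = max(range(len(rewards)), key=lambda i: rewards[i])
    worst = min(range(len(rewards)), key=lambda i: rewards[i])
    return {"chosen": responses[best], "rejected": responses[worst]}

# Example: 4 candidates for one prompt with reward-model scores.
print(max_min_pair(["a", "b", "c", "d"], [0.12, 0.87, -0.33, 0.45]))
# {'chosen': 'b', 'rejected': 'c'}
```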

For 2, we'd suggest conducting SFT first and then performing online DPO, so that your policy model is good enough to generate reasonable samples. If the policy model is not good enough, sample efficiency will be very low (you would need best-of-n sampling with a large n just to obtain one good example).
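
As a rough back-of-the-envelope illustration of that sample-efficiency point (my own numbers, not from the repo or paper): if the policy produces an acceptable response with probability p per sample, best-of-n needs about log(1 - target) / log(1 - p) samples to see at least one acceptable response with probability `target`.

```python
import math

# Rough sample-efficiency illustration (assumed probabilities, not measured):
# samples needed so that best-of-n contains at least one "good" response
# with probability `target`, if each sample is good with probability p_good.
def samples_needed(p_good, target=0.95):
    return math.ceil(math.log(1 - target) / math.log(1 - p_good))

print(samples_needed(0.30))  # reasonably strong SFT'd policy -> 9
print(samples_needed(0.01))  # weak, misaligned policy -> 299
```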

hendrydong avatar Dec 09 '24 09:12 hendrydong

Thank you so much for taking the time to respond! I truly appreciate your insights.

Regarding point 2, I wanted to seek further clarification: does the process involve performing SFT first, followed by DPO? Specifically, is the SFT step meant to align the distribution by fine-tuning on the chosen outputs from the DPO preference pairs? If so, does this imply that the chosen outputs in the DPO pairs need to be of particularly high quality?

Or is it sufficient to use open-source instruction-tuning datasets to bring the model to a usable level, without worrying about the differences between the SFT data and the DPO pairs? In that case, would the primary criterion for deciding whether a DPO pair is usable simply be the RM's scores for the chosen and rejected samples?

Thank you again for your patience and for sharing your expertise!

nantenT avatar Dec 10 '24 09:12 nantenT