
DPO loss on different datasets

wj210 opened this issue on Feb 1, 2024 · 0 comments

In parallel with #38, though I am referring to full training instead of LoRA.

When I use a different set of preference pairs (i.e. chosen and rejected) but the same instructions (UltraFeedback), I get an extremely low eval/train loss that drops sharply at the beginning, in contrast to training on the original preference pairs as in ultrafeedback_binarised.
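For reference, this is the loss I have in mind (a minimal sketch of the standard DPO sigmoid loss computed from summed per-sequence log-probabilities, with beta=0.1 as an assumed default; not this repo's exact code):

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO sigmoid loss from per-sequence log-probs."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # loss = -log sigmoid(beta * (chosen log-ratio - rejected log-ratio))
    loss = -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
    # the logged rewards/margins are the beta-scaled log-ratios, so a
    # collapsing loss and a growing reward margin are the same signal
    chosen_rewards = beta * chosen_logratio
    rejected_rewards = beta * rejected_logratio
    margin = (chosen_rewards - rejected_rewards).mean()
    return loss, margin
```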

Eval loss (my pref dataset): [image]
Eval loss (original pref dataset): [image]
Train loss (mine): [image]
Train loss (original): [image]
Reward margin (mine): [image]
Reward margin (original): [image]

This huge difference in scale seems to occur when I use preference datasets whose responses are sampled from the reference policy, unlike UltraFeedback, where the responses are sampled from various policies.
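To give a sense of how the loss scale and the reward margin relate here (a back-of-the-envelope check, assuming the margin plotted above is the usual beta-scaled implicit reward margin):

```python
import torch
import torch.nn.functional as F

# loss = -log sigmoid(beta * logits) and margin = beta * logits,
# so the per-example loss can be read directly off the reward margin
for margin in [0.0, 1.0, 5.0, 20.0]:
    loss = -F.logsigmoid(torch.tensor(margin)).item()
    print(f"reward margin {margin:>4}: loss {loss:.4g}")
# margin 0 -> loss ~0.693 (log 2), margin 5 -> ~0.0067, margin 20 -> ~2e-9
```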

Moreover, this huge decrease in loss actually causes the DPO-ed model to perform worse across various benchmarks. Is there any intuition regarding this?
