self-instruct
[QUESTION] Unexpected results by GPT-SelfInstruct+SuperNI
As I understand Figure 5 in your paper, you further fine-tuned GPT-SelfInstruct on the SuperNaturalInstructions data, and surprisingly the results got worse compared to the "vanilla" GPT-SelfInstruct.
Is my understanding correct? If so, do you have any hypotheses as to why a high-quality, human-annotated dataset used as additional fine-tuning data worsened overall performance?