RAIN
RAIN copied to clipboard
test on HH-RLHF
I see the code and find that in the HH-RLHF dataset you use the red-team data for test. I want to know how the test scores are calculated? I didnt find ground-truth in the red-team dataset. How are the scores for harmless and helpful calculated in the paper?
We use GPT-4's evaluation as the ground-truth. We also show that GPT-4 and human share similar evaluation results in the paper.
We use GPT-4's evaluation as the ground-truth. We also show that GPT-4 and human share similar evaluation results in the paper.
I got an output file named res_0.json
which contains outputs of LLM. Do I need to put the outputs into GPT4 API to get the evaluation as the groundtruth? It means that there isn't an evaluation process in the code now, right?
Thank you for your code and effort and hope for your reply!