
Reproducing HumanEval Benchmark Results

Open YSLIU627 opened this issue 1 year ago • 0 comments

Hi, we re-ran the training phase and evaluated the trained model with the evaluation script in your repo. However, we found a performance gap on the HumanEval benchmark between our re-trained model and the score released in the paper. We also evaluated the released model on Hugging Face and report the results in the attached screenshot.

[screenshot 20240806-135618: HumanEval results table]

Here the first column is the score released in the paper, the second column is the evaluation result of the released model, and the last column is the evaluation result of our re-trained model. We did not modify any hyper-parameters before training, and the loss curve of our re-trained model is identical to the one you released in the other issue (issue #6). We are not sure whether you evaluated the model saved at the end of training or some intermediate checkpoint (for example, checkpoint-2000). We would greatly appreciate any help you could offer!
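
For reference, our evaluation loads the model roughly as in the sketch below (a minimal sketch, not the repo's evaluation harness; the model ID, checkpoint path, and prompt are placeholders we chose for illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Either the released model on Hugging Face or a local intermediate checkpoint
# (e.g. "output/checkpoint-2000"); both paths here are assumptions for illustration.
model_id = "bigcode/starcoder2-15b-instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

# Greedy generation on a toy HumanEval-style prompt, just to sanity-check the checkpoint.
prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```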

YSLIU627 · Aug 06 '24 06:08