Clarification needed in evaluation numbers

Open saurabhkumar8112 opened this issue 1 year ago • 5 comments

Hello, thanks for the repo and the awesome work. I'm requesting clarification on the evaluation results shown in the repo.

For HumanEval zero-shot, GPT-4's score is reported here as 87.4, but in the Gemini report and the GPT-4 paper (and everywhere else), the zero-shot HumanEval score for GPT-4 is 67.
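
(For context on what these numbers measure: HumanEval scores like these are pass@1 rates, usually computed with the unbiased pass@k estimator from the Codex paper, Chen et al., 2021. A minimal sketch in Python, where `n` is the number of samples generated per problem and `c` the number that pass the unit tests:)

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021):
    n = samples generated per problem, c = samples passing all unit tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k = 1 this reduces to the fraction of correct samples, c / n:
print(pass_at_k(10, 7, 1))  # 0.7
```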

Is the "Zero-shot" prompt technique mentioned in the repo followed by Medprompt methodology? If yes, please clarify. For MMLU is explicitly clear but not for others.

Apologies if I missed anything.

saurabhkumar8112 avatar Dec 12 '23 20:12 saurabhkumar8112

@saurabhkumar8112 Look into their code; I guess it is a standard zero-shot result using the newest GPT-4 checkpoint.

dzunglt24 avatar Dec 14 '23 22:12 dzunglt24

Yes, @dzunglt24 is right -- we do have all the code we used to run on HumanEval here, and it is zero-shot with the latest GPT-4 checkpoint. The numbers reported in the OpenAI report are from many months ago, and it's likely that both model improvements and subtle differences in prompting (even in the zero-shot setting) lead to our improved performance number here.

I believe others have found that the GPT-4 numbers were underreported in the Technical Report as well, e.g. see: https://twitter.com/OwariDa/status/1732423557802782854

Our HumanEval scripts/prompt are: https://github.com/microsoft/promptbase/blob/f43cf97dd81c9595b7aec40a2201797b32532084/src/promptbase/humaneval/humaneval.py#L16
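
For anyone who wants to reproduce this, here is a minimal sketch of what a zero-shot HumanEval query can look like. It is illustrative only: the exact prompt wording and checkpoint string live in the linked humaneval.py, and the model name, prompt text, and helper function here are assumptions, not what promptbase actually ships.

```python
# Minimal zero-shot HumanEval-style query (illustrative sketch, not the
# promptbase implementation). Assumes the `openai` v1 Python client and a
# valid OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def zero_shot_completion(problem_prompt: str, model: str = "gpt-4") -> str:
    """Ask the model to complete a HumanEval function body, zero-shot.

    `problem_prompt` is the function signature plus docstring from the
    benchmark; no few-shot examples are included. The default model name
    is a placeholder: the checkpoint choice matters, since newer GPT-4
    checkpoints score higher than the one in the March technical report.
    """
    response = client.chat.completions.create(
        model=model,
        temperature=0.0,  # deterministic decoding is typical for pass@1
        messages=[
            {
                "role": "user",
                "content": "Complete the following Python function. "
                           "Return only code.\n\n" + problem_prompt,
            }
        ],
    )
    return response.choices[0].message.content
```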

Harsha-Nori avatar Dec 15 '23 20:12 Harsha-Nori

I see, that's good to know. Does that mean the Gemini report had under-reported numbers for GPT-4 (since its numbers were from an old checkpoint)?

saurabhkumar8112 avatar Dec 16 '23 06:12 saurabhkumar8112

> I see, that's good to know. Does that mean the Gemini report had under-reported numbers for GPT-4 (since its numbers were from an old checkpoint)?

I believe the Gemini report cited and pulled the HumanEval numbers directly from OpenAI's initial GPT-4 technical report (which was released in March, alongside the first version of the model). We just happened to run our own zero-shot prompts against a more recent checkpoint, so we have updated numbers here.

Harsha-Nori avatar Dec 18 '23 05:12 Harsha-Nori