
BBH, gsm8k benchmark accuracy mismatch with paper

hills-code opened this issue 1 year ago • 6 comments

Thanks for your great work!

I got 23.4 and 15.1 with Llama 2 7B on BBH in the few-shot setting with and without CoT, respectively. However, the Llama paper says their BBH score reaches 32. The GSM8K accuracy also looks off: I got 9% on gsm8k and 0% on gsm8k_cot for Llama-2-7b. I also tried Llemma-7b and got 0% on gsm8k_cot. Is there a problem somewhere?

hills-code avatar Dec 07 '23 07:12 hills-code

Will look into this ASAP! @lintangsutawika is investigating BBH. It may be a matter of differing answer-extraction code; we can strive to close the gap with Llama as much as possible, but we don't have access to their extraction code, so we'll see.

I thought I had been running gsm8k_cot successfully with Llemma-7b, which is strange. Will check it out and fix!

haileyschoelkopf avatar Dec 07 '23 15:12 haileyschoelkopf

#1118 fixes BBH

lintangsutawika avatar Dec 14 '23 07:12 lintangsutawika

@haileyschoelkopf upon investigation, even with the target-space issue fixed on my end, I am still getting 0.0 for llama7b with gsm8k_cot.

Here is one of the examples; it seems the stopping criteria are broken, so the model generates nothing beyond the first word. Maybe passing different stopping criteria with --gen_kwargs would fix it. The same thing happens for the whole Llama family, chat and non-chat, so it's not an isolated bug.

{
    "doc_id": 1,
    "doc": {
      "question": "A robe takes 2 bolts of blue fiber and half that much white fiber.  How many bolts in total does it take?",
      "answer": "It takes 2/2=<<2/2=1>>1 bolt of white fiber\nSo the total amount of fabric is 2+1=<<2+1=3>>3 bolts of fabric\n#### 3"
    },
    "target": "3",
    "arguments": [
      [
        "Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\n\nA: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6.\n\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\n\nA: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5.\n\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\nA: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39.\n\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\n\nA: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8.\n\nQ: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\nA: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9.\n\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\n\nA: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29.\n\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n\nA: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33.\n\nQ: Olivia has $23. 
She bought five bagels for $3 each. How much money does she have left?\n\nA: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8.\n\nQ: A robe takes 2 bolts of blue fiber and half that much white fiber.  How many bolts in total does it take?\n\nA:",
        {
          "until": [
            "Q:",
            "\n\n"
          ],
          "do_sample": false,
          "temperature": 0.0
        }
      ]
    ],
    "resps": [
      [
        "A"
      ]
    ],
    "filtered_resps": [
      "[invalid]"
    ],
    "exact_match": 0.0
  }
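For reference, string-level stop-sequence handling would truncate the decoded text at the first occurrence of any stop marker, roughly like the sketch below (`truncate_at_stop` is a hypothetical helper for illustration, not the harness's actual implementation — the point is that with correct stopping, the full answer before the first `"\n\n"` should survive instead of collapsing to a single token):

```python
def truncate_at_stop(text, stops):
    """Cut `text` at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# With the `until` values from the logged example, a full answer survives:
print(truncate_at_stop("23 - 15 is 8. The answer is 8.\n\nQ: next question",
                       ["Q:", "\n\n"]))
# -> 23 - 15 is 8. The answer is 8.
```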

kyleliang919 avatar Dec 24 '23 15:12 kyleliang919

Upon investigation, I found something confusing that I cannot explain: this line in the gsm8k config causes the Llama family to stop generation early, and when it is removed, the models start generating full responses: https://github.com/EleutherAI/lm-evaluation-harness/blob/e4970d817ae1f8ad1fccab3d77e9ef844d332239/lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml#L9

kyleliang919 avatar Dec 24 '23 16:12 kyleliang919

I suspect the Llama tokenizer is encoding " " and "\n\n" as the same token.

kyleliang919 avatar Dec 24 '23 16:12 kyleliang919

Thanks for digging into this, it's really very much appreciated!

It sounds like the Llama/SentencePiece tokenizer is once again causing subtle issues… if you have the chance to look further, could you check the stopping-criteria code and see what the stop sequences tokenize to?

I'm wondering if this is related to an issue that affected OpenLLaMA, where the token had extra whitespace (iirc?).

haileyschoelkopf avatar Dec 24 '23 21:12 haileyschoelkopf

I was able to get 13.2% (the paper reports 14.6%) for Llama 2 7B by changing "\n\n" to "\n\n\n". There is also an extra space in the target filtering before the number.

shivamag125 avatar Jan 10 '24 12:01 shivamag125

I was able to find the problem that was causing early stopping and pushed a fix in #1268 -- will now look at the whitespace.

haileyschoelkopf avatar Jan 11 '24 17:01 haileyschoelkopf

Closing this as it appears to be resolved.

StellaAthena avatar Feb 15 '24 06:02 StellaAthena