
Can not reproduce results on the table

Open simplelifetime opened this issue 2 years ago • 7 comments

Great thanks for your work! I tried exactly the same settings, but I got different results on MMLU and BBH. The alpaca-tuned llama always performs worse than the original llama (7B or 13B). Is there anything wrong with the loaded models?

simplelifetime avatar Apr 21 '23 15:04 simplelifetime

Thanks for raising this issue, we are currently investigating it. Based on initial checking, it may be due to changed behavior of LlamaTokenizer when we upgraded the transformers library version (from git+https://github.com/huggingface/transformers.git@057e1d74733f52817dc05b673a340b4e3ebea08c to 4.28.1).

chiayewken avatar Apr 24 '23 03:04 chiayewken

Thanks a lot! So can the provided transformers version (git+https://github.com/huggingface/transformers.git@057e1d74733f52817dc05b673a340b4e3ebea08c) help me reproduce the correct results?

simplelifetime avatar Apr 24 '23 03:04 simplelifetime

We are currently retesting the models, but it would be a great help if you could also try the older transformers version (pip install git+https://github.com/huggingface/transformers.git@057e1d74733f52817dc05b673a340b4e3ebea08c). If you can reproduce the results with it, then we know the cause of the issue for sure, and we can revert to this transformers version in the short term. In the long term, we may need to debug the LlamaTokenizer in the newer library version.
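One way to guard against this kind of silent version drift is to compare the installed library version against the one the results were reported with. This is a library-free sketch with an illustrative hard-coded version string (a real check would read `transformers.__version__`); the helper name `parse_version` is ours:

```python
def parse_version(version):
    """Parse a dotted version string like '4.28.1' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

# Illustrative values, not read from the environment: a runtime guard
# could warn when the installed version differs from the verified one.
installed = "4.28.1"
verified = "4.28.1"
print(parse_version(installed) >= parse_version(verified))  # -> True
```

Tuple comparison handles multi-digit components correctly (e.g. 4.28.1 > 4.3.0), which a plain string comparison would get wrong.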

chiayewken avatar Apr 24 '23 03:04 chiayewken

We have confirmed that the problem is due to the transformers library version; this has been fixed in the latest commit. For example, the command python main.py mmlu --model_name llama --model_path chavinlo/alpaca-native gives a result of Average accuracy: 0.416. Would you mind trying on your end to see if it is fixed for you too?

chiayewken avatar Apr 24 '23 06:04 chiayewken


Thanks for your reply. I've tried again and the results seem to be fine. Can you provide more details about the cause of this problem, so I can avoid this kind of version conflict in the future? I'd be very grateful!

simplelifetime avatar Apr 24 '23 07:04 simplelifetime

No problem. We are still working to ensure that the issue is fully resolved in the newer transformers version; it is a subtle issue, as the newer LlamaTokenizer tokenizes whitespace a bit differently.
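A tokenizer change like this can be diagnosed by tokenizing the same prompt under both library versions and locating where the token id sequences diverge. This is a minimal sketch; the token ids below are illustrative stand-ins, not actual LlamaTokenizer output, and the helper name `first_divergence` is ours:

```python
def first_divergence(tokens_old, tokens_new):
    """Return the index of the first position where two token id
    sequences differ, or None if they are identical."""
    for i, (a, b) in enumerate(zip(tokens_old, tokens_new)):
        if a != b:
            return i
    if len(tokens_old) != len(tokens_new):
        # One sequence is a strict prefix of the other.
        return min(len(tokens_old), len(tokens_new))
    return None

# Illustrative only: suppose the newer tokenizer inserts an extra
# whitespace token at the start of the encoded prompt.
old_ids = [1, 15043, 3186]
new_ids = [1, 99, 15043, 3186]
print(first_divergence(old_ids, new_ids))  # -> 1
```

An early divergence at a whitespace position would shift every subsequent token, which is enough to change few-shot prompt behavior and hence benchmark scores.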

chiayewken avatar Apr 24 '23 11:04 chiayewken

Hi, may I ask for an update on this issue?

sglucas avatar Oct 26 '23 18:10 sglucas