instruct-eval
Cannot reproduce the results in the table
Great thanks for your work! I tried exactly the same settings but got different results on MMLU and BBH. The alpaca-tuned LLaMA always performs worse than the original LLaMA (7B or 13B). Is there anything wrong with the loaded models?
Thanks for raising this issue; we are currently investigating it. Based on initial checks, it may be due to changed behavior of LlamaTokenizer when we upgraded the transformers library version (from `git+https://github.com/huggingface/transformers.git@057e1d74733f52817dc05b673a340b4e3ebea08c` to 4.28.1).
Thanks a lot! So will the provided transformers version (`git+https://github.com/huggingface/transformers.git@057e1d74733f52817dc05b673a340b4e3ebea08c`) help me reproduce the correct results?
We are currently retesting the models, but it would be a great help if you could also try with the older transformers version (`pip install git+https://github.com/huggingface/transformers.git@057e1d74733f52817dc05b673a340b4e3ebea08c`). If you can also reproduce the results, then we know the cause of the issue for sure, and we can revert to this transformers version in the short term. In the long term, we may need to debug the LlamaTokenizer in the newer library version.
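If you want to pin the dependency yourself in the meantime, a plain git reference in a requirements file should work (standard pip syntax; adjust to your own setup):

```
# requirements.txt: pin transformers to the known-good commit
git+https://github.com/huggingface/transformers.git@057e1d74733f52817dc05b673a340b4e3ebea08c
```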
We have confirmed that the problem is due to the transformers library version; this has been fixed in the latest commit. For example, the command `python main.py mmlu --model_name llama --model_path chavinlo/alpaca-native` gives a result of `Average accuracy: 0.416`. Would you mind checking on your end whether it is fixed for you too?
Thanks for your reply. I've tried again and the result seems fine now. Could you provide more details about the cause of this problem, so I can avoid version conflicts like this in the future? I'd be very grateful!
No problem. We are still working to ensure that the issue is fully resolved in the newer transformers version; it is a subtle issue, as the newer LlamaTokenizer tokenizes whitespace a bit differently.
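For reference, here is a minimal sketch (not part of this repo) that you can run under each transformers version and compare the printed tokens; `chavinlo/alpaca-native` is just the model path from the command above, substitute your own checkpoint:

```python
from transformers import AutoTokenizer

# Load the tokenizer for the model under test (any LLaMA-based checkpoint works).
tok = AutoTokenizer.from_pretrained("chavinlo/alpaca-native")

# Print how prompts with leading whitespace/newlines are split into tokens.
# Running this under 4.28.1 and under the pinned older commit should reveal
# the differing whitespace handling.
for text in ["Answer:", " Answer:", "\nAnswer:"]:
    print(repr(text), "->", tok.tokenize(text))
```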
Hi, may I ask for an update on this issue?