lm-evaluation-harness
chatglm2 acc=0 on lambada_openai dataset, is this correct?
I also validated chatglm and chatglm3, and they both work. Could you help find the root cause? See https://huggingface.co/THUDM/chatglm2-6b/discussions/97. Command used: python main.py --model hf-causal --model_args pretrained=THUDM/chatglm2-6b,trust_remote_code=True --tasks lambada_openai --limit 10 --batch_size 1 --no_cache
When I run this I get AttributeError: property 'pad_token_id' of 'ChatGLMTokenizer' object has no setter. You're seeing it run though?
As the code shows: https://huggingface.co/THUDM/chatglm2-6b/blob/main/tokenization_chatglm.py#L91
Yes, correct: pad_token / pad_token_id is a property and has no setter.
When I comment out tokenizer.pad_token = tokenizer.eos_token, I get reasonable accuracy for chatglm and chatglm3, but acc=0 for chatglm2.
Could you give me some help?
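For reference, this is roughly what I do instead of commenting the line out entirely; the helper name is mine, not something that exists in lm-evaluation-harness, so treat it as a sketch:

```python
# Sketch of a guarded padding-token fallback. `ensure_pad_token` is a
# hypothetical helper, not an existing function in lm-evaluation-harness.
from transformers import PreTrainedTokenizerBase


def ensure_pad_token(tokenizer: PreTrainedTokenizerBase) -> None:
    """Fall back to EOS as the pad token only when the tokenizer allows it."""
    if tokenizer.pad_token is not None:
        return  # already has a pad token, nothing to change
    try:
        # ChatGLM2's tokenizer exposes pad_token_id as a read-only property,
        # so this assignment raises AttributeError there.
        tokenizer.pad_token = tokenizer.eos_token
    except AttributeError:
        # Read-only property: leave the tokenizer untouched.
        pass
```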
ChatGLM-6B and ChatGLM3-6B both seem to work fine. I'm still unsure what's going wrong with ChatGLM2 and would very much like to figure out how to fix it.
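One thing that might help narrow it down: lambada_openai scores greedy prediction of the final word, so it could be worth checking whether the ChatGLM2 tokenizer splits the context/continuation boundary differently from the other two tokenizers. The snippet below is only a diagnostic sketch; the chatglm and chatglm3 model IDs are assumed from this thread, and the sample sentence is made up:

```python
# Diagnostic sketch: compare how each ChatGLM tokenizer encodes a LAMBADA-style
# context/target split. Model IDs for chatglm-6b and chatglm3-6b are assumed,
# not confirmed in this thread.
from transformers import AutoTokenizer

context = "He handed her the keys and she unlocked the"
target = " door"  # lambada_openai scores greedy prediction of the final word

for name in ["THUDM/chatglm-6b", "THUDM/chatglm2-6b", "THUDM/chatglm3-6b"]:
    tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
    ctx_ids = tok.encode(context, add_special_tokens=False)
    full_ids = tok.encode(context + target, add_special_tokens=False)
    # If the context encoding is not a prefix of the full encoding, a
    # context/continuation split will disagree with the model's tokenization.
    print(name,
          "prefix_ok:", full_ids[: len(ctx_ids)] == ctx_ids,
          "continuation:", full_ids[len(ctx_ids):])
```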