Wang, Chang
As the code shows (https://huggingface.co/THUDM/chatglm2-6b/blob/main/tokenization_chatglm.py#L91), yes, correct: `pad_token`/`pad_token_id` is a `property` with no setter. If I comment out `tokenizer.pad_token = tokenizer.eos_token`, I can get the accuracy for chatglm & chatglm3, but...
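For reference, a minimal sketch of guarding that assignment so tokenizers whose `pad_token` is a read-only property (like chatglm's) are left untouched; the model id and the try/except pattern here are illustrative, not the actual fix in the PR:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)

# Only assign a pad token when none is defined. Custom tokenizers such as
# ChatGLM's expose `pad_token` as a property without a setter, so a bare
# assignment raises AttributeError; in that case keep the built-in value.
if tokenizer.pad_token is None:
    try:
        tokenizer.pad_token = tokenizer.eos_token
    except AttributeError:
        pass
```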
The ITREX PR https://github.com/intel/intel-extension-for-transformers/pull/1559 applies the feature. @YIYANGCAI, could you help me double-check whether it has been applied?
The format scan was improved by https://github.com/intel/intel-extension-for-transformers/pull/1647, which has been merged.
Please install the requirements.txt under the qat example folder: https://github.com/intel/intel-extension-for-transformers/blob/main/examples/huggingface/pytorch/text-to-image/quantization/qat/requirements.txt. `accelerate` is listed there and is necessary, but the error message shows the env is missing that package; the transformers version...
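A quick sanity check along those lines, which fails fast when the example's dependencies were not installed (a sketch; the package list and message are illustrative):

```python
import importlib.util

# Fail fast if the QAT example's dependencies are missing; `accelerate`
# is the package the original error message complained about.
for pkg in ("accelerate", "transformers"):
    if importlib.util.find_spec(pkg) is None:
        raise ImportError(
            f"'{pkg}' is not installed; run `pip install -r requirements.txt` "
            "from the qat example folder first."
        )
```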
Phi and Mixtral have been added to transformers, so Phi and Mixtral are added first with #1625. chatglm will be added again once https://github.com/huggingface/transformers/pull/27883 is ready.
Could you confirm that `torch.add` in `set_local` makes it fall back? Saving the qmodel's config.json file and checking it might be clearer.
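For the config.json route, a sketch of what I mean (the path and the config layout are assumptions; adjust to the actual saved qmodel format):

```python
import json

# Load the saved quantized model's config and print every entry whose key
# path mentions "add", to verify whether the `set_local` fallback for
# `torch.add` actually took effect.
with open("qmodel/config.json") as f:
    cfg = json.load(f)

def walk(node, path=""):
    """Recursively visit the config and report keys/values mentioning 'add'."""
    if isinstance(node, dict):
        for key, value in node.items():
            walk(value, f"{path}/{key}")
    elif "add" in path.lower():
        print(path, "->", node)

walk(cfg)
```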