Yuanqing Wang

6 comments by Yuanqing Wang

> Hi @brandon-lockaby Please see: #31358 for the final fix, let me know if that fixes your issue. It fixes the same issue I had locally

Hi @younesbelkada, I have...

> Hi @Lin-xs Thanks a lot! hmmm indeed there might be a bug when not using autoclasses, can you try to load the model with `AutoModelForCausalLM` instead of `LlamaModelForCausalLM`

Thank...

Hi @younesbelkada @Isotr0py, I encountered a bug when trying to use `AutoModelForCausalLM` to load the `QuantFactory/Qwen2-7B-GGUF` model. Here is the code I used:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
...
```

I think this is probably because the default `vocab_size` of `Qwen2Config` is set to `151936` in [configuration_qwen2.py](https://github.com/huggingface/transformers/blob/9af1b6a80adbac906ba770d23ddf95a147f2f0a0/src/transformers/models/qwen2/configuration_qwen2.py#L96), while the config loaded from the Qwen2 GGUF file does not contain `"vocab_size"`:

```python
...
```
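The failure mode can be sketched in isolation: when a key is absent from the GGUF metadata, the architecture default silently wins. This is a minimal illustration, not transformers' actual GGUF-parsing code; the `152064` figure is Qwen2-7B's actual vocabulary size from its `config.json`, and deriving the size from the tokenizer's token list is one plausible workaround, not necessarily the fix that was merged.

```python
# Minimal sketch (NOT transformers' actual code) of how a missing GGUF
# metadata key silently falls back to the architecture default.

QWEN2_DEFAULTS = {"vocab_size": 151936}  # default in configuration_qwen2.py


def build_config(gguf_metadata: dict) -> dict:
    """Overlay the metadata read from a GGUF file on the Qwen2 defaults."""
    config = dict(QWEN2_DEFAULTS)
    config.update(gguf_metadata)  # keys absent from the GGUF file keep defaults
    return config


def infer_vocab_size(gguf_metadata: dict):
    """One plausible workaround: when the explicit key is missing, derive
    the vocabulary size from the tokenizer's token list instead."""
    if "vocab_size" in gguf_metadata:
        return gguf_metadata["vocab_size"]
    tokens = gguf_metadata.get("tokenizer.ggml.tokens")
    return len(tokens) if tokens is not None else None


# The Qwen2-7B GGUF metadata omits "vocab_size", so the default wins and
# mismatches the checkpoint's 152064-row embedding matrix:
cfg = build_config({"hidden_size": 3584})
print(cfg["vocab_size"])  # 151936
```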

> Hey! I think this was recently fixed, so installing `4.42.xxx` should work. I just tested locally: Make sure to install `pip install -U transformers`

Thank you @ArthurZucker, now...