Qwen-VL
[BUG] AttributeError: 'QWenTokenizer' object has no attribute 'IMAGE_ST' when merging LoRA weights
Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
Is there an existing answer for this in FAQ?
- [X] I have searched FAQ
Current Behavior
No response
Expected Behavior
No response
Steps To Reproduce
No response
Environment
- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):
Anything else?
No response
Same error here.
@rover5056 @lainxx I moved super().__init__(**kwargs) below the IMAGE_ST assignment in tokenization_qwen.py.
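For anyone wondering why the order matters: transformers' PreTrainedTokenizer.__init__ can call back into subclass methods, so super().__init__() must run after self.IMAGE_ST is set. Here is a hypothetical, self-contained sketch of the failure mode (toy classes for illustration, not the actual Qwen-VL code):

```python
# Toy illustration (not the real Qwen-VL code): the base-class __init__ calls an
# overridden hook before the subclass has set IMAGE_ST, reproducing the error.
class BaseTokenizer:
    def __init__(self, **kwargs):
        # transformers' PreTrainedTokenizer similarly invokes subclass methods here
        self.special_tokens = self.get_special_tokens()


class BrokenQWenTokenizer(BaseTokenizer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)  # runs before IMAGE_ST exists -> AttributeError
        self.IMAGE_ST = ("<img>", "</img>", "<imgpad>")

    def get_special_tokens(self):
        return self.IMAGE_ST  # reads the attribute too early


class FixedQWenTokenizer(BaseTokenizer):
    def __init__(self, **kwargs):
        self.IMAGE_ST = ("<img>", "</img>", "<imgpad>")  # set first
        super().__init__(**kwargs)  # now the hook can see IMAGE_ST

    def get_special_tokens(self):
        return self.IMAGE_ST


try:
    BrokenQWenTokenizer()
except AttributeError as e:
    print("broken:", e)  # ... object has no attribute 'IMAGE_ST'

print("fixed:", FixedQWenTokenizer().special_tokens)
```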
Thank you, resolved
Works for me, thanks!
Just saw this: you edited the file in .cache, but that copy is placed there at runtime (your file is copied over and then executed from .cache), so editing it there has no effect. You need to edit the file inside the path of the checkpoint you downloaded.
On 2024-05-11 14:47, Hao Lu wrote:
Hello! I found that my tokenization_qwen.py defaults to:

```python
def __init__(
    self,
    vocab_file,
    errors="replace",
    image_start_tag='<img>',
    image_end_tag='</img>',
    image_pad_tag='<imgpad>',
    ref_start_tag='<ref>',
    ref_end_tag='</ref>',
    box_start_tag='<box>',
    box_end_tag='</box>',
    quad_start_tag='<quad>',
    quad_end_tag='</quad>',
    **kwargs,
):
    super().__init__(**kwargs)
    self.image_start_tag = image_start_tag
    self.image_end_tag = image_end_tag
    self.image_pad_tag = image_pad_tag
    self.ref_start_tag = ref_start_tag
    self.ref_end_tag = ref_end_tag
    self.box_start_tag = box_start_tag
    self.box_end_tag = box_end_tag
    self.quad_start_tag = quad_start_tag
    self.quad_end_tag = quad_end_tag
    self.IMAGE_ST = (
        ref_start_tag, ref_end_tag,
        box_start_tag, box_end_tag,
        quad_start_tag, quad_end_tag,
        image_start_tag, image_end_tag, image_pad_tag,
    )
```

so I was still getting this error. And every time I move super().__init__(**kwargs) below the IMAGE_ST assignment, AutoPeftModelForCausalLM.from_pretrained(model_path, device_map="cuda", trust_remote_code=True).eval() seems to write the file back again on every load. I found that what is actually read is .cache/huggingface/modules/transformers_modules/checkpoint-2383/tokenization_qwen.py? Can anyone shed some light on this? Thanks!
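As the comment above says, with trust_remote_code=True transformers copies the checkpoint's .py files into ~/.cache/huggingface/modules/transformers_modules/ and imports them from there, so the cached copy is refreshed from the checkpoint on every load and any edits made in .cache are lost. A hypothetical cleanup sketch (paths are examples, adjust to your setup): first edit tokenization_qwen.py inside the downloaded checkpoint directory, then remove the stale cached copies so the next load picks up the edited file:

```python
# Hypothetical sketch: after fixing tokenization_qwen.py in the checkpoint
# directory itself, drop the stale copies in the HF modules cache so the next
# from_pretrained(..., trust_remote_code=True) re-copies the edited file.
from pathlib import Path

modules_cache = Path.home() / ".cache" / "huggingface" / "modules" / "transformers_modules"

if modules_cache.exists():
    for stale in modules_cache.rglob("tokenization_qwen.py"):
        print("removing stale cached copy:", stale)
        stale.unlink()
```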
Maybe we can all try transformers==4.32, the officially suggested version; it fixes this error.
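If you pin the version (e.g. pip install "transformers==4.32.0", assuming any 4.32.x build works), a quick sanity check that the pinned build is the one actually being imported:

```python
# Verify the interpreter is importing the suggested transformers version.
import transformers

print(transformers.__version__)
assert transformers.__version__.startswith("4.32"), (
    f"expected transformers 4.32.x, got {transformers.__version__}"
)
```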
Same problem here.