[BUG] Error while running `Chat with video`
Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
Is there an existing answer for this in FAQ?
- [X] I have searched FAQ
Current Behavior
When running the `Chat with video` code from the README
https://github.com/OpenBMB/MiniCPM-V/blob/ab1141ee450ee1a3b45b3005eb1f5d4e8811b7eb/README.md?plain=1#L1416-L1469
with the VCGBench video v_4LF0hL-mgks.mp4 and the question "is the person first appears in the video outdoor", I got the following traceback:
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
File "/**/chat_with_video.py", line 41, in <module>
answer = model.chat(
File "/~/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-V-2_6/4e4be000cd81feda8b96d14b53f1791b4010b038/modeling_minicpmv.py", line 348, in chat
inputs = processor(
File "/~/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-V-2_6/4e4be000cd81feda8b96d14b53f1791b4010b038/processing_minicpmv.py", line 67, in __call__
return self._convert_images_texts_to_inputs(image_inputs, text, max_slice_nums=max_slice_nums, use_image_id=use_image_id, max_length=max_length, **kwargs)
File "/~/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-V-2_6/4e4be000cd81feda8b96d14b53f1791b4010b038/processing_minicpmv.py", line 165, in _convert_images_texts_to_inputs
input_ids, image_bounds = self._convert(final_text, max_length)
File "/~/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-V-2_6/4e4be000cd81feda8b96d14b53f1791b4010b038/processing_minicpmv.py", line 121, in _convert
image_bounds = torch.hstack(
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 124 but got size 123 for tensor number 1 in the list.
Could you please help me solve this problem? Thanks!
Expected Behavior
The model successfully returns an answer.
Steps To Reproduce
- In the MiniCPM-V environment
- Run the code from https://github.com/OpenBMB/MiniCPM-V/blob/ab1141ee450ee1a3b45b3005eb1f5d4e8811b7eb/README.md?plain=1#L1416-L1469
- Change the video file to [v_4LF0hL-mgks.mp4](https://drive.google.com/file/d/1osAM0waEP23fvHtrIcgsdqfFkUgOhlq9/view?usp=sharing) (downloadable from the link) and the question to "is the person first appears in the video outdoor" (a minimal sketch of the modified snippet is included after the traceback below)
- See the error:
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
File "/mnt/afs_intern/shuyan/experiments/MiniCPM-V/video_chat.py", line 41, in <module>
answer = model.chat(
File "/mnt/afs/shuyan/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-V-2_6/4e4be000cd81feda8b96d14b53f1791b4010b038/modeling_minicpmv.py", line 348, in chat
inputs = processor(
File "/mnt/afs/shuyan/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-V-2_6/4e4be000cd81feda8b96d14b53f1791b4010b038/processing_minicpmv.py", line 67, in __call__
return self._convert_images_texts_to_inputs(image_inputs, text, max_slice_nums=max_slice_nums, use_image_id=use_image_id, max_length=max_length, **kwargs)
File "/mnt/afs/shuyan/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-V-2_6/4e4be000cd81feda8b96d14b53f1791b4010b038/processing_minicpmv.py", line 165, in _convert_images_texts_to_inputs
input_ids, image_bounds = self._convert(final_text, max_length)
File "/mnt/afs/shuyan/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-V-2_6/4e4be000cd81feda8b96d14b53f1791b4010b038/processing_minicpmv.py", line 121, in _convert
image_bounds = torch.hstack(
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 124 but got size 123 for tensor number 1 in the list.
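For reference, here is a minimal sketch of the modified snippet. It is based on the README's decord example; the frame-sampling helper is simplified and illustrative rather than the exact README code, and only the video path and question differ from the original.

```python
# Minimal sketch of the modified "Chat with video" snippet (based on the README's
# decord example; the frame sampling below is simplified and illustrative).
import torch
from PIL import Image
from decord import VideoReader, cpu
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained(
    'openbmb/MiniCPM-V-2_6', trust_remote_code=True,
    attn_implementation='sdpa', torch_dtype=torch.bfloat16
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)

MAX_NUM_FRAMES = 64

def encode_video(video_path):
    # Sample roughly one frame per second, capped at MAX_NUM_FRAMES.
    vr = VideoReader(video_path, ctx=cpu(0))
    sample_fps = round(vr.get_avg_fps())
    frame_idx = list(range(0, len(vr), sample_fps))
    if len(frame_idx) > MAX_NUM_FRAMES:
        gap = len(frame_idx) / MAX_NUM_FRAMES
        frame_idx = [frame_idx[int(i * gap + gap / 2)] for i in range(MAX_NUM_FRAMES)]
    frames = vr.get_batch(frame_idx).asnumpy()
    return [Image.fromarray(f.astype('uint8')) for f in frames]

# The only two changes relative to the README example:
video_path = "v_4LF0hL-mgks.mp4"
question = "is the person first appears in the video outdoor"

frames = encode_video(video_path)
msgs = [{'role': 'user', 'content': frames + [question]}]

# Decode params as given in the README example.
params = {}
params["use_image_id"] = False
params["max_slice_nums"] = 2  # use 1 if cuda OOM and video resolution > 448*448

answer = model.chat(msgs=msgs, tokenizer=tokenizer, **params)
print(answer)
```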
Environment
- OS: Ubuntu
- Python: 3.10.14
- Transformers: 4.40.0
- PyTorch: 2.1.2
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.1
Anything else?
No response
Hi, I had the same error. Changing `params["max_slice_nums"] = 2` to `1` seems to work; the author's comment in the snippet already suggests it: `# use 1 if cuda OOM and video resolution > 448*448`.
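For clarity, this is the only change needed; everything else stays as in the README snippet, and `msgs` and `tokenizer` are the objects built there:

```python
# Workaround sketch: set max_slice_nums to 1 instead of 2.
# The inline comment is the author's note from the README snippet.
params = {}
params["use_image_id"] = False
params["max_slice_nums"] = 1  # use 1 if cuda OOM and video resolution > 448*448

answer = model.chat(msgs=msgs, tokenizer=tokenizer, **params)
```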
A similar issue also mentions that the video resolution should be limited: https://github.com/OpenBMB/MiniCPM-V/issues/509
Yes, it's true that changing `max_slice_nums` to 1 avoids the error.
Marked.