
[Usage]: ValueError: The checkpoint you are trying to load has model type `qwen2_5_vl` but Transformers does not recognize this architecture

SwarmKit opened this issue 10 months ago · 16 comments

Your current environment

    raise e
  File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/config.py", line 225, in get_config
    config = AutoConfig.from_pretrained(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/models/auto/configuration_auto.py", line 1073, in from_pretrained
    raise ValueError(
ValueError: The checkpoint you are trying to load has model type `qwen2_5_vl` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
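Before upgrading anything, it can help to confirm which architecture the checkpoint actually declares by reading its config.json directly. A minimal sketch; the checkpoint directory and its contents here are stand-ins, so in practice point it at your local model folder (e.g. /root/model/Qwen2.5-VL-7B-Instruct):

```python
import json
import tempfile
from pathlib import Path

# Stand-in checkpoint directory; replace with your local model folder.
checkpoint = Path(tempfile.mkdtemp())
(checkpoint / "config.json").write_text(json.dumps({"model_type": "qwen2_5_vl"}))

# Hugging Face checkpoints declare their architecture in config.json;
# this is the value AutoConfig.from_pretrained fails to recognize.
model_type = json.loads((checkpoint / "config.json").read_text())["model_type"]
print(model_type)
```

If this prints `qwen2_5_vl` but your installed transformers predates 4.49.0, the ValueError above is expected.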

How would you like to use vllm

docker run -it --name Qwen2.5-VL-7B-Instruct \
--gpus all \
-e TZ=Asia/Shanghai \
-v /data/modelsfiles/:/root/model \
-p 8444:8000 \
--ipc=host \
vllm/vllm-openai:v0.7.2 \
--model /root/model/Qwen2.5-VL-7B-Instruct \
--tensor-parallel-size 2 \
--pipeline-parallel-size 2 \
--max-num-batched-tokens 131072 \
--max-num-seqs 4 \
--served-model-name Qwen2.5-VL-7B-Instruct
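Note that the parallelism flags multiply: vLLM's distributed world size is tensor_parallel_size × pipeline_parallel_size, so the command above needs four visible GPUs. A quick sanity check:

```python
# Flags from the docker run command above.
tensor_parallel_size = 2
pipeline_parallel_size = 2

# vLLM's world size is the product of the two; each rank occupies one GPU.
world_size = tensor_parallel_size * pipeline_parallel_size
print(world_size)
```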

Before submitting a new issue...

  • [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.

SwarmKit avatar Feb 18 '25 02:02 SwarmKit

The class Qwen2_5_VLProcessor has been removed in transformers 4.49.0. However, the file qwen2_5_vl.py in vLLM still references this class, leading to an import error.

myg133 avatar Feb 18 '25 03:02 myg133

How can I deal with this problem?

The class Qwen2_5_VLProcessor has been removed in transformers 4.49.0. However, the file qwen2_5_vl.py in vLLM still references this class, leading to an import error.

LaoWangGB avatar Feb 18 '25 04:02 LaoWangGB

How can I deal with this problem?

The class Qwen2_5_VLProcessor has been removed in transformers 4.49.0. However, the file qwen2_5_vl.py in vLLM still references this class, leading to an import error.

Use transformers v4.49.0.dev0, OR modify models/qwen2_5_vl.py to change Qwen2_5_VLProcessor to AutoImageProcessor from transformers v4.49.0.

myg133 avatar Feb 18 '25 04:02 myg133

Using transformers v4.49.0.dev0 seems convenient.

How can I deal with this problem?

The class Qwen2_5_VLProcessor has been removed in transformers 4.49.0. However, the file qwen2_5_vl.py in vLLM still references this class, leading to an import error.

Use transformers v4.49.0.dev0, OR modify models/qwen2_5_vl.py to change Qwen2_5_VLProcessor to AutoImageProcessor from transformers v4.49.0.

LaoWangGB avatar Feb 18 '25 05:02 LaoWangGB

Using transformers v4.49.0.dev0 seems convenient.

How can I deal with this problem?

The class Qwen2_5_VLProcessor has been removed in transformers 4.49.0. However, the file qwen2_5_vl.py in vLLM still references this class, leading to an import error.

Use transformers v4.49.0.dev0, OR modify models/qwen2_5_vl.py to change Qwen2_5_VLProcessor to AutoImageProcessor from transformers v4.49.0.

Yeah, but I got a new issue: The Transformers implementation of Qwen2_5_VLForConditionalGeneration is not compatible with vLLM.

And I don't know how to fix it.

myg133 avatar Feb 18 '25 10:02 myg133

Facing the same issue now.

OKC13 avatar Feb 19 '25 01:02 OKC13

vLLM seems to have solved this problem; you can try installing vLLM from the latest source code on GitHub.

LaoWangGB avatar Feb 19 '25 03:02 LaoWangGB

vLLM seems to have solved this problem; you can try installing vLLM from the latest source code on GitHub.

I found a new bug: RuntimeError: Failed to import transformers.models.qwen2_5_vl.modeling_qwen2_5_vl because of the following error (look up to see its traceback): module 'cv2.dnn' has no attribute 'DictValue'

zsj-mdk avatar Feb 19 '25 06:02 zsj-mdk

vLLM seems to have solved this problem; you can try installing vLLM from the latest source code on GitHub.

I found a new bug: RuntimeError: Failed to import transformers.models.qwen2_5_vl.modeling_qwen2_5_vl because of the following error (look up to see its traceback): module 'cv2.dnn' has no attribute 'DictValue'

Me too. Do you know how to solve it?

EvanSong77 avatar Feb 19 '25 06:02 EvanSong77

I built a Docker container from the main branch (4c822298) code and it ran successfully. You need to pin the transformers version to transformers >= 4.49.0 in requirements-common.txt.

Also, remember to update the model; its preprocessor_config.json was modified some time ago:

Author: shuai bai <[email protected]>
Date:   Sat Feb 15 09:34:42 2025 +0000

    Update preprocessor_config.json

diff --git a/preprocessor_config.json b/preprocessor_config.json
index df41168..7f3b746 100644
--- a/preprocessor_config.json
+++ b/preprocessor_config.json
@@ -14,6 +14,6 @@
     0.26130258,
     0.27577711
   ],
-  "image_processor_type": "Qwen2_5_VLImageProcessor",
+  "image_processor_type": "Qwen2VLImageProcessor",
   "processor_class": "Qwen2_5_VLProcessor"
 }
\ No newline at end of file
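If you cannot re-download the model, the same preprocessor_config.json rename can be applied in place with a few lines of Python. A sketch, assuming a local checkpoint directory (a temporary stand-in is created here so the snippet is self-contained; in practice point ckpt at your model folder):

```python
import json
import tempfile
from pathlib import Path

# Stand-in for the local checkpoint directory; replace with e.g.
# /root/model/Qwen2.5-VL-7B-Instruct in practice.
ckpt = Path(tempfile.mkdtemp())
cfg_path = ckpt / "preprocessor_config.json"
cfg_path.write_text(json.dumps({
    "image_processor_type": "Qwen2_5_VLImageProcessor",
    "processor_class": "Qwen2_5_VLProcessor",
}))

# Apply the same rename as the upstream commit shown above.
cfg = json.loads(cfg_path.read_text())
if cfg.get("image_processor_type") == "Qwen2_5_VLImageProcessor":
    cfg["image_processor_type"] = "Qwen2VLImageProcessor"
    cfg_path.write_text(json.dumps(cfg, indent=2))

print(json.loads(cfg_path.read_text())["image_processor_type"])
```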

linzyjx avatar Feb 19 '25 06:02 linzyjx

I built a Docker container from the main branch (4c822298) code and it ran successfully. You need to pin the transformers version to transformers >= 4.49.0 in requirements-common.txt.

Also, remember to update the model; its preprocessor_config.json was modified some time ago:

Author: shuai bai <[email protected]>
Date:   Sat Feb 15 09:34:42 2025 +0000

    Update preprocessor_config.json

diff --git a/preprocessor_config.json b/preprocessor_config.json
index df41168..7f3b746 100644
--- a/preprocessor_config.json
+++ b/preprocessor_config.json
@@ -14,6 +14,6 @@
     0.26130258,
     0.27577711
   ],
-  "image_processor_type": "Qwen2_5_VLImageProcessor",
+  "image_processor_type": "Qwen2VLImageProcessor",
   "processor_class": "Qwen2_5_VLProcessor"
 }
\ No newline at end of file

First, I updated transformers to v4.49.0.dev0:

FROM llms/llm-api:vllm-0.7.2
COPY transformers.zip .
RUN pip install transformers.zip 

And then I hit: RuntimeError: Failed to import transformers.models.qwen2_5_vl.modeling_qwen2_5_vl because of the following error (look up to see its traceback): module 'cv2.dnn' has no attribute 'DictValue'. So I updated the Dockerfile:

FROM llms/llm-api:vllm-0.7.2
COPY transformers.zip .
# RUN pip install opencv-python --upgrade
RUN pip install opencv-fixer==0.2.5
# execute
RUN python -c "from opencv_fixer import AutoFix; AutoFix()"
RUN pip install transformers.zip 

But now I get: ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details.

EvanSong77 avatar Feb 19 '25 07:02 EvanSong77

Facing the same issue now.

Set the arg --model-impl vllm to resolve it.
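For reference, that flag goes alongside the other serve arguments; applied to the docker run command from the top of the thread, it would look like this (a sketch, not re-verified here):

```shell
docker run -it --gpus all --ipc=host \
  -v /data/modelsfiles/:/root/model \
  -p 8444:8000 \
  vllm/vllm-openai:v0.7.2 \
  --model /root/model/Qwen2.5-VL-7B-Instruct \
  --model-impl vllm \
  --served-model-name Qwen2.5-VL-7B-Instruct
```

--model-impl vllm forces the native vLLM model implementation instead of falling back to the Transformers one.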

myg133 avatar Feb 20 '25 01:02 myg133

I built a Docker container from the main branch (4c822298) code and it ran successfully. You need to pin the transformers version to transformers >= 4.49.0 in requirements-common.txt. Also, remember to update the model; its preprocessor_config.json was modified some time ago:

Author: shuai bai <[email protected]>
Date:   Sat Feb 15 09:34:42 2025 +0000

    Update preprocessor_config.json

diff --git a/preprocessor_config.json b/preprocessor_config.json
index df41168..7f3b746 100644
--- a/preprocessor_config.json
+++ b/preprocessor_config.json
@@ -14,6 +14,6 @@
     0.26130258,
     0.27577711
   ],
-  "image_processor_type": "Qwen2_5_VLImageProcessor",
+  "image_processor_type": "Qwen2VLImageProcessor",
   "processor_class": "Qwen2_5_VLProcessor"
 }
\ No newline at end of file

First, I updated transformers to v4.49.0.dev0:

FROM llms/llm-api:vllm-0.7.2
COPY transformers.zip .
RUN pip install transformers.zip

And then I hit: RuntimeError: Failed to import transformers.models.qwen2_5_vl.modeling_qwen2_5_vl because of the following error (look up to see its traceback): module 'cv2.dnn' has no attribute 'DictValue'. So I updated the Dockerfile:

FROM llms/llm-api:vllm-0.7.2
COPY transformers.zip .
# RUN pip install opencv-python --upgrade
RUN pip install opencv-fixer==0.2.5
# execute
RUN python -c "from opencv_fixer import AutoFix; AutoFix()"
RUN pip install transformers.zip 

but now: ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details.

Hmm... I also had this problem with transformers 4.49.0 when building with old code. I solved it yesterday by pulling the latest main branch code and building from scratch with vllm/Dockerfile. Maybe something has changed since the last release, but I haven't found the corresponding commit yet.

linzyjx avatar Feb 20 '25 04:02 linzyjx

vLLM seems to have solved this problem; you can try installing vLLM from the latest source code on GitHub.

I found a new bug: RuntimeError: Failed to import transformers.models.qwen2_5_vl.modeling_qwen2_5_vl because of the following error (look up to see its traceback): module 'cv2.dnn' has no attribute 'DictValue'

With vllm==0.6.5 I see: Warning: Your installation of OpenCV appears to be broken: module 'cv2.dnn' has no attribute 'DictValue'. Please follow the instructions at https://github.com/opencv/opencv-python/issues/884 to correct your environment. The import of cv2 has been skipped.

zsj-mdk avatar Feb 20 '25 06:02 zsj-mdk

I have solved this problem, but now I hit this one: https://github.com/vllm-project/vllm/issues/12988

EvanSong77 avatar Feb 20 '25 07:02 EvanSong77

I have solved this problem, but now I hit this one: #12988

How did you solve this problem?

SonpKing avatar Mar 06 '25 11:03 SonpKing

Me too.

woodx9 avatar Mar 22 '25 09:03 woodx9

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

github-actions[bot] avatar Jun 22 '25 02:06 github-actions[bot]

@EvanSong77 please share your solution

Holmes2002 avatar Jun 27 '25 10:06 Holmes2002

@EvanSong77 please share your solution

If you are using an older version, please refer to this solution (https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct-AWQ/discussions/8). I recommend using the new version; I am currently on v0.8.3 and it works fine.

EvanSong77 avatar Jun 30 '25 01:06 EvanSong77

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

github-actions[bot] avatar Sep 28 '25 02:09 github-actions[bot]

This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!

github-actions[bot] avatar Oct 29 '25 02:10 github-actions[bot]