[Usage]: ValueError: The checkpoint you are trying to load has model type `qwen2_5_vl` but Transformers does not recognize this architecture
### Your current environment
```text
    raise e
  File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/config.py", line 225, in get_config
    config = AutoConfig.from_pretrained(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/models/auto/configuration_auto.py", line 1073, in from_pretrained
    raise ValueError(
ValueError: The checkpoint you are trying to load has model type `qwen2_5_vl` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
```
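As the error says, Transformers releases older than the one that registered `qwen2_5_vl` cannot load this checkpoint; later in this thread the threshold is pinned as `transformers >= 4.49.0` in vLLM's `requirements-common.txt`. A minimal pre-flight check can be sketched with the stdlib only (the helper names here are my own, not part of either library):

```python
# Sketch: check whether an installed transformers version string is new
# enough for the `qwen2_5_vl` model type. The 4.49.0 threshold follows the
# requirements-common.txt bump mentioned later in this thread.
def parse_version(v: str) -> tuple:
    """Parse '4.49.0.dev0' into a comparable tuple of leading integers."""
    parts = []
    for piece in v.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break  # stop at suffixes like 'dev0' or 'rc1'
    return tuple(parts)

def supports_qwen2_5_vl(version: str) -> bool:
    """True if `version` is at least 4.49.0 (dev builds of 4.49.0 count)."""
    return parse_version(version) >= (4, 49, 0)

if __name__ == "__main__":
    print(supports_qwen2_5_vl("4.48.2"))  # False: this setup hits the error above
    print(supports_qwen2_5_vl("4.49.0"))  # True
```

You would feed it `transformers.__version__` from the environment that is raising the error.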
### How would you like to use vllm
```shell
docker run -it --name Qwen2.5-VL-7B-Instruct \
  --gpus all \
  -e TZ=Asia/Shanghai \
  -v /data/modelsfiles/:/root/model \
  -p 8444:8000 \
  --ipc=host \
  vllm/vllm-openai:v0.7.2 \
  --model /root/model/Qwen2.5-VL-7B-Instruct \
  --tensor-parallel-size 2 \
  --pipeline-parallel-size 2 \
  --max-num-batched-tokens 131072 \
  --max-num-seqs 4 \
  --served-model-name Qwen2.5-VL-7B-Instruct
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
The class `Qwen2_5_VLProcessor` has been removed in `transformers` 4.49.0. However, the file `qwen2_5_vl.py` in vLLM still references this class, leading to an import error.
How should I deal with this problem?
> The class `Qwen2_5_VLProcessor` has been removed in `transformers` 4.49.0. However, the file `qwen2_5_vl.py` in vLLM still references this class, leading to an import error. How should I deal with this problem?

Use `transformers` v4.49.0.dev0, or modify `models/qwen2_5_vl.py` to replace `Qwen2_5_VLProcessor` with `AutoImageProcessor` from `transformers` v4.49.0. Using v4.49.0.dev0 seems more convenient.
> Use `transformers` v4.49.0.dev0, or modify `models/qwen2_5_vl.py` to replace `Qwen2_5_VLProcessor` with `AutoImageProcessor` from `transformers` v4.49.0. Using v4.49.0.dev0 seems more convenient.

Yeah, but I got a new issue: `The Transformers implementation of Qwen2_5_VLForConditionalGeneration is not compatible with vLLM.` And I don't know how to fix it.
I'm facing the same issue now.
vLLM seems to have fixed this problem; you can try installing vLLM from the source code on GitHub.
> vLLM seems to have fixed this problem; you can try installing vLLM from the source code on GitHub.

I found a new bug: `RuntimeError: Failed to import transformers.models.qwen2_5_vl.modeling_qwen2_5_vl because of the following error (look up to see its traceback): module 'cv2.dnn' has no attribute 'DictValue'`
> I found a new bug: `RuntimeError: Failed to import transformers.models.qwen2_5_vl.modeling_qwen2_5_vl because of the following error (look up to see its traceback): module 'cv2.dnn' has no attribute 'DictValue'`

Me too. Do you know how to solve it?
I built a Docker container using the main (`4c822298`) code and it ran successfully.
You need to set the `transformers` version to `transformers >= 4.49.0` in `requirements-common.txt`.
Also, remember to update the model; its `preprocessor_config.json` was modified some time ago:
```diff
Author: shuai bai <[email protected]>
Date:   Sat Feb 15 09:34:42 2025 +0000

    Update preprocessor_config.json

diff --git a/preprocessor_config.json b/preprocessor_config.json
index df41168..7f3b746 100644
--- a/preprocessor_config.json
+++ b/preprocessor_config.json
@@ -14,6 +14,6 @@
     0.26130258,
     0.27577711
   ],
-  "image_processor_type": "Qwen2_5_VLImageProcessor",
+  "image_processor_type": "Qwen2VLImageProcessor",
   "processor_class": "Qwen2_5_VLProcessor"
 }
\ No newline at end of file
```
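If a local copy of the model predates that commit, the same one-line fix can be applied to the downloaded `preprocessor_config.json` with a short stdlib script. This is only a sketch: the function name is my own, and the example path just mirrors the docker mount from the top of the thread.

```python
import json
from pathlib import Path

def patch_preprocessor_config(path: str) -> bool:
    """Apply the upstream fix: Qwen2_5_VLImageProcessor -> Qwen2VLImageProcessor.

    Returns True if the file was changed, False if it was already up to date.
    """
    p = Path(path)
    cfg = json.loads(p.read_text())
    if cfg.get("image_processor_type") == "Qwen2_5_VLImageProcessor":
        cfg["image_processor_type"] = "Qwen2VLImageProcessor"
        p.write_text(json.dumps(cfg, indent=2))
        return True
    return False

# Example invocation (path is an assumption, matching the mount above):
# patch_preprocessor_config("/root/model/Qwen2.5-VL-7B-Instruct/preprocessor_config.json")
```

Re-downloading the updated model from the Hub achieves the same thing, as the comment above notes.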
First, I updated `transformers` to v4.49.0.dev0:

```dockerfile
FROM llms/llm-api:vllm-0.7.2
COPY transformers.zip .
RUN pip install transformers.zip
```

and then hit: `RuntimeError: Failed to import transformers.models.qwen2_5_vl.modeling_qwen2_5_vl because of the following error (look up to see its traceback): module 'cv2.dnn' has no attribute 'DictValue'`. So I updated the Dockerfile:

```dockerfile
FROM llms/llm-api:vllm-0.7.2
COPY transformers.zip .
# RUN pip install opencv-python --upgrade
RUN pip install opencv-fixer==0.2.5
# execute the fixer
RUN python -c "from opencv_fixer import AutoFix; AutoFix()"
RUN pip install transformers.zip
```

But now: `ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details.`
I'm facing the same issue now.
Set the argument `--model-impl vllm` to resolve it.
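Applied to the `docker run` command from the top of this thread, that would look roughly like the following. This is a sketch: `--model-impl vllm` forces vLLM's native model implementation instead of the Transformers fallback, and the image tag must be a vLLM version that accepts the flag.

```shell
docker run -it --gpus all --ipc=host \
  -v /data/modelsfiles/:/root/model \
  -p 8444:8000 \
  vllm/vllm-openai:v0.7.2 \
  --model /root/model/Qwen2.5-VL-7B-Instruct \
  --served-model-name Qwen2.5-VL-7B-Instruct \
  --model-impl vllm
```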
Hmm... I also had this problem with `transformers` 4.49.0 when building with old code.
I solved it yesterday by pulling the latest main branch code and building from scratch with `vllm/Dockerfile`. Maybe something changed since the last release, but I haven't found the corresponding commit yet.
> I found a new bug: `RuntimeError: Failed to import transformers.models.qwen2_5_vl.modeling_qwen2_5_vl because of the following error (look up to see its traceback): module 'cv2.dnn' has no attribute 'DictValue'`
I found that with `vllm==0.6.5` the cv2 import is simply skipped: `Warning: Your installation of OpenCV appears to be broken: module 'cv2.dnn' has no attribute 'DictValue'. Please follow the instructions at https://github.com/opencv/opencv-python/issues/884 to correct your environment. The import of cv2 has been skipped.`
I have solved this problem, but now I'm hitting this one: https://github.com/vllm-project/vllm/issues/12988
Me too.
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
@EvanSong77 please share your solution
If you are using an older version, please refer to this solution: https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct-AWQ/discussions/8. I recommend using a newer version; I am currently on v0.8.3 and it works fine.
This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!