Qwen-VL
[BUG] RuntimeError: GET was unable to find an engine to execute this computation
Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
Is there an existing answer for this in FAQ?
- [X] I have searched FAQ
Current Behavior
When running `python web_demo_mm.py`, text-only chat works fine, but the error below is raised as soon as an image is provided.
```
User: 分析一下商品的内容
Traceback (most recent call last):
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/gradio/queueing.py", line 501, in call_prediction
    output = await route_utils.call_process_api(
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/gradio/route_utils.py", line 258, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/gradio/blocks.py", line 1684, in process_api
    result = await self.call_function(
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/gradio/blocks.py", line 1262, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/gradio/utils.py", line 574, in async_iteration
    return await iterator.__anext__()
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/gradio/utils.py", line 567, in __anext__
    return await anyio.to_thread.run_sync(
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/gradio/utils.py", line 550, in run_sync_iterator_async
    return next(iterator)
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/gradio/utils.py", line 733, in gen_wrapper
    response = next(iterator)
  File "/root/wangshiqi/Qwen-VL/web_demo_mm.py", line 130, in predict
    for response in model.chat_stream(tokenizer, message, history=history):
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-VL-Chat/modeling_qwen.py", line 1021, in stream_generator
    for token in self.generate_stream(
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/transformers_stream_generator/main.py", line 931, in sample_stream
    outputs = self(
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-VL-Chat/modeling_qwen.py", line 856, in forward
    transformer_outputs = self.transformer(
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-VL-Chat/modeling_qwen.py", line 565, in forward
    images = self.visual.encode(images)
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-VL-Chat/visual.py", line 426, in encode
    return self(images)
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/Qwen-VL-Chat/visual.py", line 398, in forward
    x = self.conv1(x)  # shape = [*, width, grid, grid]
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/anaconda3/envs/aaa/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: GET was unable to find an engine to execute this computation
```
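For triage, the failure can likely be isolated from the Gradio demo: the trace ends in the patch-embedding convolution of the vision encoder (`self.conv1` in `visual.py`). Below is a hypothetical minimal repro sketch; the 448×448 input, 14-pixel patch size, and channel width 1664 are assumptions about the encoder, and fp16 is used only when a GPU is present:

```python
import torch

# Hypothetical standalone version of the conv call where the trace ends.
# Sizes are assumptions about Qwen-VL's ViT patch embedding.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

conv1 = torch.nn.Conv2d(3, 1664, kernel_size=14, stride=14, bias=False).to(
    device=device, dtype=dtype
)
image = torch.randn(1, 3, 448, 448, device=device, dtype=dtype)

# On affected setups this is the F.conv2d call that raises the cuDNN error.
patches = conv1(image)
print(tuple(patches.shape))  # (1, 1664, 32, 32)
```

If this snippet alone reproduces the error on a GPU, the problem is in the PyTorch/cuDNN installation rather than in the Qwen-VL code.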
Expected Behavior
No response
Steps To Reproduce
No response
Environment
- OS: Linux
- Python: 3.10
- Transformers:
- PyTorch: 2.2.0
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.1
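When filing reports like this, the versions above can be collected in one go. A small helper sketch (not part of the repo, names are my own):

```python
import torch

def cuda_env_report():
    """Collect version info relevant to cuDNN engine-selection failures."""
    report = {
        "torch": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
        "cuda": torch.version.cuda,                # None on CPU-only builds
        "cudnn_available": torch.backends.cudnn.is_available(),
        "cudnn": torch.backends.cudnn.version(),   # None if cuDNN is missing
    }
    if torch.cuda.is_available():
        report["device"] = torch.cuda.get_device_name(0)
        report["capability"] = torch.cuda.get_device_capability(0)
    return report

if __name__ == "__main__":
    for key, value in cuda_env_report().items():
        print(f"{key}: {value}")
```

A mismatch between the cuDNN version PyTorch was built against and the one found at runtime is a common cause of "unable to find an engine" errors.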
Anything else?
No response
```
    for token in self.generate_stream(
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/usr/local/lib/python3.8/dist-packages/transformers_stream_generator/main.py", line 931, in sample_stream
    outputs = self(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/qwen-vl-model/modeling_qwen.py", line 856, in forward
    transformer_outputs = self.transformer(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/qwen-vl-model/modeling_qwen.py", line 565, in forward
    images = self.visual.encode(images)
  File "/root/.cache/huggingface/modules/transformers_modules/qwen-vl-model/visual.py", line 426, in encode
    return self(images)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/qwen-vl-model/visual.py", line 398, in forward
    x = self.conv1(x)  # shape = [*, width, grid, grid]
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDNN_BACKEND_CONVOLUTION_DESCRIPTOR: SetAttribute CUDNN_ATTR_CONVOLUTION_CONV_MODE Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED
```
Same here.
How can this be solved? I really love how impressive VL is. When running `python web_demo_mm.py`, text chat works, but the error appears as soon as an image is uploaded. @shiqwang Very strange; I have tried every possibility I can think of and it still happens.
Solved. It may have been caused by compiling llama.cpp in that environment. I set up a fresh deepo-pytorch-py39-cu120 image and went through the setup again with the same package versions, and the problem was gone.
The quantized model uses about 10 GB of VRAM.
I hit a similar error when running inference after fine-tuning. Merging the weights lets inference work, but fine-tuning again on top of the already-merged weights raises the same error, and I don't know how to resolve it. My environment: Python 3.10, torch 2.2.1+cu118. I also tested the environment, and CUDA, torch, and cuDNN all check out. Is there any solution for this at the moment?
Add this to your code:
torch.backends.cudnn.enabled = False
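A minimal sketch of this workaround. Note that it is a trade-off rather than a fix: with cuDNN disabled, convolutions fall back to PyTorch's native GPU kernels, which are typically slower. The flag must be set before the model runs, e.g. near the top of `web_demo_mm.py` (the placement is my suggestion, not from the repo):

```python
import torch
import torch.nn as nn

# Workaround: bypass cuDNN entirely so conv layers use PyTorch's
# (slower) fallback kernels instead of the failing cuDNN engine.
torch.backends.cudnn.enabled = False

# Sanity check that convolutions still execute without cuDNN.
conv = nn.Conv2d(3, 8, kernel_size=3)
x = torch.randn(1, 3, 32, 32)
y = conv(x)
print(tuple(y.shape))  # (1, 8, 30, 30)
```

If performance matters, reinstalling a PyTorch build whose bundled cuDNN matches your CUDA driver is usually the better long-term fix.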