[Bug] lmdeploy 0.7.2-cu12 inference error while serving MiniCPM-V-2.6
Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
Describe the bug
Inference works fine in version 0.6.3, but fails with an error in 0.7.2. Other VLMs such as Qwen2-VL-7B work fine.
How to start:
CUDA_VISIBLE_DEVICES=7 lmdeploy serve api_server /space/llms/mllm/MiniCPM-V-2_6 --model-name MiniCPM-V-2_6 --server-port 5000
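As a side note, the same model and image can also be exercised through lmdeploy's offline pipeline API, which runs the same vision preprocessing without the api_server in between. The sketch below reuses the model path and image URL from this report and is only meant to help check whether the failure is server-specific:

```python
from lmdeploy import pipeline
from lmdeploy.vl import load_image

# Offline check (a sketch): load the same model and image as the server command
# and the curl reproduction below, so the vision preprocessing runs directly.
pipe = pipeline('/space/llms/mllm/MiniCPM-V-2_6')
image = load_image('http://gips3.baidu.com/it/u=3886271102,3123389489&fm=3028&app=3028&f=JPEG&fmt=auto?w=1280&h=960')
# "这是什么?" means "What is this?"
response = pipe(('这是什么?', image))
print(response.text)
```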
Reproduction
curl --location 'http://10.1.252.118:5000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
"model": "MiniCPM-V-2_6",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "这是什么?"
},
{
"type": "image_url",
"image_url": {
"url": "http://gips3.baidu.com/it/u=3886271102,3123389489&fm=3028&app=3028&f=JPEG&fmt=auto?w=1280&h=960"
}
}
]
}
],
"stream": false
}'
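The same request can also be sent from Python; below is a minimal equivalent of the curl command above using the requests library (host, port, and payload copied verbatim):

```python
import requests

# Equivalent of the curl reproduction above, sent with the requests library.
url = 'http://10.1.252.118:5000/v1/chat/completions'
payload = {
    'model': 'MiniCPM-V-2_6',
    'messages': [{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': '这是什么?'},
            {'type': 'image_url', 'image_url': {
                'url': 'http://gips3.baidu.com/it/u=3886271102,3123389489&fm=3028&app=3028&f=JPEG&fmt=auto?w=1280&h=960'
            }},
        ],
    }],
    'stream': False,
}
resp = requests.post(url, json=payload, timeout=120)
print(resp.status_code)  # 500 Internal Server Error on 0.7.2
print(resp.text)
```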
Environment
/opt/py3/lib/python3.10/site-packages/torch/cuda/__init__.py:129: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
sys.platform: linux
Python: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0]
CUDA available: False
MUSA available: False
numpy_random_seed: 2147483648
GCC: x86_64-linux-gnu-gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.5.1+cu121
PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX512
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.5.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,
TorchVision: 0.20.1+cu121
LMDeploy: 0.7.2+6f1277e
transformers: 4.49.0
gradio: 5.22.0
fastapi: 0.115.11
pydantic: 2.10.6
triton: 3.1.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PXB PXB PXB SYS SYS SYS SYS 0-35,72-107 0 N/A
GPU1 PXB X PXB PXB SYS SYS SYS SYS 0-35,72-107 0 N/A
GPU2 PXB PXB X PIX SYS SYS SYS SYS 0-35,72-107 0 N/A
GPU3 PXB PXB PIX X SYS SYS SYS SYS 0-35,72-107 0 N/A
GPU4 SYS SYS SYS SYS X PXB PXB PXB 36-71,108-143 1 N/A
GPU5 SYS SYS SYS SYS PXB X PXB PXB 36-71,108-143 1 N/A
GPU6 SYS SYS SYS SYS PXB PXB X PIX 36-71,108-143 1 N/A
GPU7 SYS SYS SYS SYS PXB PXB PIX X 36-71,108-143 1 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
Error traceback
INFO: 10.11.26.110:57484 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/opt/py3/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/opt/py3/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
File "/opt/py3/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/opt/py3/lib/python3.10/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/opt/py3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/opt/py3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/opt/py3/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
await self.app(scope, receive, send)
File "/opt/py3/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/opt/py3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/opt/py3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/opt/py3/lib/python3.10/site-packages/starlette/routing.py", line 714, in __call__
await self.middleware_stack(scope, receive, send)
File "/opt/py3/lib/python3.10/site-packages/starlette/routing.py", line 734, in app
await route.handle(scope, receive, send)
File "/opt/py3/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/opt/py3/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/opt/py3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/opt/py3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/opt/py3/lib/python3.10/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/opt/py3/lib/python3.10/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/opt/py3/lib/python3.10/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
File "/opt/lmdeploy/lmdeploy/serve/openai/api_server.py", line 497, in chat_completions_v1
async for res in result_generator:
File "/opt/lmdeploy/lmdeploy/serve/async_engine.py", line 663, in generate
prompt_input = await self._get_prompt_input(prompt,
File "/opt/lmdeploy/lmdeploy/serve/vl_async_engine.py", line 76, in _get_prompt_input
results = await self.vl_encoder.preprocess(messages)
File "/opt/lmdeploy/lmdeploy/vl/engine.py", line 48, in preprocess
outputs = await future
ValueError: operands could not be broadcast together with shapes (560,364,3) (3,3)
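For what it's worth, the final ValueError is a plain numpy broadcasting failure: an HxWx3 image array cannot be combined element-wise with a (3, 3) array. A minimal sketch of the mismatch is below; the guess that the (3, 3) operand is some per-channel constant in the preprocessor is an assumption, not something taken from the traceback:

```python
import numpy as np

img = np.zeros((560, 364, 3), dtype=np.float32)  # image shape from the error
bad = np.zeros((3, 3), dtype=np.float32)         # second operand from the error
ok = np.zeros((3,), dtype=np.float32)            # a shape that would broadcast

_ = img - ok         # fine: (3,) broadcasts over the channel axis
try:
    _ = img - bad    # raises the same ValueError as in the traceback
except ValueError as e:
    print(e)  # operands could not be broadcast together with shapes (560,364,3) (3,3)
```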
The title says the inference error occurs while serving MiniCPM-V-2.6. However, in the content, the model is cogvlm2-llama3-chinese-chat-19B. So, which one is correct?
Sorry, I will re-paste the error.
> The title says the inference error occurs while serving MiniCPM-V-2.6. However, in the content, the model is cogvlm2-llama3-chinese-chat-19B. So, which one is correct?
I have edited the content of the start command.
I'm having the same issue on a single 3090.