paddleocr-vl-api container error: "RuntimeError: Exception from the 'vlm' worker: Connection error."
🔎 Search before asking
- [x] I have searched the PaddleOCR Docs and found no similar bug report.
- [x] I have searched the PaddleOCR Issues and found no similar bug report.
- [x] I have searched the PaddleOCR Discussions and found no similar bug report.
🐛 Bug (Description)
1. Image information
aigc@aigc-SYS-4028GR-TR2-1-EC028:/raid/aigc/ocr/paddleocr_vl$ docker image ls | grep paddle
ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddleocr-vl                  latest-offline   0843f08c7c70   29 hours ago   10.6GB
ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddleocr-genai-vllm-server   latest-offline   4de41c8b4b6f   7 days ago     14.7GB
2. Deploying paddleocr-genai-vllm-server
cat /raid/aigc/ocr/paddleocr_vl/vllm_config.yaml
Note: the file contents are as follows:
gpu-memory-utilization: 0.65
max-num-seqs: 128
docker run \
  -it \
  --rm \
  --gpus all \
  --network host \
  --name paddleocr-genai-vllm-server \
  -v /raid/aigc/ocr/paddleocr_vl/vllm_config.yaml:/home/paddleocr/pipeline_config.yaml \
  ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddleocr-genai-vllm-server:latest-offline \
  paddleocr genai_server --model_name PaddleOCR-VL-0.9B --host 0.0.0.0 --port 8118 --backend vllm --backend_config /home/paddleocr/pipeline_config.yaml
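Before wiring the API container to this server, a quick reachability check from the host can rule out basic networking problems (a minimal sketch; it assumes the genai server exposes the OpenAI-compatible /v1/models endpoint on the port configured above):

```bash
# The server was started with --host 0.0.0.0 --port 8118, so it should answer on the host IP.
curl -s http://10.0.12.252:8118/v1/models
```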
3. Deploying paddleocr-vl-api
cat PaddleOCR-VL.yaml
Note: the file contents are as follows:
pipeline_name: PaddleOCR-VL
batch_size: 64
use_queues: True
use_doc_preprocessor: False
use_layout_detection: True
use_chart_recognition: False
format_block_content: False
SubModules:
  LayoutDetection:
    module_name: layout_detection
    model_name: PP-DocLayoutV2
    model_dir: null
    batch_size: 8
    threshold:
      0: 0.5 # abstract
      1: 0.5 # algorithm
      2: 0.5 # aside_text
      3: 0.5 # chart
      4: 0.5 # content
      5: 0.4 # formula
      6: 0.4 # doc_title
      7: 0.5 # figure_title
      8: 0.5 # footer
      9: 0.5 # footer
      10: 0.5 # footnote
      11: 0.5 # formula_number
      12: 0.5 # header
      13: 0.5 # header
      14: 0.5 # image
      15: 0.4 # formula
      16: 0.5 # number
      17: 0.4 # paragraph_title
      18: 0.5 # reference
      19: 0.5 # reference_content
      20: 0.45 # seal
      21: 0.5 # table
      22: 0.4 # text
      23: 0.4 # text
      24: 0.5 # vision_footnote
    layout_nms: True
    layout_unclip_ratio: [1.0, 1.0]
    layout_merge_bboxes_mode:
      0: "union" # abstract
      1: "union" # algorithm
      2: "union" # aside_text
      3: "large" # chart
      4: "union" # content
      5: "large" # display_formula
      6: "large" # doc_title
      7: "union" # figure_title
      8: "union" # footer
      9: "union" # footer
      10: "union" # footnote
      11: "union" # formula_number
      12: "union" # header
      13: "union" # header
      14: "union" # image
      15: "large" # inline_formula
      16: "union" # number
      17: "large" # paragraph_title
      18: "union" # reference
      19: "union" # reference_content
      20: "union" # seal
      21: "union" # table
      22: "union" # text
      23: "union" # text
      24: "union" # vision_footnote
  VLRecognition:
    module_name: vl_recognition
    model_name: PaddleOCR-VL-0.9B
    model_dir: null
    batch_size: 2048
    genai_config:
      backend: vllm-server
      server_url: http://10.0.12.252:8118/v1
SubPipelines:
  DocPreprocessor:
    pipeline_name: doc_preprocessor
    batch_size: 8
    use_doc_orientation_classify: True
    use_doc_unwarping: True
    SubModules:
      DocOrientationClassify:
        module_name: doc_text_orientation
        model_name: PP-LCNet_x1_0_doc_ori
        model_dir: null
        batch_size: 8
      DocUnwarping:
        module_name: image_unwarping
        model_name: UVDoc
        model_dir: null
cat compose.yaml
Note: the file contents are as follows:
services:
  paddleocr-vl-api:
    image: ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddleocr-vl:latest-offline
    container_name: paddleocr-vl-api
    ports:
      - 8080:8080
    volumes:
      - /raid/aigc/ocr/paddleocr_vl/PaddleOCR-VL.yaml:/home/paddleocr/pipeline_config.yaml
      - /raid/aigc/ocr/paddleocr_vl/vllm_config.yaml:/home/paddleocr/vllm_config.yaml
      - /raid/aigc/ocr/img:/home/paddleocr/img
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]
              capabilities: [gpu]
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"]
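To confirm which configuration the serving process in this container actually loads, the mounted file can be inspected directly (a minimal sketch; the service name and mount path come from the compose.yaml above, and it assumes PyYAML is available in the image, which it normally is for PaddleX):

```bash
# Print the VL backend URL as resolved from the mounted pipeline config.
docker compose exec paddleocr-vl-api \
  python -c "import yaml; print(yaml.safe_load(open('/home/paddleocr/pipeline_config.yaml'))['SubModules']['VLRecognition']['genai_config']['server_url'])"
```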
docker compose up
Note: the output is as follows:
[+] Running 1/1
✔ Container paddleocr-vl-api Created 0.0s
Attaching to paddleocr-vl-api
paddleocr-vl-api | Creating model: ('PP-DocLayoutV2', None)
paddleocr-vl-api | Model files already exist. Using cached files. To redownload, please delete the directory manually: `/home/paddleocr/.paddlex/official_models/PP-DocLayoutV2`.
paddleocr-vl-api | /usr/local/lib/python3.10/site-packages/paddle/utils/cpp_extension/extension_utils.py:718: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
paddleocr-vl-api | warnings.warn(warning_message)
paddleocr-vl-api | Creating model: ('PaddleOCR-VL-0.9B', None)
paddleocr-vl-api | INFO: Started server process [1]
paddleocr-vl-api | INFO: Waiting for application startup.
paddleocr-vl-api | INFO: Application startup complete.
paddleocr-vl-api | INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
paddleocr-vl-api | Unhandled exception
paddleocr-vl-api | Traceback (most recent call last):
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
paddleocr-vl-api | await self.app(scope, receive, _send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 63, in __call__
paddleocr-vl-api | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
paddleocr-vl-api | raise exc
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
paddleocr-vl-api | await app(scope, receive, sender)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
paddleocr-vl-api | await self.app(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 716, in __call__
paddleocr-vl-api | await self.middleware_stack(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 736, in app
paddleocr-vl-api | await route.handle(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 290, in handle
paddleocr-vl-api | await self.app(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 125, in app
paddleocr-vl-api | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
paddleocr-vl-api | raise exc
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
paddleocr-vl-api | await app(scope, receive, sender)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 111, in app
paddleocr-vl-api | response = await f(request)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 391, in app
paddleocr-vl-api | raw_response = await run_endpoint_function(
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 290, in run_endpoint_function
paddleocr-vl-api | return await dependant.call(**values)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/serving/basic_serving/_pipeline_apps/paddleocr_vl.py", line 54, in _infer
paddleocr-vl-api | result = await pipeline.infer(
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/serving/basic_serving/_app.py", line 104, in infer
paddleocr-vl-api | return await self.call(_infer, *args, **kwargs)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/serving/basic_serving/_app.py", line 111, in call
paddleocr-vl-api | return await fut
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/serving/basic_serving/_app.py", line 126, in _worker
paddleocr-vl-api | result = func(*args, **kwargs)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/serving/basic_serving/_app.py", line 95, in _infer
paddleocr-vl-api | for item in it:
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/pipelines/_parallel.py", line 129, in predict
paddleocr-vl-api | yield from self._pipeline.predict(
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/pipelines/paddleocr_vl/pipeline.py", line 673, in predict
paddleocr-vl-api | raise RuntimeError(
paddleocr-vl-api | RuntimeError: Exception from the 'vlm' worker: Connection error.
paddleocr-vl-api | INFO: 1.1.1.1:6004 - "POST /layout-parsing HTTP/1.1" 500 Internal Server Error
paddleocr-vl-api | ERROR: Exception in ASGI application
paddleocr-vl-api | Traceback (most recent call last):
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
paddleocr-vl-api | result = await app( # type: ignore[func-returns-value]
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
paddleocr-vl-api | return await self.app(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 1134, in __call__
paddleocr-vl-api | await super().__call__(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 113, in __call__
paddleocr-vl-api | await self.middleware_stack(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
paddleocr-vl-api | raise exc
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
paddleocr-vl-api | await self.app(scope, receive, _send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 63, in __call__
paddleocr-vl-api | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
paddleocr-vl-api | raise exc
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
paddleocr-vl-api | await app(scope, receive, sender)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
paddleocr-vl-api | await self.app(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 716, in __call__
paddleocr-vl-api | await self.middleware_stack(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 736, in app
paddleocr-vl-api | await route.handle(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 290, in handle
paddleocr-vl-api | await self.app(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 125, in app
paddleocr-vl-api | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
paddleocr-vl-api | raise exc
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
paddleocr-vl-api | await app(scope, receive, sender)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 111, in app
paddleocr-vl-api | response = await f(request)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 391, in app
paddleocr-vl-api | raw_response = await run_endpoint_function(
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 290, in run_endpoint_function
paddleocr-vl-api | return await dependant.call(**values)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/serving/basic_serving/_pipeline_apps/paddleocr_vl.py", line 54, in _infer
paddleocr-vl-api | result = await pipeline.infer(
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/serving/basic_serving/_app.py", line 104, in infer
paddleocr-vl-api | return await self.call(_infer, *args, **kwargs)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/serving/basic_serving/_app.py", line 111, in call
paddleocr-vl-api | return await fut
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/serving/basic_serving/_app.py", line 126, in _worker
paddleocr-vl-api | result = func(*args, **kwargs)
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/serving/basic_serving/_app.py", line 95, in _infer
paddleocr-vl-api | for item in it:
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/pipelines/_parallel.py", line 129, in predict
paddleocr-vl-api | yield from self._pipeline.predict(
paddleocr-vl-api | File "/usr/local/lib/python3.10/site-packages/paddlex/inference/pipelines/paddleocr_vl/pipeline.py", line 673, in predict
paddleocr-vl-api | raise RuntimeError(
paddleocr-vl-api | RuntimeError: Exception from the 'vlm' worker: Connection error.
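Since the failure above is a connection error raised by the 'vlm' worker, it is worth verifying that the genai server is reachable from inside the API container (a minimal sketch; it assumes curl is present in the image, which the healthcheck above suggests, and that the server exposes the OpenAI-compatible /v1/models endpoint):

```bash
# Try the exact URL the pipeline config points at, from inside the API container.
docker compose exec paddleocr-vl-api curl -sv http://10.0.12.252:8118/v1/models
```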
Additional information: calling the POST /layout-parsing endpoint produced the error above.
Request body:
{
"file": "iVBORw0KGgoAAAANSUhEUgAAANoAAAAfCAYAAACBONPcAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAXSSURBVHhe7ZiLjeM4DIavgXQTYPsJMP0MpoXrYYAUs0A68YkUJVN8WXY8HuDADxBuE8sUH/+v7O0/S5IkP04aLUkuII2WJBeQRkuSC0ijJckFpNGS5ALSaElyAWm0JLmANFqSXMCPG+35cV++/tIHyfdjud0ey5M+Jv8/cv6VeaP9/VruN9Y0bFLQROL1eV9uH14rX8vXn9vy+KaPO8C4E+d3ZP6XUuu8f77oMyd6JsAabr8rTpi7OU97lqfM3zDk88N4D/rz56tEHala8Xr2XB4X6GLaaJgsKwIKnRLHViHu4AK64MqafFfm34Zca3AGjgOmc9i6fz5xv/UMlhZOZKboGeWtYr+WV+mnfIZLzEg9n13UVzxj6DHM069Rf3/C/A2j6YuT+lhijZ2U59d9Zs1t9R7WWs094dKmnjSaEAMXurO4cLrIHeFaSw+sQcXjcLyhSywRODWZQ7fet4B9lqjEWQPRs/JUCX1FPut9ps+7wPq1QMzzYY7qHL9Hb8/fMlqB5+Zd/Ov3NT/dy+giiJ45OH2cM5p42fzZ7lgNh4T14fuhZvEh4xA2mhEIYxyOY1yreWZDvTqtsxrRs/LUEjohn3VB0+dd7DGamXP9ztbFsfnj2cKAUN/X5i816QHmTrnXWCWHSbPX2lajTff2HaPhX0HaIVK0MjAWsr+pAJ7jiKoWzfLg4Jme2aoArLjxeSum2KBumYvsTYdyEMMM1yAQ65kWuy0G6pu1ojkSZu0A1Dp8HxltDm8etQeepvxzMR6rdy632i+5V8byl53nttFwACUADUX9K5IYEG8WT64n7hrRLhBpObAhYGwuFIxrvE/fWwOsMe41tlirgNdbbcAwlStKEoP3qxURCl32sfUJVn/H6avMf6/RFLpP58y/XVLWO8C2wbkmKxSzfcd6YZv6+PwaG0ZrRZYlRNVxBrSiG1GLGb9TxiGsvYC9vw5rbSo1qJgJv8MhUz1sbQ5p2F9r1QLUda5Ez2JMoWPPvV9wye8ZrfLe/IfLo8V35tgXj+Mam53Ze1F7pQ3lXQLzhEbDhpQEnvTfcSh0MBuQ3SxHZNQsLAr/rIfkNr8QPauDfCz/kkhcsQRDQCjHvofV2sTirbHek41G8axzYY1Cuchobi+Pzx+A8/GihNhmjlFvrdq1mTZrJL3LPnvLyiU0WvtrIibChjJ85sVTQuNBQSNYAfoWiYmM1nh+1LzcRuL59oABOONRYlhG87GGGxujLl9oai/UYuYCZ8s41xjN3/fO/Kmez3UGcM6svnTv1t6gfijf2Rq3cXpdmPrHEEykD0UEEwPSBggaTbdaXVsCHpkxWmNLBKHJ+U09YzTcY4ndNhJivlPZkzsXz8r5RnsV4U+ZGTk+/342n4EiiG+AMfuZwTJ7voU/591GUwKXA8LPvHC7ERin7KtCqXu8W93iHKNNxLGMNgiEr5L/t2EaR8QdfL7XaAX+nivGagIz391Gq7HkxdRmaZ9/fP79H954bW7vxZo1So+n+7/WNbGgl8Ec9xkNA4mmGQPCBHuhY6MxFiYnE2rN1kOx2DQIY1qsRK8XPvAhO2Ic2IpnYbzT6LkPAlv3+v1sRL82DKe2dn49R59Rv4f3aH6qzhPmz2egGONbrGeKfUNc+xLZRaCPA391FGyIr98KTCxRU0aT+pxmtIKMNeQwCByWXyuwCq+xLQTPaL13sJz8+Z63ROKIucc3zq/PeN7068n2Du8fnb+TW8XvbzOY7gvl6VwKs7qS+D55nmC0oAnrIN4rwAJjT8bbMtrYePELwOtzLpUuJlp8sGHvGo7RADd3Em1/hjHgfE+QEpqJk3dF9KLRzrLqomcQ67T5HzSaptUc98g0u6T3e1xmHmXvQaORMGl5N6n8n2YpyHBtFHqu0QAmvKOCUECfbAMN4NBmjcYvBQNpwJPBfDZqgj2gidPmf5rRfgfox5TRkiR5jzRaklxAGi1JLiCNliQXkEZLkgtIoyXJj7Ms/wEi1q0xq1N5LwAAAABJRU5ErkJggg==",
"fileType": 1
}
Additional note: inside the paddleocr-vl-api container, running the following command (paddleocr doc_parser ...) returns recognition results normally. (In other words, paddleocr-genai-vllm-server is fine; the problem is with paddleocr-vl-api.)
paddleocr@e9435bb1748e:~$ paddleocr doc_parser --input /home/paddleocr/img/ocr_small.png --vl_rec_backend vllm-server --vl_rec_server_url http://10.0.12.252:8118/v1
/usr/local/lib/python3.10/site-packages/paddle/utils/cpp_extension/extension_utils.py:718: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
warnings.warn(warning_message)
Creating model: ('PP-DocLayoutV2', None)
Model files already exist. Using cached files. To redownload, please delete the directory manually: `/home/paddleocr/.paddlex/official_models/PP-DocLayoutV2`.
Creating model: ('PaddleOCR-VL-0.9B', None)
[2025/11/11 06:31:54] paddleocr INFO: Processed item 0 in 56448.272705078125 ms
{'res': {'input_path': '/home/paddleocr/img/ocr_small.png', 'page_index': None, 'model_settings': {'use_doc_preprocessor': False, 'use_layout_detection': True, 'use_chart_recognition': False, 'format_block_content': False}, 'layout_det_res': {'input_path': None, 'page_index': None, 'boxes': [{'cls_id': 17, 'label': 'paragraph_title', 'score': 0.9313395619392395, 'coordinate': [np.float32(6.630951), np.float32(8.1459465), np.float32(214.58302), np.float32(25.981472)]}]}, 'parsing_res_list': [{'block_label': 'paragraph_title', 'block_content': '你这个代码片段可以这样续写', 'block_bbox': [6, 8, 214, 25]}]}}
🏃♂️ Environment
OS: Ubuntu-22.04.1
GPU: NVIDIA GeForce RTX 2080 Ti
Driver: 580.105.08
CUDA: 13.0
🌰 Minimal Reproducible Example
import os
import base64
import requests
import pathlib

API_URL = "http://10.0.12.252:8080/layout-parsing"
image_path = "/raid/aigc/ocr/img/ocr_small.png"

with open(image_path, "rb") as file:
    image_bytes = file.read()
image_data = base64.b64encode(image_bytes).decode("ascii")

payload = {
    "file": image_data,  # Base64-encoded file content, or a file URL
    "fileType": 1,  # File type; 1 means an image file
}
response = requests.post(API_URL, json=payload)
assert response.status_code == 200
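The same request can be reproduced with curl, which also prints the response body so the detail behind the 500 error becomes visible (a sketch; `base64 -w0` assumes GNU coreutils, and very large images may exceed the shell's argument-length limit):

```bash
curl -s -X POST http://10.0.12.252:8080/layout-parsing \
  -H "Content-Type: application/json" \
  -d "{\"file\": \"$(base64 -w0 /raid/aigc/ocr/img/ocr_small.png)\", \"fileType\": 1}"
```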
I ran into the same error.
docker pull ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddleocr-vl:latest-offline
Just pull the latest image, remove your docker compose containers, and bring them up again.
> docker pull ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddleocr-vl:latest-offline
> Just pull the latest image, remove your docker compose containers, and bring them up again.

But my image is already the latest one.
@wssunjiale I see that you started the vllm-server container manually and then started the paddleocr-vl-api container with docker compose, so you need to add the following option to compose.yaml for the container to communicate with services outside of it:
network: host
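For reference, a minimal sketch of what that change could look like (note that the Compose key for host networking is `network_mode`; with host networking the `ports` mapping is ignored, and the image and volume paths below are the ones from the compose.yaml above):

```yaml
services:
  paddleocr-vl-api:
    image: ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddleocr-vl:latest-offline
    container_name: paddleocr-vl-api
    network_mode: host   # share the host network stack, like `--network host` on the vllm-server container
    volumes:
      - /raid/aigc/ocr/paddleocr_vl/PaddleOCR-VL.yaml:/home/paddleocr/pipeline_config.yaml
      - /raid/aigc/ocr/img:/home/paddleocr/img
```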
> @wssunjiale I see that you started the vllm-server container manually and then started the paddleocr-vl-api container with docker compose, so you need to add the following option to compose.yaml for the container to communicate with services outside of it:
> network: host

But from inside the paddleocr-vl-api container started by docker compose, I can get a correct response via the command line???
Any progress? How did you solve it? I just pulled it and hit the same error. What a headache.
> @wssunjiale I see that you started the vllm-server container manually and then started the paddleocr-vl-api container with docker compose, so you need to add the following option to compose.yaml for the container to communicate with services outside of it:
> network: host
>
> But from inside the paddleocr-vl-api container started by docker compose, I can get a correct response via the command line???

What if you enter the paddleocr-vl-api container manually and start the serving process there (instead of invoking the CLI)?
> @wssunjiale I see that you started the vllm-server container manually and then started the paddleocr-vl-api container with docker compose, so you need to add the following option to compose.yaml for the container to communicate with services outside of it:
> network: host
>
> But from inside the paddleocr-vl-api container started by docker compose, I can get a correct response via the command line???
>
> What if you enter the paddleocr-vl-api container manually and start the serving process there (instead of invoking the CLI)?

Running `paddlex --serve --pipeline /home/paddleocr/pipeline_config_vllm.yaml --port 8081` inside the container and then calling it from outside produces the same error.
If `paddleocr doc_parser` succeeds but `paddlex --serve` does not, the settings in the configuration file /home/paddleocr/pipeline_config_vllm.yaml are likely at fault. Please check inside the container whether the server_url in /home/paddleocr/pipeline_config_vllm.yaml is set correctly.
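A minimal way to check this from inside the container (a sketch; the config path is the one used in the `paddlex --serve` command above, and the /v1/models probe assumes an OpenAI-compatible endpoint):

```bash
# 1. See what server_url the serving config actually contains.
grep -n "server_url" /home/paddleocr/pipeline_config_vllm.yaml
# 2. Verify that this URL is reachable from inside the container.
curl -sv http://10.0.12.252:8118/v1/models
```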
