
[Question]: How to use Mineru API with backend mode "vlm-vllm-async-engine"

Open · leecj opened this issue 1 month ago • 2 comments

Self Checks

  • [x] I have searched for existing issues, including closed ones.
  • [x] I confirm that I am using English to submit this report (Language Policy).
  • [x] Non-English title submissions will be closed directly (Language Policy).
  • [x] Please do not modify this template :) and fill in all the required fields.

Describe your problem

I'm trying to integrate the Mineru API into my project, with the backend mode configured as vlm-vllm-async-engine. However, I couldn't find detailed documentation or examples for this specific mode. For reference, I deployed the Mineru service using the following Docker Compose configuration:

  mineru-api:
    image: mineru-vllm:latest
    container_name: mineru-api
    restart: always
    profiles: ["api"]
    ports:
      - 8000:8000
    environment:
      MINERU_MODEL_SOURCE: local
    entrypoint: mineru-api
    command:
      --host 0.0.0.0
      --port 8000
      --gpu-memory-utilization 0.8
      # parameters for vllm-engine
      # --data-parallel-size 2  # If using multiple GPUs, increase throughput using vllm's multi-GPU parallel mode
      # --gpu-memory-utilization 0.5  # If running on a single GPU and you hit VRAM shortages, reduce the KV cache size with this parameter; if VRAM issues persist, lower it further to `0.4` or below.
    ulimits:
      memlock: -1
      stack: 67108864
    ipc: host
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: [ "7" ]
              capabilities: [ gpu ]

leecj · Nov 13 '25 09:11

Hi @leecj! I'm Dosu and I’m helping the ragflow team.

To use the Mineru API with the backend mode "vlm-vllm-async-engine" (named "vlm-vllm-engine" in recent documentation), set the environment variable MINERU_BACKEND=vlm-vllm-engine in your deployment. This mode runs a local vLLM engine and requires a local GPU. Your Docker Compose setup looks correct for this purpose; just make sure you set the backend explicitly:

environment:
  MINERU_MODEL_SOURCE: local
  MINERU_BACKEND: vlm-vllm-engine

Other useful environment variables include MINERU_EXECUTABLE (path to Mineru binary), MINERU_DELETE_OUTPUT (set to 0 to keep output files), and MINERU_OUTPUT_DIR (custom output directory). For multi-GPU or VRAM management, tune the command-line flags as needed (e.g., --data-parallel-size, --gpu-memory-utilization). See the official docs for more details on these options and deployment best practices:
  • Docs: Mineru backend modes and environment variables
  • Recent documentation update
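
As a concrete illustration, here is a hedged sketch of how those variables could sit alongside the ones already in your compose file (the values shown are illustrative assumptions, not documented defaults):

environment:
  MINERU_MODEL_SOURCE: local
  MINERU_BACKEND: vlm-vllm-engine
  MINERU_DELETE_OUTPUT: "0"        # 0 keeps output files, per the note above
  MINERU_OUTPUT_DIR: /data/mineru  # assumption: illustrative path; mount it as a volume
  MINERU_EXECUTABLE: mineru        # assumption: the binary is on PATH inside the container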

If you want to use a remote vLLM server instead, switch to the "vlm-http-client" backend and set MINERU_SERVER_URL to the remote server's address.
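
For that remote scenario, a minimal sketch in the same compose style (the server address is a placeholder assumption; replace it with your own):

environment:
  MINERU_BACKEND: vlm-http-client
  MINERU_SERVER_URL: http://your-vllm-host:8000  # assumption: address of the remote vLLM server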

Let me know if you need a step-by-step guide for a specific deployment scenario!


dosubot[bot] · Nov 13 '25 09:11

vlm-vllm-engine is meant for OpenAI-compatible vLLM vision/OCR services and only accepts images. It does not rasterize PDFs or run layout/table reconstruction itself. If you pass a PDF directly, you’ll get empty output or errors. To use it:

  1. Rasterize the PDF pages to images first, then call mineru with -b vlm-vllm-engine -u http://<host>:<port> (see the sketch after this comment).
  2. If you need “PDF in, structured text out” without manual rasterization, use the pipeline or vlm-http-client backends instead (they handle the PDF→image conversion internally).

MinerU acts as a thin client here: it just forwards the image to the vLLM server and returns the model’s text; any layout/table fidelity depends on the model, not on MinerU.
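
A minimal sketch of option 1, assuming poppler-utils' pdftoppm for rasterization and MinerU's -p/-o input/output flags; verify the exact flags against your MinerU version, and note the URL here simply reuses the port mapping from the compose file above:

# Rasterize each PDF page to a PNG (200 DPI is an arbitrary choice)
pdftoppm -png -r 200 input.pdf page

# Feed each page image to mineru with the backend/URL flags quoted above
for img in page-*.png; do
  mineru -p "$img" -o ./output -b vlm-vllm-engine -u http://localhost:8000
done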

xrwang8 · Nov 27 '25 14:11