[BUG: Output] use_llm OpenAI inference failed: Error code: 400
📝 Describe the Output Issue
I ran the following command, but the LLM step failed with the error below. What could be the issue?
```shell
marker_single 2022.pdf --output_dir ./ --output_format markdown --disable_image_extraction --use_llm --llm_service=marker.services.openai.OpenAIService --redo_inline_math --openai_model qwen2.5 --openai_base_url http://0.0.0.0:9997/v1 --openai_api_key None
```
📄 Input Document
Attach the PDF or input file used.
📤 Current Output
Paste the Markdown or HTML that Marker generated:
```
2025-09-03 15:28:13,219 [ERROR] marker: OpenAI inference failed: Error code: 400 - {'object': 'error', 'message': '[{\'type\': \'literal_error\', \'loc\': (\'body\', \'response_format\', \'type\'), \'msg\': "Input should be \'text\' or \'json_object\'", \'input\': \'json_schema\', \'ctx\': {\'expected\': "\'text\' or \'json_object\'"}}, {\'type\': \'extra_forbidden\', \'loc\': (\'body\', \'response_format\', \'json_schema\'), \'msg\': \'Extra inputs are not permitted\', \'input\': {\'schema\': {\'properties\': {\'comparison\': {\'title\': \'Comparison\', \'type\': \'string\'}, \'corrected_html\': {\'title\': \'Corrected Html\', \'type\': \'string\'}}, \'required\': [\'comparison\', \'corrected_html\'], \'title\': \'TableSchema\', \'type\': \'object\', \'additionalProperties\': False}, \'name\': \'TableSchema\', \'strict\': True}}]', 'type': 'BadRequestError', 'param': None, 'code': 400}
LLMTableProcessor running: 12%
```
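Reading the 400 body: the server only accepts `response_format.type` of `'text'` or `'json_object'`, but Marker's OpenAIService sends the newer `'json_schema'` structured-output format, which the server rejects as an extra input. A minimal stdlib probe (a sketch, not Marker's actual code; the URL and model name mirror the command above) can confirm whether a given OpenAI-compatible server accepts `json_schema`:

```python
# Probe whether an OpenAI-compatible server accepts the "json_schema"
# response_format that Marker's OpenAIService relies on. A 400 response
# reproduces the failure in the log above.
import json
import urllib.error
import urllib.request


def build_payload(model="qwen2.5"):
    """Build a chat request using the json_schema response_format.
    Servers that only support 'text'/'json_object' reject this with a 400."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": "Return an empty object."}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "Probe",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {},
                    "additionalProperties": False,
                },
            },
        },
    }


def probe(base_url="http://0.0.0.0:9997/v1"):
    """POST the probe payload; return the HTTP status code."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload()).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status  # 200: server supports json_schema
    except urllib.error.HTTPError as e:
        return e.code  # 400 here matches the error in this issue
```

If `probe()` returns 400, the backend (not Marker) is the component lacking structured-output support.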
✅ Expected Output
Describe or paste what you expected Marker to generate.
⚙️ Environment
Please fill in all relevant details:
- Marker version: 1.9.1
- Surya version: 0.16.1
- Python version: 3.12
- PyTorch version: 2.8.0
- Transformers version: 4.53.3
- Operating System: Ubuntu 22.04
📟 Command or Code Used
Paste the exact bash command or Python code you used to run Marker:
```shell
marker_single 2022.pdf --output_dir ./ --output_format markdown --disable_image_extraction --use_llm --llm_service=marker.services.openai.OpenAIService --redo_inline_math --openai_model qwen2.5 --openai_base_url http://0.0.0.0:9997/v1 --openai_api_key None
```
📎 Additional Context
Any other relevant info, configs, or assumptions.
Could you please provide example code for running Marker with an LLM?
```shell
CUDA_VISIBLE_DEVICES=3 nohup vllm serve /home/suny/.cache/modelscope/hub/qwen/Qwen2___5-72B-Instruct-AWQ --served-model-name qwen2.5 --host 0.0.0.0 --port 9997 --max-model-len 32768 --gpu-memory-utilization 0.75 --enforce-eager
```
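Since the error comes from the vLLM OpenAI-compatible server rejecting `response_format.type = "json_schema"`, one likely fix is upgrading vLLM: recent releases added structured-output support for the `json_schema` response format. A sketch, assuming the installed vLLM predates that support (check the vLLM release notes for your version):

```shell
# Upgrade vLLM, then relaunch the same serve command as above.
pip install --upgrade vllm
CUDA_VISIBLE_DEVICES=3 nohup vllm serve /home/suny/.cache/modelscope/hub/qwen/Qwen2___5-72B-Instruct-AWQ --served-model-name qwen2.5 --host 0.0.0.0 --port 9997 --max-model-len 32768 --gpu-memory-utilization 0.75 --enforce-eager
```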
I have to run this in Google Colab; could you provide full code using Groq as the LLM?
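Marker has no dedicated Groq service as far as I know, but Groq exposes an OpenAI-compatible endpoint, so the same `OpenAIService` can point at it. A Colab sketch; the model name is an assumption (pick any current Groq model), and whether that model accepts `json_schema` structured outputs must be verified against Groq's docs, otherwise the same 400 can recur:

```shell
# Colab: install Marker, then run it against Groq's OpenAI-compatible API.
pip install marker-pdf
export GROQ_API_KEY=...  # your Groq API key
marker_single 2022.pdf \
  --output_dir ./ \
  --output_format markdown \
  --use_llm \
  --llm_service=marker.services.openai.OpenAIService \
  --openai_model llama-3.3-70b-versatile \
  --openai_base_url https://api.groq.com/openai/v1 \
  --openai_api_key "$GROQ_API_KEY"
```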