
[BUG] output_json not working with custom_openai

Open WoBuGs opened this issue 8 months ago • 4 comments

Description

I have a setup with Ollama and Open-WebUI. Agents perform some tasks and then output a JSON file.

I ensure that the output is valid by using the output_json parameter in my task definition:

    @task
    def my_task(self) -> Task:
        return Task(
            config=self.tasks_config['my_task'],
            tools=[],
            output_file="outputs/task_output.json",
            # output_json points at a Pydantic model; crewAI should coerce the
            # raw LLM answer into this schema before writing output_file
            output_json=TaskOutput
        )

So far I have used Ollama as the model provider in my crews. Now I want to move everything to Open-WebUI's OpenAI-compatible API to handle users, API keys, etc.

For testing, I have the following two env files:

    MODEL=ollama/granite3.2:8b
    BASE_URL=http://<ollama url>:11434

and

    OPENAI_MODEL_NAME=custom_openai/granite3.2:8b
    OPENAI_API_BASE=https://<openwebui url>/ollama/v1
    OPENAI_API_KEY=<API key>

The crew runs for both, BUT for the second one, I get the following error at the end of the execution:

    Failed to convert text into JSON, error: Instructor does not support multiple tool calls, use List[Model] instead. Using raw output instead.

The output JSON file is not valid, and this last step hangs for a few minutes.
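
For reference, a minimal sketch of wiring the same endpoint up explicitly through crewAI's LLM class instead of the env vars above (model name, URLs and keys are the same placeholders; the agent is purely illustrative, and I have not verified that this path behaves any differently):

    from crewai import Agent, LLM

    # Explicit LLM pointing at the Open-WebUI OpenAI-compatible endpoint.
    # <openwebui url> and <API key> are the same placeholders as in the env file.
    llm = LLM(
        model="openai/granite3.2:8b",
        base_url="https://<openwebui url>/ollama/v1",
        api_key="<API key>",
    )

    # Illustrative agent, just to show where the LLM gets attached.
    writer = Agent(
        role="Report writer",
        goal="Produce a valid JSON report",
        backstory="Hypothetical agent for this bug report",
        llm=llm,
    )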

Steps to Reproduce

  1. Set up a task with the output_json parameter.
  2. Run the task using ollama as backend.
  3. Run the task using openwebui (OpenAI-compatible API) as backend.

Expected behavior

Valid JSON output and no error.

Screenshots/Code snippets

See description.

Operating System

Ubuntu 24.04

Python Version

3.12

crewAI Version

0.102.0

crewAI Tools Version

0.36.0

Virtual Environment

Venv

Evidence

See description.

Possible Solution

None

Additional context

None

WoBuGs avatar Mar 05 '25 00:03 WoBuGs

I am not sure about this. I think the issue is LiteLLM, which is called internally. Found this on Reddit: https://www.reddit.com/r/selfhosted/comments/1iof274/anyone_here_running_openwebui_and_litellm/
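
One way to check that, assuming it really is LiteLLM underneath: hit the same Open-WebUI endpoint through litellm.completion directly, outside crewAI. A minimal sketch with the placeholders from the issue (whether Open-WebUI honours response_format is itself an assumption):

    import litellm

    # Direct LiteLLM call against the Open-WebUI OpenAI-compatible endpoint,
    # bypassing crewAI entirely. Placeholders as in the issue description.
    response = litellm.completion(
        model="openai/granite3.2:8b",
        api_base="https://<openwebui url>/ollama/v1",
        api_key="<API key>",
        messages=[{"role": "user", "content": "Reply with a JSON object with a single key 'status'."}],
        response_format={"type": "json_object"},  # may or may not be honoured by the backend
    )

    print(response.choices[0].message.content)

If this already comes back malformed, the problem sits below crewAI.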

Vidit-Ostwal avatar Mar 05 '25 16:03 Vidit-Ostwal

Hey there,

This GitHub comment might provide relevant insights and potential workarounds:
https://github.com/instructor-ai/instructor/issues/1111#issuecomment-2431203489
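
For what it's worth, the error message's own suggestion ("use List[Model] instead") translates roughly to the following when calling Instructor directly against the same endpoint. This is a sketch outside crewAI (crewAI may not expose this knob through output_json), with a hypothetical Item model and the usual placeholders:

    from typing import List

    import instructor
    from openai import OpenAI
    from pydantic import BaseModel


    class Item(BaseModel):  # hypothetical schema, stands in for TaskOutput
        name: str
        value: int


    # Instructor-patched client pointed at the Open-WebUI OpenAI-compatible endpoint.
    client = instructor.from_openai(
        OpenAI(base_url="https://<openwebui url>/ollama/v1", api_key="<API key>")
    )

    # When the model emits multiple tool calls, Instructor expects a
    # List[Model] response_model rather than a single Model.
    items = client.chat.completions.create(
        model="granite3.2:8b",
        response_model=List[Item],
        messages=[{"role": "user", "content": "Give me two items as JSON."}],
    )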

Programmer-RD-AI avatar Mar 13 '25 04:03 Programmer-RD-AI

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Apr 12 '25 12:04 github-actions[bot]

@WoBuGs can you share your tasks and agents config?

lucasgomide avatar Apr 14 '25 20:04 lucasgomide

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar May 15 '25 12:05 github-actions[bot]

Sorry for the delay. I don't have this configuration anymore :/

But ultimately, I think the issue is/was more related to Open-WebUI and how it exposes an OpenAI-compatible API.

WoBuGs avatar May 18 '25 14:05 WoBuGs