
output_json bug with groq (also with openai wrapper)

Open xitex opened this issue 1 year ago • 7 comments

Hi, I created a Pydantic model with fields to fill in and attached it to my task as output_json. With OpenAI models it works well, but with Groq I get an error. This is how I create the llm (as I understand it, this goes through the OpenAI wrapper):

llm = ChatOpenAI(
    temperature=0,
    openai_api_base="https://api.groq.com/openai/v1",
    openai_api_key=os.getenv("GROQ_API_KEY"),
    model_name="mixtral-8x7b-32768",
)

I then create a task that has the Pydantic model attached; this works with OpenAI but not with Groq (Mixtral in my case).
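Roughly, the task setup looks like this (a minimal sketch; the model, its fields, and the agent are illustrative, not my exact code):

from pydantic import BaseModel
from crewai import Task

class Report(BaseModel):  # hypothetical output model
    title: str
    tags: list[str]  # note: a typing generic, which matters for the error below

task = Task(
    description="Summarize the article.",
    expected_output="A JSON report.",
    output_json=Report,  # fine with OpenAI models, raises the error below with Groq
    agent=writer,  # hypothetical agent defined elsewhere
)

The traceback: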

File "/python/3_11/venv/lib/python3.11/site-packages/crewai/crew.py", line 204, in kickoff
    result = self._run_sequential_process()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python/3_11/venv/lib/python3.11/site-packages/crewai/crew.py", line 240, in _run_sequential_process
    output = task.execute(context=task_output)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python/3_11/venv/lib/python3.11/site-packages/crewai/task.py", line 148, in execute
    result = self._execute(
             ^^^^^^^^^^^^^^
File "/python/3_11/venv/lib/python3.11/site-packages/crewai/task.py", line 163, in _execute
    exported_output = self._export_output(result)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "python3.11/site-packages/crewai/task.py", line 213, in _export_output
    model_schema = PydanticSchemaParser(model=model).get_schema()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python3.11/site-packages/crewai/utilities/pydantic_schema_parser.py", line 16, in get_schema
    return self._get_model_schema(self.model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "python3.11/site-packages/crewai/utilities/pydantic_schema_parser.py", line 21, in _get_model_schema
    field_type_str = self._get_field_type(field, depth + 1)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python3.11/site-packages/crewai/utilities/pydantic_schema_parser.py", line 37, in _get_field_type
    elif issubclass(field_type, BaseModel):
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen abc>", line 123, in __subclasscheck__
TypeError: issubclass() arg 1 must be a class
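From the traceback it looks like the parser calls issubclass() on a field annotation that is not a class: typing generics such as List[str] or Optional[...] fail that check with exactly this TypeError. A minimal sketch reproducing the error in isolation (my read of the traceback, not a confirmed diagnosis):

from typing import List
from pydantic import BaseModel

# typing generics are not classes, so issubclass() rejects them outright;
# this is the same TypeError raised inside pydantic_schema_parser.py
try:
    issubclass(List[str], BaseModel)
except TypeError as e:
    print(e)  # issubclass() arg 1 must be a class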

Thank you!!!

xitex avatar Mar 29 '24 08:03 xitex

Did you find any solution? I am facing exactly the same problem.

jiveshkalra avatar Apr 09 '24 20:04 jiveshkalra

I found a way to do it with LangChain, but the method only exists in the documentation: when it is called, it throws an exception saying it is not implemented.

xitex avatar Apr 10 '24 07:04 xitex

JSON output has been implemented in LangChain, but it is used via the invoke method. Can we count on an implementation of this method, or on a suggestion for a temporary workaround? https://python.langchain.com/docs/modules/model_io/chat/structured_output/
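For reference, the usage the linked docs describe looks like this (a sketch of the documented API only; the Joke model is just an example schema):

from langchain_groq import ChatGroq
from pydantic import BaseModel

class Joke(BaseModel):  # example output schema
    setup: str
    punchline: str

llm = ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")
structured_llm = llm.with_structured_output(Joke)  # the documented wrapper
joke = structured_llm.invoke("Tell me a joke about cats")  # should return a Joke instance

In the version I have installed, this path is what raises the not-implemented exception.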

xitex avatar Apr 15 '24 13:04 xitex

I'd be open to contributing to this.

johnsutor avatar May 03 '24 01:05 johnsutor

Could you try defining the llm like this instead? Just want to double-check whether this could indicate another bug.

pip install langchain-groq

from langchain_groq import ChatGroq
llm = ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")

joaomdmoura avatar May 03 '24 03:05 joaomdmoura

https://python.langchain.com/docs/modules/model_io/chat/structured_output/

Looks interesting, I think we could try it for sure

joaomdmoura avatar May 03 '24 03:05 joaomdmoura

Could you try defining the llm like this instead? Just want to double-check whether this could indicate another bug.

pip install langchain-groq

from langchain_groq import ChatGroq
llm = ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")

Of course! If it had worked that way I wouldn't have waited so long for an answer. It doesn't work: Groq requires a JSON parameter to be added to the request (it's in their documentation), and in the end the current implementation doesn't do this correctly. Here is the relevant part of the Groq docs:

JSON mode (beta)
JSON mode is a beta feature that guarantees all chat completions are valid JSON.

Usage:

- Set "response_format": {"type": "json_object"} in your chat completion request
- Add a description of the desired JSON structure within the system prompt (see below for example system prompts)

Recommendations for best beta results:

- Mixtral performs best at generating JSON, followed by Gemma, then Llama
- Use pretty-printed JSON instead of compact JSON
- Keep prompts concise

Beta limitations:

- Does not support streaming
- Does not support stop sequences

Error code:

Groq will return a 400 error with an error code of json_validate_failed if JSON generation fails.
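At the request level that is a single extra parameter; a sketch using the OpenAI-compatible client directly (the JSON structure described in the system prompt is only an example):

import os
from openai import OpenAI

# Groq's OpenAI-compatible endpoint; JSON mode is enabled per request
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.getenv("GROQ_API_KEY"),
)

resp = client.chat.completions.create(
    model="mixtral-8x7b-32768",
    response_format={"type": "json_object"},  # the beta JSON-mode flag from the docs
    messages=[
        # the docs ask for the desired JSON structure in the system prompt
        {"role": "system", "content": 'Reply with JSON like {"title": "...", "tags": ["..."]}.'},
        {"role": "user", "content": "Summarize Groq's JSON mode in one entry."},
    ],
)
print(resp.choices[0].message.content)  # valid JSON, or a 400 json_validate_failed error

This response_format parameter is what I mean: the integration would need to set it when a task exports its output as JSON.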

xitex avatar May 20 '24 09:05 xitex

Could you try defining the llm like this instead? Just want to double-check whether this could indicate another bug.

pip install langchain-groq

from langchain_groq import ChatGroq
llm = ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")

It works for me, here is my setup:

import os

from langchain_groq import ChatGroq
from langchain_openai import ChatOpenAI

MODEL_NAME = 'mixtral-8x7b-32768'

# llm = ChatGroq(temperature=0.9, model_name=MODEL_NAME, api_key=os.environ["GROQ_API_KEY"])

# if the previous instruction doesn't work, try this one:
os.environ["OPENAI_API_KEY"] = os.environ["GROQ_API_KEY"]
os.environ["OPENAI_API_BASE"] = 'https://api.groq.com/openai/v1'
os.environ["OPENAI_MODEL_NAME"] = MODEL_NAME  # adjust based on available models
llm = ChatOpenAI(model=MODEL_NAME, base_url=os.environ["OPENAI_API_BASE"], api_key=os.environ["OPENAI_API_KEY"])

from crewai import Agent, Task, Crew

# Define your agents with roles and goals
storyteller = Agent(
  role='Storyteller for kids',
  goal='Write a nice educational story for kids.',
  backstory='You are a successful storyteller and world-class ebook best seller.',
  verbose=True,
  allow_delegation=False,
  # You can pass an optional llm attribute specifying which model you want to use.
  # llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7),
  llm=llm
)

# Create tasks for your agents
task1 = Task(
  description="Write a short story.",
  expected_output="One paragraph of text.",
  agent=storyteller
)

# Instantiate your crew with a sequential process
crew = Crew(
  agents=[storyteller],
  tasks=[task1],
  verbose=2,  # you can set it to 1 or 2 for different logging levels
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)

Tested on Colab.

bitsnaps avatar May 26 '24 14:05 bitsnaps

It works for me, here is my setup: […]

Try to understand what the topic is before you write a response.

xitex avatar Jun 14 '24 04:06 xitex

Try to understand what the topic is before you write a response.

I apologize if my comment was off-topic or unhelpful for your specific issue; I thought it might be helpful for someone else. If you'd prefer, I'm happy to delete my comment. Thanks!

bitsnaps avatar Jul 03 '24 10:07 bitsnaps

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Aug 19 '24 12:08 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Aug 25 '24 12:08 github-actions[bot]