`output_json` bug with Groq (also with OpenAI wrapper)
Hi, I created a Pydantic model with fields to fill in and attached it to my task as `output_json`. With OpenAI models it works well, but with Groq I get an error. This is how I create the LLM (as I understand it, this goes through the OpenAI wrapper):

```python
llm = ChatOpenAI(
    temperature=0,
    openai_api_base="https://api.groq.com/openai/v1",
    openai_api_key=os.getenv("GROQ_API_KEY"),
    model_name="mixtral-8x7b-32768",
)
```

The task that carries the Pydantic model works with OpenAI but not with Groq (Mixtral in my case).
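For context, a minimal setup along these lines reproduces it (a sketch only: the model, agent, and task contents are illustrative stand-ins, assuming a crewAI version where `Task` accepts `output_json`):

```python
import os

from crewai import Agent, Crew, Task
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

# Illustrative output model; the real field names don't matter for the bug
class Report(BaseModel):
    title: str
    summary: str

# Groq reached through the OpenAI-compatible endpoint, as in the issue
llm = ChatOpenAI(
    temperature=0,
    openai_api_base="https://api.groq.com/openai/v1",
    openai_api_key=os.getenv("GROQ_API_KEY"),
    model_name="mixtral-8x7b-32768",
)

writer = Agent(role="Writer", goal="Summarize text.", backstory="A writer.", llm=llm)
task = Task(
    description="Summarize the article.",
    expected_output="A JSON report.",
    output_json=Report,  # works with OpenAI models, fails with Groq per this issue
    agent=writer,
)
result = Crew(agents=[writer], tasks=[task]).kickoff()
```

Running `crew.kickoff()` then fails with: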
File "/python/3_11/venv/lib/python3.11/site-packages/crewai/crew.py", line 204, in kickoff
result = self._run_sequential_process()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python/3_11/venv/lib/python3.11/site-packages/crewai/crew.py", line 240, in _run_sequential_process
output = task.execute(context=task_output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python/3_11/venv/lib/python3.11/site-packages/crewai/task.py", line 148, in execute
result = self._execute(
^^^^^^^^^^^^^^
File "/python/3_11/venv/lib/python3.11/site-packages/crewai/task.py", line 163, in _execute
exported_output = self._export_output(result)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "python3.11/site-packages/crewai/task.py", line 213, in _export_output
model_schema = PydanticSchemaParser(model=model).get_schema()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python3.11/site-packages/crewai/utilities/pydantic_schema_parser.py", line 16, in get_schema
return self._get_model_schema(self.model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "python3.11/site-packages/crewai/utilities/pydantic_schema_parser.py", line 21, in _get_model_schema
field_type_str = self._get_field_type(field, depth + 1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python3.11/site-packages/crewai/utilities/pydantic_schema_parser.py", line 37, in _get_field_type
elif issubclass(field_type, BaseModel):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "
Thank you!!!
Did you find any solution? I am facing exactly the same problem.
I found a way to do it with LangChain, but the method only appears in the documentation; when it is called, it throws an exception saying it is not implemented. JSON output has been implemented in LangChain and is called via the `invoke` method. Can we count on an implementation of this method, or can you suggest a temporary workaround? https://python.langchain.com/docs/modules/model_io/chat/structured_output/
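For reference, the documented usage that link describes looks roughly like this (a sketch, assuming `langchain-groq` is installed and `GROQ_API_KEY` is set; at the time of this thread the call reportedly raised the not-implemented error mentioned above):

```python
from langchain_groq import ChatGroq
from pydantic import BaseModel  # older LangChain versions may need langchain_core.pydantic_v1

# Illustrative schema for the structured output
class Story(BaseModel):
    title: str
    body: str

llm = ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")

# with_structured_output wraps the model so invoke() returns a parsed Story
structured_llm = llm.with_structured_output(Story)
story = structured_llm.invoke("Write a one-paragraph educational story for kids.")
print(story.title)
```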
I'd be open to contributing to this.
Could you try defining the LLM like this instead? Just want to double-check whether this could indicate another bug.

```
pip install langchain-groq
```

```python
from langchain_groq import ChatGroq

llm = ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")
```
> https://python.langchain.com/docs/modules/model_io/chat/structured_output/

Looks interesting, I think we could try it for sure.
> Could you try defining the LLM like this instead? Just want to double-check whether this could indicate another bug.
>
> ```
> pip install langchain-groq
> from langchain_groq import ChatGroq
> llm = ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")
> ```
Of course! If it had worked that way, I wouldn't have waited so long for an answer. It doesn't work: Groq requires a JSON parameter to be added to the request (it's in their documentation), and in the end the current implementation is not correct. Here is the relevant part of the Groq docs:

> **JSON mode (beta)**
>
> JSON mode is a beta feature that guarantees all chat completions are valid JSON.
>
> Usage:
> - Set `"response_format": {"type": "json_object"}` in your chat completion request
> - Add a description of the desired JSON structure within the system prompt (see below for example system prompts)
>
> Recommendations for best beta results:
> - Mixtral performs best at generating JSON, followed by Gemma, then Llama
> - Use pretty-printed JSON instead of compact JSON
> - Keep prompts concise
>
> Beta limitations:
> - Does not support streaming
> - Does not support stop sequences
>
> Error code:
> - Groq will return a 400 error with an error code of `json_validate_failed` if JSON generation fails.
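As a rough sketch of what those docs describe, using the `openai` package against Groq's OpenAI-compatible endpoint (assumes `GROQ_API_KEY` is set; the schema in the system prompt is illustrative):

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.getenv("GROQ_API_KEY"),
)

completion = client.chat.completions.create(
    model="mixtral-8x7b-32768",
    # The beta JSON-mode switch from the Groq docs quoted above
    response_format={"type": "json_object"},
    messages=[
        # Describe the desired JSON structure in the system prompt
        {"role": "system",
         "content": 'Reply only with JSON like {"title": "...", "summary": "..."}.'},
        {"role": "user", "content": "Summarize why the sky is blue."},
    ],
)
# Valid JSON on success; a 400 with code json_validate_failed otherwise
print(completion.choices[0].message.content)
```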
> Could you try defining the LLM like this instead? Just want to double-check whether this could indicate another bug.
>
> ```
> pip install langchain-groq
> from langchain_groq import ChatGroq
> llm = ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")
> ```
It works for me, here is my setup:

```python
import os

from langchain_groq import ChatGroq
from langchain_openai import ChatOpenAI

MODEL_NAME = 'mixtral-8x7b-32768'

# llm = ChatGroq(temperature=0.9, model_name=MODEL_NAME, api_key=os.environ["GROQ_API_KEY"])
# If the previous instruction doesn't work, try this one:
os.environ["OPENAI_API_KEY"] = os.environ["GROQ_API_KEY"]
os.environ["OPENAI_API_BASE"] = 'https://api.groq.com/openai/v1'
os.environ["OPENAI_MODEL_NAME"] = MODEL_NAME  # Adjust based on available models
llm = ChatOpenAI(model=MODEL_NAME, base_url=os.environ["OPENAI_API_BASE"], api_key=os.environ["OPENAI_API_KEY"])

from crewai import Agent, Task, Crew

# Define your agents with roles and goals
storyteller = Agent(
    role='Story teller for kids',
    goal='Write a nice educational story for kids.',
    backstory='You are a successful story teller and a world-class ebook best seller.',
    verbose=True,
    allow_delegation=False,
    # You can pass an optional llm attribute specifying which model you want to use.
    # llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7),
    llm=llm,
)

# Create tasks for your agents
task1 = Task(
    description="Write a short story.",
    expected_output="One paragraph of text.",
    agent=storyteller,
)

# Instantiate your crew with a sequential process
crew = Crew(
    agents=[storyteller],
    tasks=[task1],
    verbose=2,  # You can set it to 1 or 2 for different logging levels
)

# Get your crew to work!
result = crew.kickoff()
print("######################")
print(result)
```

Tested on Colab.
> It works for me, here is my setup: […]
Try to understand what the topic is before you write a response.
> Try to understand what the topic is before you write a response.
I apologize if my comment was off-topic or unhelpful for your specific issue; I thought it might be helpful for someone else. If you'd prefer, I'm happy to delete my comment. Thanks!
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.