
OpenAI GPT-3.5-turbo not supported?

Open williamgurzoni opened this issue 1 year ago • 2 comments

I'm trying to use GPT-3.5-turbo, but receiving this error message:

raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'message': 'This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?', 'type': 'invalid_request_error', 'param': 'model', 'code': None}}

I'm importing OpenAi from langchain_openai.

It seems that v1 of the OpenAI Python package had some breaking changes; I'm wondering if it's related. Thank you.
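For context, the 404 happens because chat models like gpt-3.5-turbo are only served by the chat completions endpoint, while the legacy completions wrapper calls `v1/completions`. A minimal illustrative sketch of that routing (the prefix check is a simplification, not the library's actual logic; e.g. `gpt-3.5-turbo-instruct` is a real-world exception that does use the completions endpoint):

```python
# Illustrative only: which API path a model name would be called on.
# Chat models (gpt-3.5-turbo, gpt-4, ...) require v1/chat/completions;
# hitting v1/completions with them returns the 404 shown above.
CHAT_MODEL_PREFIXES = ("gpt-3.5-turbo", "gpt-4")

def endpoint_for(model_name: str) -> str:
    """Return the API path a given model should be called on."""
    if model_name.startswith(CHAT_MODEL_PREFIXES):
        return "/v1/chat/completions"
    return "/v1/completions"
```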

williamgurzoni avatar Jan 29 '24 09:01 williamgurzoni

hey @williamgurzoni it should just work, can you share some of the code around the issue? Here are some examples of connecting to multiple LLMs: https://joaomdmoura.github.io/crewAI/how-to/LLM-Connections/

joaomdmoura avatar Jan 29 '24 12:01 joaomdmoura

@williamgurzoni I don't know if this solution will suit you, but you can set the llm like this:

llm = ChatOpenAI(openai_api_base=os.environ.get("OPENAI_API_BASE_URL"),
                 openai_api_key=os.environ.get("OPENAI_API_KEY"),
                 model_name=os.environ.get("MODEL_NAME")
                 )

and have the following configs in your .env file:

OPENAI_API_KEY=<your-api-key>
OPENAI_API_BASE_URL="https://api.openai.com/v1"
MODEL_NAME="gpt-3.5-turbo"

Don't forget to install the dotenv package (pip install python-dotenv) and to call load_dotenv() in your code:

import os

from langchain.chat_models.openai import ChatOpenAI
from dotenv import load_dotenv

load_dotenv()

llm = ChatOpenAI(openai_api_base=os.environ.get("OPENAI_API_BASE_URL"),
                 openai_api_key=os.environ.get("OPENAI_API_KEY"),
                 model_name=os.environ.get("MODEL_NAME"))

This works for me, hope that helps!
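Since the snippet above reads everything from the environment, a missing or misspelled variable silently becomes None. A small stdlib-only guard you could run before constructing the client (the variable names match the .env example above; the helper itself is just a sketch):

```python
import os

# The three variables the ChatOpenAI setup above relies on.
REQUIRED_VARS = ("OPENAI_API_KEY", "OPENAI_API_BASE_URL", "MODEL_NAME")

def missing_env_vars(env=os.environ) -> list:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Calling `missing_env_vars()` right after `load_dotenv()` and raising if the list is non-empty gives a clearer error than a downstream 401 or a `model_name=None`.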

p.s.: The link that @joaomdmoura shared helped me get to this solution, thx

rmgravina avatar Jan 29 '24 21:01 rmgravina

Perfect! That makes total sense @rmgravina. Thank you both very much for adding so much info here! I was able to get it working with your example.

I think I got stuck because of the example in the Readme:

  ...
  # llm=OpenAI(model_name="gpt-3.5", temperature=0.7)
  # For the OpenAI model you would need to import
  # from langchain_openai import OpenAI
  ...

I was trying to use OpenAI instead of ChatOpenAI. As soon as I learn more about the project, I'll be happy to contribute a PR.

Great project guys, thanks.

williamgurzoni avatar Jan 30 '24 07:01 williamgurzoni