
Added helpful error message telling user to use ChatCompletion

Filimoa opened this issue 2 years ago

Problem

Many users (including me) are trying to call the "gpt-3.5-turbo" model from the Completion class. See #250.

import openai

response = openai.Completion.create(
    model="gpt-3.5-turbo",
    prompt="Who won the world series in 2020?",
)

This returns a confusing error message, telling the user to use a different endpoint:

    677         stream_error = stream and "error" in resp.data
    678         if stream_error or not 200 <= rcode < 300:
--> 679             raise self.handle_error_response(
    680                 rbody, rcode, resp.data, rheaders, stream_error=stream_error
    681             )

InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
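
For reference, the call the error is steering users toward goes through the chat endpoint with the same prompt (a minimal sketch against the v0.x SDK):

import openai

# Chat models take a list of messages rather than a single prompt string.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Who won the world series in 2020?"}],
)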

Proposal

Given that this is a library that wraps the API, it'd be more helpful to tell the user to use the ChatCompletion class instead. My pull request wraps the error and displays the following:

        stream_error = stream and "error" in resp.data
        if stream_error or not 200 <= rcode < 300:
>           raise self.handle_error_response(
                rbody, rcode, resp.data, rheaders, stream_error=stream_error
            )
E           openai.error.InvalidRequestError: Please use ChatCompletion instead of Completion when using the 'gpt-3.5-turbo' models. For more information see https://platform.openai.com/docs/guides/chat/introduction.

Approach

My initial thought was to add some sort of kwarg checking in Completion, but that turns out to be a lean wrapper class. Searching further, handle_error_response looks like a good place for the check, since it already does something similar for internal errors. A sketch of the idea follows.
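
Roughly, the shape of the check could look like this (the helper name and matched string are illustrative, not necessarily what the PR actually does):

# Hypothetical sketch of the message-matching approach; the helper name
# and constant below are illustrative, not the actual PR code.
CHAT_MODEL_MSG = "This is a chat model and not supported in the v1/completions endpoint"

def friendlier_error_message(api_message: str) -> str:
    # Rewrite the API's endpoint error into an SDK-level hint; any other
    # message passes through unchanged.
    if CHAT_MODEL_MSG in api_message:
        return (
            "Please use ChatCompletion instead of Completion when using "
            "chat models such as 'gpt-3.5-turbo'. For more information see "
            "https://platform.openai.com/docs/guides/chat/introduction."
        )
    return api_message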

Caveats

This approach is fairly tightly coupled to the OpenAI API's error response message, and I'm not sure how stable these messages are. With that said, if the error message does change, this will not affect any other functionality.

Filimoa avatar Mar 02 '23 17:03 Filimoa

Thanks for this! I know the team was also working on some ways to improve this; I will let other folks follow up on whether this is something we want to do in the Python SDK.

logankilpatrick avatar Mar 03 '23 14:03 logankilpatrick

I see the idea, but I'm a little nervous about the dependency on the API error message. I'd rather have this covered in something like an FAQ (which might be a good idea to start at this point) or with better documentation in the README. That said, I get the appeal. Given that this won't be the only model with this issue, can we make the error more generic?

hallacy avatar Apr 08 '23 01:04 hallacy

One option is for the API to return an error code for this case, e.g. code: "chat_model_unsupported", and the SDK could depend on that to change the error message instead of matching the message text.
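
A minimal sketch of what depending on such a code could look like (the code value is hypothetical, and the json_body shape is assumed from the API's usual error envelope):

# Hypothetical: assumes the API starts returning a stable machine-readable
# code alongside the human-readable message.
def friendlier_error_message(json_body: dict) -> str:
    error = json_body.get("error", {})
    # Matching a code is far less brittle than substring-matching the message.
    if error.get("code") == "chat_model_unsupported":
        return "Please use ChatCompletion instead of Completion for chat models."
    return error.get("message", "Unknown error")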

However, I'm leaning towards updating our README / quickstart to use chat completions by default instead. PR here: https://github.com/openai/openai-python/pull/441. Hopefully that will reduce the confusion for new users. Open to reconsidering if this continues to be a problem!

athyuttamre avatar May 08 '23 18:05 athyuttamre