
Return error code (e.g. "message_length_exceeds_limit") from sendMessage

Open emipc opened this issue 2 years ago • 6 comments

Describe the feature

I'd like to get the error code that ChatGPT responds with when the message you submit is too long. Unfortunately, this is only logged to the console and is not part of the error thrown from sendMessage, so we can't use it.

I think that'd be very useful in order to implement proper error handling on our end.

This is the error that is being logged in the console:

ChatGPT "[email protected]" error 413; {
  message: 'The message you submitted was too long, please reload the conversation and submit something shorter.',
  code: 'message_length_exceeds_limit'
}
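A hypothetical sketch of what the request above is asking for: an error subclass that carries the API's `statusCode` and `code` so callers can branch on them. The class name and shape are assumptions for illustration, not the library's actual implementation.

```javascript
// Hypothetical error shape carrying the extra detail from the API response.
class ChatGPTError extends Error {
  constructor(message, { statusCode, code } = {}) {
    super(message)
    this.name = 'ChatGPTError'
    this.statusCode = statusCode // e.g. 413
    this.code = code // e.g. 'message_length_exceeds_limit'
  }
}

// Example of how sendMessage could surface the logged details:
function throwTooLong() {
  throw new ChatGPTError(
    'The message you submitted was too long, please reload the conversation and submit something shorter.',
    { statusCode: 413, code: 'message_length_exceeds_limit' }
  )
}

// Caller-side handling becomes straightforward:
function classify(err) {
  if (err instanceof ChatGPTError && err.code === 'message_length_exceeds_limit') {
    return 'too-long'
  }
  return 'unknown'
}
```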

emipc avatar Jan 19 '23 14:01 emipc

Hey @emipc, thanks for opening an issue 😄

Looks like we just need to include this extra detail in the error being thrown, right?

Also, I would recommend capping the input length on your end before even calling sendMessage to prevent abuse.

transitive-bullshit avatar Jan 22 '23 07:01 transitive-bullshit

I think the message length limit is 4097.

https://chatgpttalk.com/

adminha avatar Jan 24 '23 10:01 adminha

@adminha is that characters or bytes?

MichaelVandi avatar Jan 24 '23 10:01 MichaelVandi

> Hey @emipc, thanks for opening an issue 😄
>
> Looks like we just need to include this extra detail in the error being thrown, right?
>
> Also, I would recommend capping the input length on your end before even calling sendMessage to prevent abuse.

Hey! I guess that's all we need to do, yep.

I'm already handling it before sending the request, but it depends on the language you're using, and checking the string length of a message might not work as expected because limits are based on tokens: https://beta.openai.com/tokenizer

If I understood it correctly, it'd be better to encode the string and count the tokens.

Anyway, I found it confusing to see the error code in the console without being able to use it, so exposing it would let us handle different error cases properly.
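A cheap pre-flight guard along these lines can be sketched as below. The ~4-characters-per-token ratio is a rough rule of thumb for English text, not an exact count; an accurate check would run the string through a real BPE encoder such as GPT-3-Encoder. The function names are illustrative.

```javascript
// Rough heuristic: English text averages about 4 characters per token.
// This only approximates the real BPE token count.
const MAX_TOKENS = 4097 // limit mentioned in this thread

function estimateTokens(text) {
  return Math.ceil(text.length / 4)
}

// Pre-flight guard before calling sendMessage.
function fitsWithinLimit(text, maxTokens = MAX_TOKENS) {
  return estimateTokens(text) <= maxTokens
}
```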

emipc avatar Jan 24 '23 15:01 emipc

@emipc @transitive-bullshit @adminha is the 4097 limit based on tokens, characters or bytes?

MichaelVandi avatar Jan 28 '23 14:01 MichaelVandi

@MichaelVandi the old limit and the new limit are different.

v3 uses the chatgpt webapp.

v4 uses the official openai api w/ a chat model that was released in stealth. It has stricter token limits.

Afaik all limits are based on tokens. You can use https://github.com/latitudegames/GPT-3-Encoder to get an estimate of the token count.

transitive-bullshit avatar Feb 01 '23 19:02 transitive-bullshit

This should no longer be relevant to the v4 ChatGPTAPI or ChatGPTUnofficialProxyAPI.

There will always be context limits (4096 tokens), which we try to enforce. If you're running into these limits, add debug: true to your constructor to see what's going on under the hood.
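For reference, the debug flag mentioned above is a constructor option. A minimal config sketch (assuming an `OPENAI_API_KEY` environment variable is set):

```typescript
import { ChatGPTAPI } from 'chatgpt'

// debug: true logs extra detail (e.g. token accounting) to the console
const api = new ChatGPTAPI({
  apiKey: process.env.OPENAI_API_KEY!,
  debug: true
})
```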

transitive-bullshit avatar Feb 19 '23 10:02 transitive-bullshit