JSONDecodeError - OpenAI API response is not JSON-compliant
Hello, many thanks for the library
OpenAI sometimes returns a response that is not JSON-compliant, for example:
- Adding a trailing comma
- Using double quotes inside a string value without escaping them with backslashes
- Not closing a bracket, so the response ends with something like xxx."\n}'
Tested with gpt-3.5-turbo-0613
When using a pydantic model as output_schema (inspired by the example notebook), orjson.loads fails. Unfortunately, it's a bit unpredictable and I cannot write a reproducible example. Any advice?
Many thanks
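For reference, this is the kind of naive local repair I'm falling back to as a workaround right now. It is only a rough sketch: loads_tolerant is a name I made up, and the trailing-comma regex and brace counting will miss commas or braces that appear inside string values.

```python
import re
import orjson

def loads_tolerant(raw: str) -> dict:
    """Strict parse first; on failure, apply a few common repairs and retry."""
    try:
        return orjson.loads(raw)
    except orjson.JSONDecodeError:
        fixed = raw.strip()
        # Drop trailing commas before a closing brace/bracket.
        fixed = re.sub(r",\s*([}\]])", r"\1", fixed)
        # Append any closing braces/brackets that were never emitted.
        fixed += "}" * (fixed.count("{") - fixed.count("}"))
        fixed += "]" * (fixed.count("[") - fixed.count("]"))
        return orjson.loads(fixed)
```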
It would be cool to add a mechanism like this from langchain: https://python.langchain.com/docs/modules/model_io/output_parsers/output_fixing_parser
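Something along these lines, I mean. This is just a minimal sketch assuming the official openai Python client (v1 style); fix_json_with_llm is a hypothetical helper, not part of simpleaichat:

```python
import orjson
from openai import OpenAI

client = OpenAI()

def fix_json_with_llm(broken: str, model: str = "gpt-3.5-turbo") -> dict:
    """Ask the model to repair its own invalid JSON, then parse the result."""
    resp = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system",
             "content": "The following text was meant to be JSON but is invalid. "
                        "Return a corrected version. Output only the JSON."},
            {"role": "user", "content": broken},
        ],
    )
    return orjson.loads(resp.choices[0].message.content)
```

The idea is the same as langchain's OutputFixingParser: only call it on the exception path, so the happy path costs nothing extra.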
I'm encountering the same or a similar error, but I'm not using a pydantic output schema, just plain prompts and a system prompt (programming related; the result includes correctly rendered Python markdown). I believe this is the relevant part of the stack trace:
```
for chunk in ai.stream(message, system=system_prompt):
  File "/home/user/.local/lib/python3.10/site-packages/simpleaichat/chatgpt.py", line 160, in stream
    chunk_dict = orjson.loads(chunk)
orjson.JSONDecodeError: Input is a zero-length, empty document: line 1 column 1 (char 0)
```
Zero-length input should already be caught by https://github.com/minimaxir/simpleaichat/blob/55b8324f8f4558c42e33ab6478446af6fd1773d7/simpleaichat/chatgpt.py#L157, though...
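For what it's worth, a defensive version of the chunk handling I'd expect there would look roughly like this. It's a sketch, not the library's actual code, and it assumes OpenAI's SSE format where payload lines are prefixed with "data:" and the stream ends with a "[DONE]" sentinel:

```python
import orjson

def parse_stream_chunk(chunk: bytes):
    """Return the parsed payload, or None for keep-alives and the [DONE] sentinel."""
    text = chunk.decode("utf-8", errors="ignore").strip()
    if text.startswith("data:"):
        text = text[len("data:"):].strip()
    if not text or text == "[DONE]":
        return None
    return orjson.loads(text)
```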
I ran into this today, too. It seems like the library fails to handle errors returned by the API (like a 429 Too Many Requests response). It definitely needs more robust error handling and error reporting... Otherwise, it's very confusing.
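Agreed. Even just checking the status code before trying to parse the body would make the failure obvious. Rough sketch below, assuming an httpx-style request; post_chat, the url, and the headers are placeholders, not simpleaichat's actual internals:

```python
import httpx

def post_chat(url: str, headers: dict, payload: dict) -> dict:
    """Raise a readable error for non-200 responses instead of parsing them as completions."""
    resp = httpx.post(url, headers=headers, json=payload, timeout=60)
    if resp.status_code != 200:
        # OpenAI error bodies are JSON with an "error" object; surface them verbatim.
        raise RuntimeError(f"OpenAI API error {resp.status_code}: {resp.text}")
    return resp.json()
```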