
JSON parsing issue when using openai.Completion.acreate(stream=True)

Open · nfcampos opened this issue 2 years ago

See the last line of the traceback below (a rough line-buffering sketch follows it); it looks like the stream iterator might be splitting the response where it shouldn't, unless it's an API issue. cc @ddeville

Traceback (most recent call last):
  File "/Users/nuno/Library/Caches/pypoetry/virtualenvs/platform-api-30l3Kv3P-py3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 670, in _interpret_response_line
    data = json.loads(rbody)
  File "/opt/homebrew/Cellar/[email protected]/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/opt/homebrew/Cellar/[email protected]/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/opt/homebrew/Cellar/[email protected]/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 191 (char 190)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/nuno/dev/concat/server-py/concat/conversation.py", line 87, in handle_conversation
    await run
  File "/Users/nuno/dev/concat/server-py/concat/agent/openai/openai.py", line 122, in __call__
    msg, _ = await asyncio.gather(
  File "/Users/nuno/dev/concat/server-py/concat/tools/multi_prompt.py", line 87, in __call__
    return await self.call(prompt["prompt"], examples, input, ctx)
  File "/Users/nuno/dev/concat/server-py/concat/tools/sql/__init__.py", line 59, in call
    async for key, msg in ctx.stream_multi_message_async(
  File "/Users/nuno/dev/concat/server-py/concat/context/context.py", line 419, in stream_multi_message_async
    async for text in iterator:
  File "/Users/nuno/dev/concat/server-py/concat/utils/openai.py", line 37, in openai_completion_stream
    async for chunk in cast(
  File "/Users/nuno/Library/Caches/pypoetry/virtualenvs/platform-api-30l3Kv3P-py3.10/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 230, in <genexpr>
    return (
  File "/Users/nuno/Library/Caches/pypoetry/virtualenvs/platform-api-30l3Kv3P-py3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 320, in wrap_resp
    async for r in resp:
  File "/Users/nuno/Library/Caches/pypoetry/virtualenvs/platform-api-30l3Kv3P-py3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 635, in <genexpr>
    self._interpret_response_line(
  File "/Users/nuno/Library/Caches/pypoetry/virtualenvs/platform-api-30l3Kv3P-py3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 672, in _interpret_response_line
    raise error.APIError(
openai.error.APIError: HTTP code 200 from API ({"id": "cmpl-6YxjgLI6W6XwRj8hqhDZ6aCFz19Hp", "object": "text_completion", "created": 1673790796, "choices": [{"text": "sche", "index": 0, "logprobs": null, "finish_reason": null}], "model": ")
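
For reference, a minimal sketch of the kind of line buffering that would avoid parsing a partial chunk; this is not openai-python's actual implementation, and the parse_sse_events helper and raw byte iterator are hypothetical:

import json


async def parse_sse_events(byte_iterator):
    """Yield parsed JSON payloads from an async iterator of raw byte chunks."""
    buffer = b""
    async for chunk in byte_iterator:  # chunks may end mid-line, i.e. mid-JSON
        buffer += chunk
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            line = line.strip()
            if not line or not line.startswith(b"data:"):
                continue
            payload = line[len(b"data:"):].strip()
            if payload == b"[DONE]":
                return
            yield json.loads(payload)  # only complete lines reach the parser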

nfcampos · Jan 15 '23 13:01

Huh, that's weird. Any chance you have a command to recreate it?

hallacy · Jan 15 '23 16:01

No, this happened only once, out of many similar requests. The same request (or as close to the same as I could make it), repeated afterwards, didn't trigger this error.


nfcampos · Jan 15 '23 20:01

I just got this exact error and it seems like an isolated case. I noticed it was taking a long time to get the response, around 2 minutes, and eventually it raised that exception. This is the Python code I used with v0.26.1 (a retry sketch follows it):

response = openai.Image.create(
  prompt='a kitten splashing on an aquarium full of candy',
  n=10,
  size="1024x1024",
  response_format='b64_json'
)
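
In case it helps, a minimal retry sketch around that call, assuming the failure surfaces as openai.error.APIError (the retry count and backoff are arbitrary):

import time

import openai

# Hypothetical retry wrapper around the call above; the truncated-response
# failure surfaces as APIError, so retrying a few times is usually enough.
for attempt in range(3):
    try:
        response = openai.Image.create(
            prompt='a kitten splashing on an aquarium full of candy',
            n=10,
            size="1024x1024",
            response_format='b64_json',
        )
        break
    except openai.error.APIError:
        if attempt == 2:
            raise
        time.sleep(2 ** attempt)  # simple backoff before retrying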

ricardopinto · Jan 21 '23 19:01

I'm seeing this issue in prod with bulk async requests @athyuttamre. I validated the JSON and tried a number of other ways to avoid it, but I still get "Error: Unterminated string starting at: line 9 column 18 (char 225)".

My async retry function is below, with a sample call site after it:

import asyncio
import json
import random

import openai


def aretry_json_with_exponential_backoff(
    initial_delay: float = random.random(),
    exponential_base: float = 2,
    jitter: bool = True,
    max_retries: int = 10,
    errors: tuple = (openai.error.RateLimitError, openai.error.ServiceUnavailableError, json.JSONDecodeError, openai.error.APIError),
):
    """Retry a function with exponential backoff."""
    def decorator(func):
        async def wrapper(*args, **kwargs):
            # Initialize variables
            num_retries = 0
            initial_delay = random.random()
            delay = initial_delay
            
            # Loop until a successful response or max_retries is hit or an exception is raised
            while True:
                try:
                    await asyncio.sleep(initial_delay * 0.10)
                    response = await func(*args, **kwargs)
                    response = response["choices"][0]["message"]["content"]
                    return json.loads(response)

                # Retry on specified errors
                except errors as err_msg:
                    # Increment retries
                    num_retries += 1
                    print(f"Recalling OpenAI API #{num_retries}. Error: {err_msg}. iDelay: {initial_delay:.3f}. Delay: {delay:.3f}\n\n{str(args)}\n\n{str(kwargs)}" )
                    # Check if max retries has been reached
                    if num_retries > max_retries:
                        raise Exception(
                            f"Maximum number of retries ({max_retries}) exceeded."
                        )

                    # Increment the delay
                    delay *= exponential_base * (1 + jitter * random.random())

                    # Sleep for the delay
                    await asyncio.sleep(delay)

                # Raise exceptions for any errors not specified
                except Exception as err_msg:
                    raise err_msg
        return wrapper
    return decorator

@aretry_json_with_exponential_backoff()
async def achats_json_with_backoff(**kwargs):
    return await openai.ChatCompletion.acreate(**kwargs)
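
For completeness, a hypothetical call site for the decorated function (the model and messages are placeholders):

result = await achats_json_with_backoff(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Return a JSON object describing a cat."}],
)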

cenedella · May 12 '23 23:05

Any thoughts @hallacy @ellisonbg @athyuttamre? This is a recurring error using openai 0.27.6.

cenedella · May 29 '23 15:05

I'm having the same issue with Azure OpenAI. It happens case by case, only for very specific payloads.

MohammedFadin · Jul 18 '23 19:07