openai-node

Getting `Unhandled Rejection` event, randomly.

Open mManishTrivedi opened this issue 1 year ago • 7 comments

Confirm this is a Node library issue and not an underlying OpenAI API issue

  • [X] This is an issue with the Node library

Describe the bug

The Node library throws an unhandled promise rejection because of improper internal error handling.

To Reproduce

How to reproduce this issue:

  • Run and stream, then add an event listener to the stream.
  • Attempt to run and stream again on the same thread ID. This triggers an unhandled rejection, which will crash your Node server if no global error handler is in place.

Ideally, this error should be handled with stream.on('error', myCustomErrorHandler).

Code snippets

No response

OS

macOS

Node version

v18.19.0

Library version

openai 4.52.7

mManishTrivedi avatar Jul 29 '24 07:07 mManishTrivedi

Hi can you share an example script? (it doesn't have to reliably reproduce this issue as you've mentioned you're only seeing it sporadically)

RobertCraigie avatar Jul 29 '24 07:07 RobertCraigie

@RobertCraigie I haven't observed this issue in the last day. I have updated the OpenAI module. Let's monitor it for the next 1-2 days. If the issue persists, I will share a code snippet for reproduction.

mManishTrivedi avatar Jul 31 '24 05:07 mManishTrivedi

I encountered this error randomly after some retries on the same thread.

For example, if I post a message on a thread and receive the error message Final run has not been received, I retry up to 2 times sequentially and then stop resending.

Meanwhile, I receive this error message in my Global error handler with the same thread ID.

Unhandled rejection Trigger {"reason":{"status":400,"headers":{"alt-svc":"h3=\":443\"; ma=86400","cf-cache-status":"DYNAMIC","cf-ray":"8af38853fc9b641c-SJC","connection":"keep-alive","content-length":"217","content-type":"application/json","date":"Wed, 07 Aug 2024 01:35:53 GMT","openai-organization":"xtravision-ai-0qr74n","openai-processing-ms":"56","openai-version":"2020-10-01","server":"cloudflare","set-cookie":"__cf_bm=xSKJB9.KRWcvNuy28tdPNk6LZ01CinOtA; path=/; expires=Wed, 07-Aug-24 02:05:53 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None, _cfuvid=I4zFotD6LF5GbCGqaa3I1AifEhf586o-1722994553071-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None","strict-transport-security":"max-age=15552000; includeSubDomains; preload","x-content-type-options":"nosniff","x-request-id":"req_575ca705d2c2d5aa7d2f9e2ef775241e"},"request_id":"req_575ca705d2c2d5aa7d2f9e2ef775241e","error":{"message":"Can't add messages to thread_jfNWRMg0hojEMYDGnQLu6LwO while a run run_dwVhF3Zzwcu9v6KinLjwrc3J is active.","type":"invalid_request_error","param":null,"code":null},"code":null,"param":null,"type":"invalid_request_error"},"promise":{},"isNotifyToAdmin":true,"stack":"Error: Unhandled rejection Trigger\n at process.<anonymous> (/app/build/systemProcessEvent.js:19:49)\n at process.emit (node:events:517:28)\n at process.emit (node:domain:489:12)\n at emit (node:internal/process/promises:149:20)\n at processPromiseRejections (node:internal/process/promises:283:27)\n at process.processTicksAndRejections (node:internal/process/task_queues:96:32)"}
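The retry-then-give-up flow described above can be sketched as a generic helper (hypothetical, not SDK code); the key point is that every attempt's rejection is caught, so a failed retry can never escape as an unhandled rejection:

```javascript
// Hypothetical retry helper: run an async operation, retrying up to
// `retries` extra times. Every rejection is caught inside the loop, and
// only the last error is rethrown to the caller.
async function withRetries(fn, retries = 2) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      lastError = err; // swallowed here, so nothing is left unhandled
    }
  }
  throw lastError;
}
```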

mManishTrivedi avatar Aug 07 '24 07:08 mManishTrivedi

This is happening to us as well. We've had to stop using OpenAI streams altogether because of the process crashes.

llegoelkelo avatar Sep 28 '24 21:09 llegoelkelo

I'm running into this while trying to integrate with the Assistants API, and I seem to be able to replicate it here and there. I have a server with an API endpoint that submits tool outputs for a run and streams the response using the SDK. The user is able to cancel a run out-of-band via a separate API endpoint, which frequently seems to lead to this error.

Here's some sample code that deals with runs which require action. It should be fairly easy to adapt to an Express server or a Next.js API route.

It may be worth noting that in my implementation, the client also terminates the open connection with the server after successfully canceling the run while it is still streaming. This triggers the close event on the response, which in turn calls abort on the AbortController instance.

async function handler(request, response) {
    const threadId = request.body.threadId
    const runId = request.body.runId

    const controller = new AbortController()
    const signal = controller.signal

    response.on('close', () => {
        controller.abort()
    })

    const run = await openai.beta.threads.runs.retrieve(threadId, runId, { signal })
    const calls = run.required_action.submit_tool_outputs.tool_calls
    const outputs = await Promise.all(
        calls.map(async (call) => {
            const output = await handleFunctionCall(run, call, { signal })
            return { output, tool_call_id: call.id }
        }),
    )

    const stream = openai.beta.threads.runs.submitToolOutputsStream(
        threadId,
        runId,
        { tool_outputs: outputs },
        { signal },
    )

    // Simulate the user canceling the run out-of-band.
    // Adjust the delay as needed.
    setTimeout(async () => {
        await openai.beta.threads.runs.cancel(threadId, runId)
    }, 100)

    stream.on('error', (error) => {
        /**
         * E.g,
         *
         *  OpenAIError: Final run has not been received
         *  at AssistantStream._EventStream_handleError (/app/node_modules/openai/src/lib/EventStream.ts:159:40)
         *  at processTicksAndRejections (node:internal/process/task_queues:105:5) {
         *  cause: Error: Final run has not been received
         *      at AssistantStream._AssistantStream_endRequest (/app/node_modules/openai/src/lib/AssistantStream.ts:411:32)
         *      at AssistantStream._createToolAssistantStream (/app/node_modules/openai/src/lib/AssistantStream.ts:234:41)
         *      at processTicksAndRejections (node:internal/process/task_queues:105:5)
         *      at AssistantStream._runToolAssistantStream (/app/node_modules/openai/src/lib/AssistantStream.ts:772:12)
         */
        console.error(error)
    })

    return new Response(stream.toReadableStream())
}

async function handleFunctionCall(run, call, { signal }) {
    // Simulate a task that takes some time to complete
    await new Promise((resolve) => setTimeout(resolve, 1000))
    return JSON.stringify({ result: 'success' })
}

You may also find this comment helpful.

MarkMurphy avatar Feb 19 '25 03:02 MarkMurphy

Also, #945 may be related

MarkMurphy avatar Feb 19 '25 03:02 MarkMurphy

This is incredibly frustrating and has necessitated a global unhandledRejection fix in our code, which I am currently testing. It is at least solvable because, in theory, I can tie the run/stream/thread back together from the error message and handle it there. But now that I've installed it, there is another unhandled rejection I need to investigate, grrr...

Error: read ETIMEDOUT
    at AssistantStream._EventStream_handleError (file:///Users/myname/myprojectname/node_modules/openai/lib/EventStream.mjs:189:29

which may turn out to be much less solvable

(actually, hopefully it will be; I found this comment in the source while looking into it:

        // Trigger an unhandled rejection if the user hasn't registered any error handlers.
        // If you are seeing stack traces here, make sure to handle errors via either:
        // - runner.on('error', () => ...)
        // - await runner.done()
        // - await runner.finalChatCompletion()
        // - etc.

)
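The global unhandledRejection guard mentioned above can be sketched minimally (a sketch that only logs and keeps the process alive; mapping the reason back to a run/thread is left out):

```javascript
// Last-resort guard: once any 'unhandledRejection' listener is installed,
// Node no longer crashes the process on an unhandled rejection.
let unhandledCount = 0;
process.on('unhandledRejection', (reason) => {
  unhandledCount += 1;
  console.error('Unhandled rejection:', reason);
  // In a real handler you might inspect `reason` to recover the
  // thread/run ids and clean up (hypothetical; depends on your payload).
});
```

This keeps the server up, but it is a blunt instrument; handling errors at the stream itself (stream.on('error', ...)) is still the right fix.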

Unhandled rejections should be lintable!
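They partly are: for TypeScript codebases, typescript-eslint ships a no-floating-promises rule that flags promises which are neither awaited nor given a rejection handler. A sketch of the flat-config entry (assuming typed linting is already set up):

```javascript
// eslint.config.js (sketch; requires typescript-eslint with type information)
import tseslint from 'typescript-eslint';

export default tseslint.config({
  rules: {
    '@typescript-eslint/no-floating-promises': 'error',
  },
});
```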

suspiciousfellow avatar Mar 04 '25 19:03 suspiciousfellow