Accessing Stream Chunks (Streamed generation)
I'm submitting a ...

- [x] question about how to use this project
**Summary**

I'm encountering two problems when working with the streaming example:

- When running the code from `examples/streaming2.ts` with `stream: true`, I get an error: `Missing required fields: answerInPoints`. What's causing this error and how can I resolve it?
- After setting `stream: true`, how can I access the result chunks? Are there methods similar to `for await (const chunk of result)` or `completion.data.on()` that I can use to process the incoming stream? (Similar to https://github.com/openai/openai-node/issues/18)
Any guidance on resolving these issues would be greatly appreciated. Thank you!
Sorry, was planning to fix this earlier but was in the middle of our big migration to a monorepo. Looking into this now.
Fix is in the latest release.
I would like to reopen this issue. For number two, I don't understand how to do it with `.chat`.
I can see that it is supposed to return a readable stream, and I have set `stream` to true, but I cannot get it to work.
Any examples or ideas @dosco?
@taieb-tk have you looked at the streaming1.ts and streaming2.ts examples? `stream: true` enables streaming with the underlying LLM provider to speed things up; the final fields are not streamed out.
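For reference, here is roughly what the signature-based flow in examples/streaming2.ts looks like. This is a sketch from memory: the class names match the public API, but the exact constructor and `forward()` signatures may have drifted between releases, so treat the example files in the repo as authoritative.

```ts
import { AxAI, AxChainOfThought } from '@ax-llm/ax';

// Sketch in the spirit of examples/streaming2.ts -- check the actual file
// for the exact signatures; this is illustrative, not authoritative.
const ai = new AxAI({
  name: 'openai',
  apiKey: process.env.OPENAI_APIKEY as string,
});

const gen = new AxChainOfThought(
  `question:string -> answerInPoints:string "the answer as bullet points"`
);

// stream: true only speeds up transport with the provider; the parsed
// output field (answerInPoints) is assembled from the stream and becomes
// available once generation completes, not chunk by chunk.
const res = await gen.forward(ai, { question: 'Why is the sky blue?' }, { stream: true });
console.log(res.answerInPoints);
```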
@dosco Yes I did; I could not get it to work, probably a skill issue on my side. I tried to just use:

```ts
const ai = new ax.AxAIOpenAI({
  apiKey: apiKey as string,
});

ai.setOptions({ debug: true });

const response = await ai.chat({
  chatPrompt: conversationHistory,
  config: {
    stream: true,
  },
  ...(tools?.length && { functions: normalizeFunctions(tools) }),
});
```

Not sure what to do with the response in the next step... Could you possibly help me? :)
Bump, any help would be appreciated :)
The bug is in the lines below, which are wrong. TypeScript should catch this; it's even in the API docs: https://axllm.dev/apidocs/classes/axai/

```ts
config: {
  stream: true
},
```
It should be:

```ts
ai.setOptions({ debug: true });

const response = await ai.chat({
  chatPrompt: conversationHistory,
  ...(tools?.length && { functions: normalizeFunctions(tools) }),
}, {
  stream: true,
});
```
The response is supposed to be an AsyncGenerator? Any examples on how to consume that stream?
Ahh ok! I read a bit too fast; I will try that! Thanks!! :)
Use an async for loop like in the streaming examples.
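Something like this, as a minimal sketch: it assumes `ai.chat()` with `{ stream: true }` in the options argument returns an async iterable of partial chat responses, and that each chunk carries a `results[0].content` delta. Log a chunk and check examples/streaming1.ts to confirm the exact fields.

```ts
// Consume the stream returned by ai.chat(..., { stream: true }).
// Note: if chat() is typed as a union of response | stream, you may need
// to narrow the type before iterating.
const stream = await ai.chat(
  { chatPrompt: conversationHistory },
  { stream: true }
);

for await (const chunk of stream) {
  // The results[0].content field is an assumption; inspect a chunk to
  // confirm the shape for your version of ax.
  process.stdout.write(chunk.results?.[0]?.content ?? '');
}
```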
Your answer above solved my problem! Really appreciate the help. Looking forward to continuing to test this 👍