
Accessing Stream Chunks (Streamed generation)

Open backslash112 opened this issue 1 year ago • 11 comments

  • I'm submitting a ... [x] question about how to use this project

  • Summary I'm encountering two problems when working with the streaming example:

  1. When running the code from examples/streaming2.ts with stream: true, I get an error: Missing required fields: answerInPoints. What's causing this error and how can I resolve it?
  2. After setting stream: true, how can I access the result chunks? Are there methods similar to for await (const chunk of result) or completion.data.on() that I can use to process the incoming stream? (Similar to https://github.com/openai/openai-node/issues/18)

Any guidance on resolving these issues would be greatly appreciated. Thank you!

backslash112 avatar Jun 29 '24 17:06 backslash112

Sorry, I was planning to fix this earlier but was in the middle of our big migration to a monorepo. Looking into this now.

dosco avatar Jul 03 '24 23:07 dosco

Fixed in the latest release.

dosco avatar Jul 04 '24 07:07 dosco

I would like to reopen this issue. Regarding point two, I don't understand how to do it with `.chat`.

I can see that it is supposed to return a readable stream, and I have set `stream` to `true`, but I cannot get it to work.

Any example or ideas @dosco ?

taieb-tk avatar Aug 18 '24 17:08 taieb-tk

@taieb-tk have you looked at the streaming1.ts and streaming2.ts examples? `stream: true` enables streaming with the underlying LLM provider to speed things up; the final fields are not streamed out.

dosco avatar Aug 26 '24 07:08 dosco

@dosco Yes I did, but I could not get it to work; probably a skill issue on my side. I tried to just use:

```typescript
const ai = new ax.AxAIOpenAI({ apiKey: apiKey as string });

ai.setOptions({ debug: true });

const response = await ai.chat({
    chatPrompt: conversationHistory,
    config: {
        stream: true
    },
    ...(tools?.length && { functions: normalizeFunctions(tools) }),
});
```

Not sure what to do with the response in the next step... Could you possibly help me? :)

taieb-tk avatar Sep 01 '24 16:09 taieb-tk

Bump, any help would be appreciated :)

taieb-tk avatar Oct 13 '24 16:10 taieb-tk

The bug is in the lines below, which are wrong. TypeScript should catch this; it's even in the API docs: https://axllm.dev/apidocs/classes/axai/

```typescript
config: {
    stream: true
},
```

It should be:

```typescript
ai.setOptions({ debug: true });

const response = await ai.chat({
    chatPrompt: conversationHistory,
    ...(tools?.length && { functions: normalizeFunctions(tools) }),
}, {
    stream: true
});
```

dosco avatar Oct 14 '24 17:10 dosco

The response is supposed to be an AsyncGenerator? Any examples on how to consume that stream?

taieb-tk avatar Oct 14 '24 19:10 taieb-tk

Ahh ok! I read a bit too fast, I will try that! Thanks!! :)

taieb-tk avatar Oct 14 '24 19:10 taieb-tk

Use an async `for await` loop, like in the streaming examples.
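A minimal sketch of that pattern, assuming the streamed response behaves like an AsyncGenerator of chunk objects. The `makeStream` helper and the `content` field here are stand-ins for illustration, not the real ax API; check the streaming1.ts and streaming2.ts examples for the actual chunk shape.

```typescript
// Stand-in for a streamed ai.chat() response: an async generator that
// yields chunks one at a time. The { content } shape is an assumption.
async function* makeStream(parts: string[]): AsyncGenerator<{ content: string }> {
  for (const part of parts) {
    yield { content: part };
  }
}

// Drain the stream chunk by chunk with `for await`, accumulating
// the pieces into the full response text.
async function collectStream(
  stream: AsyncGenerator<{ content: string }>
): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    full += chunk.content;
  }
  return full;
}
```

With a real streamed response you would replace `makeStream(...)` with the value returned by `ai.chat(..., { stream: true })` and process each chunk inside the loop (e.g. append it to the UI) instead of just concatenating.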


dosco avatar Oct 14 '24 20:10 dosco

Your answer above solved my problem! Really appreciate the help. Looking forward to continuing to test this 👍

taieb-tk avatar Oct 14 '24 20:10 taieb-tk