
Can we stream responses?

mneedham opened this issue 7 months ago · 11 comments

Describe the bug

I'm not sure whether this is a bug or whether it just isn't supposed to work this way, but I can't figure out how to stream the response from the LLM.

To Reproduce

from lightrag.core.generator import Generator
from lightrag.components.model_client import OllamaClient

model_client = OllamaClient()
# "stream": True is passed through to Ollama via model_kwargs
model_kwargs = {"model": "phi3", "stream": True}
generator = Generator(model_client=model_client, model_kwargs=model_kwargs)
generator({"input_str": "What is the capital of France?"})

Returns:

GeneratorOutput(
    data=None,
    error='Error parsing the completion: <generator object Client._stream at 0x11e388480>',
    usage=None,
    raw_response='<generator object Client._stream at 0x11e388480>',
    metadata=None
)
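
So it looks like the Ollama client hands back the raw stream generator, and the Generator's output parser then tries to treat it as a single completed response.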

Expected behavior

I want to be able to iterate over the response and render it as it's produced.
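
For comparison, this is the kind of loop I'd like to end up with. It works when calling the ollama Python client directly with stream=True (a rough sketch against the ollama package, not AdalFlow's API; the chunk shape is what ollama's chat() yields):

import ollama

client = ollama.Client()
# With stream=True, chat() returns a generator of partial responses
stream = client.chat(
    model="phi3",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries the next piece of the assistant's message
    print(chunk["message"]["content"], end="", flush=True)

Ideally GeneratorOutput (or something like it) would expose the stream so the same pattern works through AdalFlow.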


Desktop:

macOS Sonoma 14.5

mneedham · Jul 23 '24