kernel-memory
Streaming AskAsync response
Hi,
We're using your library in our project. It got us up to speed pretty fast with these new AI topics, so thank you for that :).
I was wondering if there is an option to receive a streaming response from our semantic memory service. This could improve the user experience: receiving the first tokens of the response instead of waiting for the whole thing. OpenAI's ChatGPT works this way.
BR, Dawid