Added support for direct streaming of LLM output into state
Following up on #971 about streaming LLM responses without function calls:
I've added support for direct LLM output streaming. Instead of forcing the LLM to use function calls (which was causing inconsistent behavior), you can now stream the output directly to any state variable by setting the `direct_output` flag:
```python
from copilotkit.langchain import copilotkit_customize_config

config = copilotkit_customize_config(
    config,
    emit_messages=False,
    emit_intermediate_state=[{"state_key": "content", "direct_output": True}],
)
```
Set `direct_output: True` and `emit_messages: False`, and the LLM's response will stream straight to your specified state key instead of being emitted as chat messages.
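
For context, here's a minimal sketch of how this would look inside a LangGraph node. The state schema, node function, prompt, and model choice are all hypothetical placeholders; only `copilotkit_customize_config` and the shape of the `emit_intermediate_state` entry come from this PR:

```python
from typing import TypedDict

from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
from copilotkit.langchain import copilotkit_customize_config


class AgentState(TypedDict):  # hypothetical state schema
    content: str  # the state key the LLM output streams into


async def chat_node(state: AgentState, config: RunnableConfig):
    # Suppress regular message emission and stream the raw LLM output
    # directly into the "content" state key as tokens arrive.
    config = copilotkit_customize_config(
        config,
        emit_messages=False,
        emit_intermediate_state=[
            {"state_key": "content", "direct_output": True}
        ],
    )
    model = ChatOpenAI(model="gpt-4o")  # any LangChain chat model works here
    response = await model.ainvoke("Summarize the document.", config=config)
    # The completed text is also written back to state for downstream nodes.
    return {"content": response.content}
```

With this in place, the frontend should see `content` update incrementally through the agent's shared state rather than through the message stream.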
The latest updates on your projects.
| Name | Status | Preview | Comments | Updated (UTC) |
|---|---|---|---|---|
| docs | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Nov 25, 2024 9:40pm |
@yuvalkarmi is attempting to deploy a commit to the CopilotKit Team on Vercel.
A member of the Team first needs to authorize it.
@mme or @ranst91, can you please review?
@yuvalkarmi, just following up here. Did you see the requested changes?
Hey all, closing this PR since it is really out of date. Please reopen if it needs attention again.