Example of using LlamaIndex stream_chat()
The Streamlit docs on creating a streaming chatbot show the following example:
```python
for response in client.chat.completions.create(
    model=st.session_state["openai_model"],
    messages=[{"role": m["role"], "content": m["content"]} for m in st.session_state.messages],
    stream=True,
):
    ...
```
...but there is no example (that I can find) of doing the same with a streaming chat engine created from an index object, as shown in the LlamaIndex examples:
```python
chat_engine = index.as_chat_engine()
streaming_response = chat_engine.stream_chat("Tell me a joke.")
for token in streaming_response.response_gen:
    print(token, end="")
```
If I try to use `chat_engine.stream_chat()` with the `for response in client.chat.completions.create()` pattern shown in the Streamlit docs, I get the following error: `RuntimeError: There is no current event loop in thread 'ScriptRunner.scriptThread'`.
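As far as I can tell from the traceback below, the root cause is that `StreamingAgentChatResponse` constructs an `asyncio.Queue`, which on Python 3.9 calls `asyncio.get_event_loop()`, and that call raises in any thread that has no event loop set (Streamlit runs scripts in `ScriptRunner.scriptThread`, not the main thread). A minimal repro outside Streamlit (my own sketch, not from either set of docs):

```python
import asyncio
import threading

def worker():
    # On Python 3.9, get_event_loop() only auto-creates a loop in the
    # main thread; in any other thread with no loop set it raises
    # "RuntimeError: There is no current event loop in thread ...".
    asyncio.get_event_loop()

t = threading.Thread(target=worker, name="worker")
t.start()
t.join()  # the worker thread prints the RuntimeError traceback to stderr
```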
An example traceback when simply using the following, as in the Streamlit docs on creating a streaming chatbot:

```python
if st.session_state.messages[-1]["role"] != "assistant":
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""
        for token in st.session_state.chat_engine.stream_chat(prompt):
            print(token)
            message_placeholder.markdown(full_response)
    st.session_state.messages.append({"role": "assistant", "content": full_response})
```
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "/workspaces/gcp_llm/app.py", line 67, in <module>
    for token in st.session_state.chat_engine.stream_chat(prompt):
  File "/usr/local/lib/python3.9/site-packages/llama_index/callbacks/utils.py", line 39, in wrapper
    return func(self, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/llama_index/agent/openai_agent.py", line 444, in stream_chat
    chat_response = self._chat(
  File "/usr/local/lib/python3.9/site-packages/llama_index/agent/openai_agent.py", line 330, in _chat
    agent_chat_response = self._get_agent_response(mode=mode, **llm_chat_kwargs)
  File "/usr/local/lib/python3.9/site-packages/llama_index/agent/openai_agent.py", line 295, in _get_agent_response
    return self._get_stream_ai_response(**llm_chat_kwargs)
  File "/usr/local/lib/python3.9/site-packages/llama_index/agent/openai_agent.py", line 196, in _get_stream_ai_response
    chat_stream_response = StreamingAgentChatResponse(
  File "<string>", line 10, in __init__
  File "/usr/local/lib/python3.9/asyncio/queues.py", line 36, in __init__
    self._loop = events.get_event_loop()
  File "/usr/local/lib/python3.9/asyncio/events.py", line 642, in get_event_loop
    raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'ScriptRunner.scriptThread'.
```
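For what it's worth, the only workaround I've found so far (a sketch, assuming `chat_engine` and `prompt` are set up as above; I don't know if this is the intended approach) is to set an event loop on the script thread myself before calling `stream_chat()`, and to iterate `response_gen` as in the LlamaIndex example:

```python
import asyncio

import streamlit as st

# Give Streamlit's script thread an event loop so that the
# asyncio.Queue inside StreamingAgentChatResponse can be created.
try:
    asyncio.get_event_loop()
except RuntimeError:
    asyncio.set_event_loop(asyncio.new_event_loop())

if st.session_state.messages[-1]["role"] != "assistant":
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""
        streaming_response = st.session_state.chat_engine.stream_chat(prompt)
        for token in streaming_response.response_gen:
            full_response += token  # accumulate tokens as they stream in
            message_placeholder.markdown(full_response)
    st.session_state.messages.append({"role": "assistant", "content": full_response})
```

This gets past the RuntimeError in my testing, but I'd still like to see an official example of wiring `stream_chat()` into a Streamlit chatbot.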