tomatefarcie123
I have the same problem with that callback and others. Would love to hear how to solve this.
Not sure, but I think it happens when the LLM is called. It looks like the Handler object is being passed to the OpenAI API for some reason.
Sorry if I'm hogging @mrcaipeng's ticket. In my case it appears in the debug console of a Flask app, where I'm trying to stream tokens to a web page.
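For context, the queue-based pattern I'm attempting looks roughly like this (a minimal sketch; `QueueCallbackHandler`, the `/stream` route, and the prompt are placeholder names I made up for the example, not from my actual app):

```
import queue
import threading

from flask import Flask, Response
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI

app = Flask(__name__)

class QueueCallbackHandler(BaseCallbackHandler):
    """Push each streamed token onto a queue that the HTTP response drains."""

    def __init__(self, token_queue: queue.Queue):
        self.token_queue = token_queue

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.token_queue.put(token)

    def on_llm_end(self, response, **kwargs) -> None:
        self.token_queue.put(None)  # sentinel: generation finished

@app.route("/stream")
def stream():
    token_queue: queue.Queue = queue.Queue()
    handler = QueueCallbackHandler(token_queue)
    llm = ChatOpenAI(streaming=True, callbacks=[handler], temperature=0)

    # Run the LLM call in a background thread so tokens can be streamed
    # to the client while the model is still generating.
    threading.Thread(target=llm.predict, args=("Tell me a joke",)).start()

    def generate():
        while True:
            token = token_queue.get()
            if token is None:
                break
            yield token

    return Response(generate(), mimetype="text/plain")
```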
Did you update to the latest version of langchain?
I placed my (custom) callback handler in the llm declaration, upgraded langchain, and that finally got rid of the problem:

```
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(
    openai_api_key=env_variables["OPENAI_API_KEY"],
    streaming=True,
    temperature=0,
    model_name='gpt-3.5-turbo',
    max_tokens=256,
    callbacks=[handler],
)
```
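For reference, `handler` above is a custom callback handler along these lines (a minimal sketch; langchain also ships a built-in `StreamingStdOutCallbackHandler` in `langchain.callbacks.streaming_stdout` that does the same thing):

```
from langchain.callbacks.base import BaseCallbackHandler

class StreamingStdOutHandler(BaseCallbackHandler):
    """Custom handler: print each token as soon as the model emits it."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="", flush=True)
```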
Hello @jpdus, thanks for the examples above. Have you tried running this asynchronously? For me it just hangs. I'm trying to run it with: `results = await qa_chain.acall(inputs=data)` I...
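For reference, the non-blocking pattern I'm trying to reach looks roughly like this (a minimal sketch using `AsyncIteratorCallbackHandler`; the prompt is illustrative, and `apredict` stands in for the chain call in my snippet above):

```
import asyncio

from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chat_models import ChatOpenAI

async def main():
    handler = AsyncIteratorCallbackHandler()
    llm = ChatOpenAI(streaming=True, callbacks=[handler], temperature=0)

    # Start the generation as a background task instead of awaiting it
    # directly; awaiting it first is what makes the coroutine appear to
    # hang, since nothing is draining the handler's token iterator yet.
    task = asyncio.create_task(llm.apredict("Tell me a joke"))
    async for token in handler.aiter():
        print(token, end="", flush=True)
    await task

asyncio.run(main())
```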
Thanks for your reply. The streaming worked in sync mode with the `RetrievalQAWithSourcesChain`, but, as you suggest, I might turn it off for now until I get the rest of...