Martin Votruba
I do not completely understand what you are trying to solve, so I cannot give you an answer, but check out [Steercode](https://steercode.com/), which can answer your questions about LangChain...
**🤖 AI-generated response by [Steercode](https://steercode.com/) - chat with Langchain codebase** _Disclaimer: SteerCode Chat may provide inaccurate information about the Langchain codebase._ To set up a streaming response (Server-Sent Events, or...
@KevinZhang19870314 Have you tried [Steercode](https://steercode.com/) to answer your question? :) I have pasted your question and it suggests changing the implementation of `_get_ask_answer` to async. Here is the code...
You probably need to make the `askAI` method async, change `run` to `arun`, and await it. You will also need to await `askAI` inside the for loop...
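A minimal sketch of that pattern, using a stub in place of a real LangChain chain (`FakeChain` and `askAI` are hypothetical names; a real chain exposes an awaitable `arun` alongside the synchronous `run`):

```python
import asyncio

class FakeChain:
    """Stand-in for a LangChain chain with an async `arun` method."""

    async def arun(self, question: str) -> str:
        await asyncio.sleep(0)  # simulate non-blocking I/O
        return f"answer to: {question}"

# The change suggested above: make askAI async and await `arun`
# instead of calling the blocking `run`.
async def askAI(chain: FakeChain, question: str) -> str:
    return await chain.arun(question)

async def main() -> list[str]:
    chain = FakeChain()
    answers = []
    for q in ["q1", "q2"]:
        # each call must itself be awaited inside the loop
        answers.append(await askAI(chain, q))
    return answers

results = asyncio.run(main())
print(results)
```

The same structure applies with a real chain: only the `FakeChain` stub is replaced.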
**🤖 AI-generated response by [Steercode](https://steercode.com/)** If you want to create an overall summary of...
**🤖 AI-generated response by [Steercode](https://steercode.com/)** To add a custom prompt to ConversationalRetrievalChain, you can...
**🤖 AI-generated response by [Steercode](https://steercode.com/)** To remove the escape sequences for coloring in the...
I think [Steercode](https://steercode.com/) actually provides the correct answer in this case. Here is the answer it has generated: ---- It seems that the issue you're facing is that the LLM...
You are right: you have to specify `k` when constructing the retriever, so not via `chain_type_kwargs`.
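A stub sketch of the idea. `FakeVectorStore` and `FakeRetriever` are stand-ins so the snippet runs on its own; with a real LangChain vector store the equivalent call is `retriever = vectorstore.as_retriever(search_kwargs={"k": 2})`:

```python
class FakeRetriever:
    def __init__(self, docs, k):
        self.docs, self.k = docs, k

    def get_relevant_documents(self, query):
        # k limits how many documents come back per query
        return self.docs[: self.k]

class FakeVectorStore:
    def __init__(self, docs):
        self.docs = docs

    def as_retriever(self, search_kwargs=None):
        # k is read here, at retriever construction time;
        # passing it later via chain_type_kwargs has no effect
        k = (search_kwargs or {}).get("k", 4)
        return FakeRetriever(self.docs, k)

store = FakeVectorStore(["d1", "d2", "d3", "d4", "d5"])
retriever = store.as_retriever(search_kwargs={"k": 2})
docs = retriever.get_relevant_documents("anything")
print(docs)
```

The point is where `k` lives: it configures the retriever itself, before the chain is built around it.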
Yes, I am one of the creators :) I am glad you like it!