chat-langchain
ERROR:ChatVectorDBChain does not support async
langchain==0.0.132
Python 3.10.9
pip 23.0.1 from /opt/homebrew/lib/python3.9/site-packages/pip (python 3.9)
Similar here; this is what I got, but I am pretty clueless about how to handle it.
langchain/chains/conversational_retrieval/base.py:191: UserWarning: ChatVectorDBChain is deprecated - please use from langchain.chains import ConversationalRetrievalChain
warnings.warn(
INFO: connection open
ERROR:root:ChatVectorDBChain does not support async
langchain==0.0.133
Seems to duplicate https://github.com/hwchase17/chat-langchain/issues/37.
Let us know if someone has solved this error.
The other issue, https://github.com/hwchase17/chat-langchain/issues/37, has a workaround: go back to an earlier version.
Previous versions don't have LlamaCpp support :'(
I found this solution: in query_data.py I edited the get_chain function to use ConversationalRetrievalChain instead of ChatVectorDBChain. It seems to work with langchain==0.0.139:
# Imports as in query_data.py around langchain==0.0.139 (the callback
# module paths moved in later releases).
from langchain.callbacks.base import AsyncCallbackManager
from langchain.callbacks.tracers import LangChainTracer
from langchain.chains import ConversationalRetrievalChain  # <== instead of ChatVectorDBChain
from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.llm import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.vectorstores.base import VectorStore


def get_chain(
    vectorstore: VectorStore, question_handler, stream_handler, tracing: bool = False
) -> ConversationalRetrievalChain:  # <== CHANGE THE TYPE
    """Create a ConversationalRetrievalChain for question/answering."""
    # Construct a ConversationalRetrievalChain with a streaming llm for combine docs
    # and a separate, non-streaming llm for question generation
    manager = AsyncCallbackManager([])
    question_manager = AsyncCallbackManager([question_handler])
    stream_manager = AsyncCallbackManager([stream_handler])
    if tracing:
        tracer = LangChainTracer()
        tracer.load_default_session()
        manager.add_handler(tracer)
        question_manager.add_handler(tracer)
        stream_manager.add_handler(tracer)

    question_gen_llm = OpenAI(
        temperature=0,
        verbose=True,
        callback_manager=question_manager,
    )
    streaming_llm = OpenAI(
        streaming=True,
        callback_manager=stream_manager,
        verbose=True,
        temperature=0,
    )

    question_generator = LLMChain(
        llm=question_gen_llm, prompt=CONDENSE_QUESTION_PROMPT, callback_manager=manager
    )
    doc_chain = load_qa_chain(
        streaming_llm, chain_type="stuff", prompt=QA_PROMPT, callback_manager=manager
    )

    qa = ConversationalRetrievalChain(  # <== CHANGE: ConversationalRetrievalChain instead of ChatVectorDBChain
        # vectorstore=vectorstore,  # <== REMOVE THIS
        retriever=vectorstore.as_retriever(),  # <== ADD THIS
        combine_docs_chain=doc_chain,
        question_generator=question_generator,
        callback_manager=manager,
    )
    return qa
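For reference, this chain is driven through its async interface by the websocket endpoint in main.py, which is where the NotImplementedError surfaced. A rough sketch of that call site (the handler classes come from the repo's callback.py; the function wrapper here is illustrative, not verbatim repo code):

from callback import QuestionGenCallbackHandler, StreamingLLMCallbackHandler

async def answer(websocket, vectorstore, question: str, chat_history: list):
    # Streaming handlers push generated tokens back over the websocket.
    question_handler = QuestionGenCallbackHandler(websocket)
    stream_handler = StreamingLLMCallbackHandler(websocket)
    qa_chain = get_chain(vectorstore, question_handler, stream_handler)
    # ConversationalRetrievalChain implements the async path, so acall
    # succeeds where ChatVectorDBChain raised NotImplementedError.
    result = await qa_chain.acall(
        {"question": question, "chat_history": chat_history}
    )
    chat_history.append((question, result["answer"]))
    return result["answer"]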
Works fine for me. Thanks!
@efraintorlo I turned this into a PR. Hopefully, it will be merged soon.
@pve
Hi, you are here as well! Small world :)
What are you working on? E-mail me!
Thanks @pors
Hit the same error with langchain==0.0.163 and Python 3.11: NotImplementedError('ChatVectorDBChain does not support async') found in the trace.
Thanks @efraintorlo and @pors, it works for me. I fixed it and my web page works well. By the way, I think it is just because of the newer version of langchain; maybe it is not a bug.
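If you want to confirm which langchain release is installed before deciding whether to apply the patch, a quick stdlib check (nothing repo-specific):

from importlib.metadata import version

# The thread above reports the error on everything from 0.0.132 to
# 0.0.163, so check what you actually have installed.
print(version("langchain"))  # e.g. "0.0.139"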
Awesome, tried it and it works.
from langchain.chains import ConversationalRetrievalChain
Amazing! It works~
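If you don't need the two-LLM streaming setup from the fix above, ConversationalRetrievalChain also provides a from_llm convenience constructor. A minimal non-streaming sketch, assuming vectorstore is an already-populated VectorStore (this is not the repo's configuration):

from langchain.chains import ConversationalRetrievalChain
from langchain.llms import OpenAI

# Default condense-question + "stuff" QA setup; no custom prompts or
# streaming callbacks.
qa = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
)
result = qa({"question": "What does this repo do?", "chat_history": []})
print(result["answer"])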
It solved my problem!