
ERROR:ChatVectorDBChain does not support async

Open GZ315200 opened this issue 2 years ago • 15 comments

langchain==0.0.132
Python 3.10.9
pip 23.0.1 from /opt/homebrew/lib/python3.9/site-packages/pip (python 3.9)

GZ315200 avatar Apr 06 '23 09:04 GZ315200

Similar, here is what I got. But I am pretty clueless on how to handle this.

langchain/chains/conversational_retrieval/base.py:191: UserWarning: ChatVectorDBChain is deprecated - please use from langchain.chains import ConversationalRetrievalChain
  warnings.warn(
INFO: connection open
ERROR:root:ChatVectorDBChain does not support async

langchain==0.0.133

pve avatar Apr 06 '23 19:04 pve

Seems to be a duplicate of https://github.com/hwchase17/chat-langchain/issues/37

pve avatar Apr 06 '23 20:04 pve

Let us know if someone has solved this error.

narendraadloid avatar Apr 10 '23 18:04 narendraadloid

The other issue, https://github.com/hwchase17/chat-langchain/issues/37, has a workaround: go back to an earlier version.

pve avatar Apr 10 '23 18:04 pve

The other issue #37 has a workaround: go back to an earlier version.

previous versions don't have LlamaCpp support :'(

Qualzz avatar Apr 11 '23 18:04 Qualzz

I found this solution: in query_data.py I edited the get_chain function to use ConversationalRetrievalChain instead of ChatVectorDBChain. It seems to work with LangChain==0.0.139:

# Imports as in the repo's query_data.py, with ChatVectorDBChain
# swapped for ConversationalRetrievalChain
from langchain.callbacks.base import AsyncCallbackManager
from langchain.callbacks.tracers import LangChainTracer
from langchain.chains import ConversationalRetrievalChain
from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.llm import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.vectorstores.base import VectorStore


def get_chain(
    vectorstore: VectorStore, question_handler, stream_handler, tracing: bool = False
) -> ConversationalRetrievalChain:  # <== CHANGE THE RETURN TYPE
    """Create a ConversationalRetrievalChain for question/answering."""
    # Construct a ConversationalRetrievalChain with a streaming llm for combine docs
    # and a separate, non-streaming llm for question generation
    manager = AsyncCallbackManager([])
    question_manager = AsyncCallbackManager([question_handler])
    stream_manager = AsyncCallbackManager([stream_handler])
    if tracing:
        tracer = LangChainTracer()
        tracer.load_default_session()
        manager.add_handler(tracer)
        question_manager.add_handler(tracer)
        stream_manager.add_handler(tracer)

    question_gen_llm = OpenAI(
        temperature=0,
        verbose=True,
        callback_manager=question_manager,
    )
    streaming_llm = OpenAI(
        streaming=True,
        callback_manager=stream_manager,
        verbose=True,
        temperature=0,
    )

    question_generator = LLMChain(
        llm=question_gen_llm, prompt=CONDENSE_QUESTION_PROMPT, callback_manager=manager
    )
    doc_chain = load_qa_chain(
        streaming_llm, chain_type="stuff", prompt=QA_PROMPT, callback_manager=manager
    )

    qa = ConversationalRetrievalChain(         # <==CHANGE  ConversationalRetrievalChain instead of ChatVectorDBChain
        # vectorstore=vectorstore,             # <== REMOVE THIS
        retriever=vectorstore.as_retriever(),  # <== ADD THIS
        combine_docs_chain=doc_chain,
        question_generator=question_generator,
        callback_manager=manager,
    )
    return qa
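For context on why the swap fixes the error: ChatVectorDBChain never shipped an async code path, so calling it from an async handler raises NotImplementedError, while ConversationalRetrievalChain implements one. A minimal, self-contained sketch of that pattern (these toy classes are illustrative stand-ins, not the real LangChain source):

```python
import asyncio


class DeprecatedChain:
    """Like ChatVectorDBChain: the sync path works, the async path was never implemented."""

    def run(self, question: str) -> str:
        return f"answer to: {question}"

    async def arun(self, question: str) -> str:
        raise NotImplementedError("ChatVectorDBChain does not support async")


class RetrievalChain(DeprecatedChain):
    """Like ConversationalRetrievalChain: provides a real async implementation."""

    async def arun(self, question: str) -> str:
        return self.run(question)


async def main() -> list:
    results = []
    try:
        await DeprecatedChain().arun("hi")
    except NotImplementedError as exc:
        results.append(str(exc))  # the exact error reported in this issue
    results.append(await RetrievalChain().arun("hi"))
    return results


print(asyncio.run(main()))
```

Running it shows the old chain raising the error from this issue while the subclass with a real async path succeeds.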

efraintorlo avatar Apr 14 '23 09:04 efraintorlo

Works fine for me. Thanks!

giraudremi92 avatar Apr 20 '23 18:04 giraudremi92

@efraintorlo I turned this into a PR. Hopefully, it will be merged soon.

pors avatar Apr 28 '23 08:04 pors

The other issue #37 has a workaround: go back to an earlier version.

@pve

Hi, you are here as well! Small world :)

What are you working on? E-mail me!

pors avatar Apr 28 '23 08:04 pors

@efraintorlo I turned this into a PR. Hopefully, it will be merged soon.

Thanks @pors

efraintorlo avatar Apr 30 '23 19:04 efraintorlo

Hit the same error with langchain==0.0.163, Python 3.11. Found in the trace: NotImplementedError('ChatVectorDBChain does not support async')

jqian2-sc avatar May 10 '23 17:05 jqian2-sc

Thanks @efraintorlo and @pors, it works for me. I fixed it and my web page works well. By the way, I think it is just because of the newer version of langchain; maybe it is not a bug.

sk77github avatar May 29 '23 14:05 sk77github

I found this solution: in query_data.py I edited the get_chain function to use ConversationalRetrievalChain instead of ChatVectorDBChain (full code above).

Awesome, tried it and it works.

zxb167 avatar Jun 15 '23 12:06 zxb167

from langchain.chains import ConversationalRetrievalChain

Amazing! It works~
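The import above goes with the other half of the fix: newer chains no longer accept `vectorstore=` directly but take a `retriever=`, which any vector store produces via `as_retriever()`. A toy sketch of that interface shift (all classes here are illustrative stand-ins, not LangChain code):

```python
class ToyDocument:
    def __init__(self, text: str):
        self.text = text


class ToyRetriever:
    """Minimal retriever: the chain only needs get_relevant_documents()."""

    def __init__(self, docs):
        self.docs = docs

    def get_relevant_documents(self, query: str):
        # Trivial "search": keep documents sharing at least one word with the query.
        words = set(query.lower().split())
        return [d for d in self.docs if words & set(d.text.lower().split())]


class ToyVectorStore:
    def __init__(self, texts):
        self.docs = [ToyDocument(t) for t in texts]

    def as_retriever(self) -> ToyRetriever:
        # The shape of the real API change: the chain no longer takes the
        # store itself, it takes whatever as_retriever() returns.
        return ToyRetriever(self.docs)


store = ToyVectorStore(["async chains work", "sync only chain"])
hits = store.as_retriever().get_relevant_documents("does async work")
print([d.text for d in hits])
```

Decoupling retrieval behind a small interface like this is what lets one chain class work with any vector store backend.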

leonardofang avatar Jun 22 '23 07:06 leonardofang

I found this solution: in query_data.py I edited the get_chain function to use ConversationalRetrievalChain instead of ChatVectorDBChain (full code above).

It solved my problem!

leonardofang avatar Jun 22 '23 07:06 leonardofang