langchain
Error `can only concatenate str (not "tuple") to str` when using `ConversationBufferWindowMemory`
I'm facing a weird issue with `ConversationBufferWindowMemory`.
Running memory.load_memory_variables({})
prints:
{'chat_history': [HumanMessage(content='Hi my name is Ismail', additional_kwargs={}), AIMessage(content='Hello Ismail! How can I assist you today?', additional_kwargs={})]}
The error I get after sending a second message to the chain is:
> Entering new ConversationalRetrievalChain chain...
[2023-04-18 10:34:52,512] ERROR in app: Exception on /api/v1/chat [POST]
Traceback (most recent call last):
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 2528, in wsgi_app
response = self.full_dispatch_request()
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 1825, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/Users/homanp/Projects/ad-gpt/app.py", line 46, in chat
result = chain({"question": message, "chat_history": []})
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 71, in _call
chat_history_str = get_chat_history(inputs["chat_history"])
File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 25, in _get_chat_history
human = "Human: " + human_s
TypeError: can only concatenate str (not "tuple") to str
Current implementation:
memory = ConversationBufferWindowMemory(memory_key="chat_history", k=2, return_messages=True)
chain = ConversationalRetrievalChain.from_llm(
    model,
    memory=memory,
    verbose=True,
    retriever=retriever,
    qa_prompt=QA_PROMPT,
    condense_question_prompt=CONDENSE_QUESTION_PROMPT,
)
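The failure in the traceback can be reproduced without LangChain at all: the chain's default history formatter concatenates `"Human: "` with each history entry, which only works when the entry is a plain string. A minimal sketch of the problem (the `history` data here is illustrative, mirroring a `(human, ai)` tuple-based chat history):

```python
# Each history entry is expected to be a plain string, but a
# (human, ai) tuple arrives instead, so the concatenation fails.
history = [("Hi my name is Ismail", "Hello Ismail! How can I assist you today?")]

for entry in history:
    try:
        line = "Human: " + entry  # entry is a tuple, not a str
    except TypeError as exc:
        print(exc)  # can only concatenate str (not "tuple") to str
```

This is exactly the `TypeError` raised inside `_get_chat_history` above.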
Hi, please assign this to me, I will solve it.
I don't have privileges to assign issues.
Ok, thanks, homanp.
So `_get_chat_history` wants a string, but the memory holds tuples. This should be handled without the user having to pass their own `get_chat_history`.
I managed to fix this using:

def get_chat_history(inputs) -> str:
    res = []
    for human, ai in inputs:
        res.append(f"Human:{human}\nAI:{ai}")
    return "\n".join(res)
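For anyone unsure how to apply the workaround: define the formatter and pass it to the chain via the `get_chat_history` keyword that `ConversationalRetrievalChain` accepts. The sketch below runs the formatter on a sample tuple-based history; the chain wiring is shown as a comment because it needs a live `model` and `retriever` (both placeholders here):

```python
def get_chat_history(inputs) -> str:
    # Format each (human, ai) turn into a plain string,
    # which is what the chain's prompt expects.
    res = []
    for human, ai in inputs:
        res.append(f"Human:{human}\nAI:{ai}")
    return "\n".join(res)

history = [("Hi my name is Ismail", "Hello Ismail! How can I assist you today?")]
print(get_chat_history(history))
# Human:Hi my name is Ismail
# AI:Hello Ismail! How can I assist you today?

# Hypothetical wiring (model/retriever/memory defined as in the issue):
# chain = ConversationalRetrievalChain.from_llm(
#     model,
#     memory=memory,
#     retriever=retriever,
#     get_chat_history=get_chat_history,
# )
```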
Got it, thanks! Looking forward to learning more by doing, Ismail Pelaseyed.
Is this resolved? I'm facing the same thing and I didn't know how to use the workaround. @homanp
https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html?highlight=get_chat_history#get-chat-history-function
@tarek-kerbedj
When will this be fixed? I am facing the same issue
Me too. Looking forward to the fix or any guidance.
Hi, @homanp! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, the issue you reported is related to an error that occurs when using ConversationBufferWindowMemory
and trying to concatenate a tuple to a string. You mentioned that you were able to fix this by modifying the get_chat_history
function. Other users like "kanukolluGVT", "tarek-kerbedj", "ambikaiyer29", and "waiyong" are also facing the same issue and are looking for a fix or guidance.
The good news is that you have already provided a solution by modifying the get_chat_history
function. If you believe this issue is still relevant to the latest version of the LangChain repository, please let the LangChain team know by commenting on this issue. Otherwise, feel free to close the issue yourself. If no further action is taken, the issue will be automatically closed in 7 days.
Thank you for your contribution and for helping us improve LangChain! Let me know if you have any questions or need further assistance.