my3vm

Results: 9 comments by my3vm

I am facing a similar situation with off-topic conversations, as described in this [issue](https://github.com/hwchase17/langchain/issues/3963). Printing the similarity scores from `similarity_search` revealed that the similarity always ranges between...

Is this merged into LangChain 0.0.170 yet? I am still getting errors for **search_type='similarity_score_threshold'**.
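For context, a score-threshold search just filters retrieved (document, score) pairs against a cutoff. Here is a minimal sketch of the idea in plain Python, with hypothetical documents and scores (the actual LangChain behavior depends on the vector store's score convention, since some stores return distances where *lower* means more similar):

```python
def similarity_score_threshold(results, threshold):
    """Keep only (doc, score) pairs whose similarity meets the threshold.

    `results` is a list of (document, score) tuples, where a higher
    score is assumed to mean "more similar" (check your vector store's
    convention before relying on this).
    """
    return [(doc, score) for doc, score in results if score >= threshold]

# Hypothetical search results: (document, similarity score)
results = [
    ("refund policy", 0.91),
    ("shipping times", 0.78),
    ("off-topic chat", 0.42),
]
print(similarity_score_threshold(results, 0.75))
# → [('refund policy', 0.91), ('shipping times', 0.78)]
```

This is why score ranges matter for the off-topic problem above: if every query scores within a narrow band, no threshold can cleanly separate relevant from irrelevant documents.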

Setting the device explicitly solved my issues too!

```python
import torch

def get_device_map() -> str:
    return 'cuda' if torch.cuda.is_available() else 'cpu'

device = get_device_map()
```

I had a similar issue too; thanks for this response!

I guess one could just use the default **QA_PROMPT** in case one has no requirements for prompt customisation.

```python
from langchain.chains.conversational_retrieval.prompts import QA_PROMPT

memory = ConversationSummaryMemory(
    llm=OpenAI(model_name='gpt-3.5-turbo'),
    memory_key='chat_history',
    return_messages=True,
    ...
```

> Response: `I'm sorry, you did not ask me a question. Is there anything I can help you with?`

Have you tried providing this **get_chat_history**?
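For reference, `get_chat_history` is a callable that serialises past turns into a single string before they are fed to the question-condensing prompt. A minimal sketch, assuming each turn is a hypothetical `(human_message, ai_message)` tuple (the actual shape depends on how your memory stores messages):

```python
def get_chat_history(chat_history):
    """Format a list of (human, ai) turn tuples as a transcript string.

    Note: the tuple shape is an assumption for illustration; adapt it
    to whatever your memory object actually returns.
    """
    lines = []
    for human, ai in chat_history:
        lines.append(f"Human: {human}")
        lines.append(f"Assistant: {ai}")
    return "\n".join(lines)

history = [("What is LangChain?", "A framework for building LLM apps.")]
print(get_chat_history(history))
```

Passing a formatter like this can help the condensing step see the earlier question, so follow-ups are not treated as "you did not ask me a question".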

What I figured is that the **qa_prompt** key to **ConversationalRetrievalChain** was working with older versions of LangChain (0.0.155 or earlier). However, it has been removed in the later...

I have been facing the same challenge! Eagerly waiting to benefit from this PR. Hoping it will soon be merged into master.

I am interested in this feature. Is there a specific reason this PR is not being pursued?