Langchain-Chatchat
How to construct the prompt and history so that knowledge-base retrieval answers are returned as multiple-choice options
Problem Description
Describe the problem in a clear and concise manner.
Steps to Reproduce
- Run '...'
- Click '...'
- Scroll to '...'
- Problem occurs
Expected Result
Describe the expected result.
Actual Result
Describe the actual result.
Environment Information
- langchain-ChatGLM version/commit number (e.g., v2.0.1 or commit 123456):
- Docker deployment used (yes/no): no
- Model used (ChatGLM2-6B / Qwen-7B, etc.): Qwen7B-Chat
- Embedding model used (moka-ai/m3e-base, etc.): bge-m3
- Vector store used (faiss / milvus / pg_vector, etc.): faiss
- Operating system and version:
- Python version: 3.11
- Other relevant environment information:
Additional Information
When generating a conversation with knowledge-base retrieval, I want the model to produce single-choice answers, outputting only the answer with no explanation.
I edited the conversation history to add two standard single-choice answers,
and I am using a custom prompt template,
but some responses still fail to output an option correctly. In most of these cases no knowledge-base content is found; in a few, content is found but no option is output. What can I do to increase the proportion of answers in the standard format?
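For reference, the history I pass in looks roughly like the sketch below. The role/content message schema is what I understand the knowledge-base chat API to accept in v0.2.x and may differ in other versions; the two exchanges are standard single-choice examples whose answers contain only the option letter:

```python
# Hypothetical few-shot history: two worked single-choice exchanges whose
# answers contain only the option letter. The {"role": ..., "content": ...}
# schema is an assumption based on the v0.2.x knowledge-base chat API.
few_shot_history = [
    {"role": "user",
     "content": "Which of the following is an operating system? A. Excel  B. Linux  C. Photoshop  D. Chrome"},
    {"role": "assistant", "content": "B"},
    {"role": "user",
     "content": "Which port does HTTP use by default? A. 21  B. 25  C. 80  D. 443"},
    {"role": "assistant", "content": "C"},
]
```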
To address the issues you're facing with generating single-choice answers from your knowledge base and ensuring responses are correctly formatted, consider the following steps:
- Adjust `top_k` and `score_threshold` values: Experiment with these parameters to improve the system's ability to find relevant content. A higher `top_k` increases the number of documents considered, potentially improving relevance at the cost of performance. Adjusting the `score_threshold` helps balance relevance against recall of the retrieved documents.
- Customize the prompt template: Modify your prompt template to explicitly instruct the model to answer in single-choice format, i.e. to return only the chosen option with no additional explanation (see the sketch after this list).
- Implement conditional logic in the template: Add a conditional instruction so that when the retrieved content does not support a single-choice answer, the model states that the question cannot be answered from the available information.
- Refine the template for single-choice questions: Make sure the template encourages concise, direct answers by clearly specifying how the question and its options are laid out.
- Adjust the retrieval logic: Update the logic that selects the prompt template so that the new single-choice template is used when appropriate, ensuring the model consistently returns answers in the desired format.
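As a rough illustration, a dedicated single-choice template could look like the sketch below. The `PROMPT_TEMPLATES` layout and the `{{ context }}` / `{{ question }}` placeholders are assumptions based on `configs/prompt_config.py.example` from v0.2.x; verify them against your own copy before adding the entry.

```python
# configs/prompt_config.py — sketch of a "single_choice" entry for
# knowledge-base chat. Dict layout and Jinja2-style placeholders are assumed
# from configs/prompt_config.py.example (v0.2.x); adjust to your version.
PROMPT_TEMPLATES = {
    "knowledge_base_chat": {
        # ... keep the existing "default" and other entries ...
        "single_choice": (
            "Answer the single-choice question strictly from the known information below. "
            "Reply with the letter of the correct option only (e.g. \"B\"), with no explanation. "
            "If the known information does not contain the answer, reply exactly: "
            "\"The question cannot be answered from the known information.\"\n"
            "Known information: {{ context }}\n"
            "Question: {{ question }}\n"
        ),
    },
}
```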
By following these steps, you should be able to improve the consistency and relevance of the single-choice answers generated by your system.
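To exercise these settings end to end, a request to the knowledge-base chat endpoint might look like the following sketch. The endpoint path, the default port 7861, and the `top_k` / `score_threshold` / `prompt_name` / `history` field names are assumptions based on the v0.2.x API server (`server/chat/knowledge_base_chat.py`); confirm them against your deployment.

```python
# Hypothetical request against the knowledge-base chat endpoint; field names,
# path, and port are assumptions based on the v0.2.x API and may need adjusting.
import requests

resp = requests.post(
    "http://127.0.0.1:7861/chat/knowledge_base_chat",
    json={
        "query": "Which of the following statements is correct? A. ... B. ... C. ... D. ...",
        "knowledge_base_name": "samples",   # replace with your knowledge base
        "top_k": 5,                         # consider more retrieved documents
        "score_threshold": 1.0,             # adjust to trade relevance for recall
        "history": [],                      # prepend few-shot single-choice exchanges here
        "stream": False,
        "prompt_name": "single_choice",     # select the custom template above
    },
)
print(resp.text)
```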
Sources
- configs/prompt_config.py.example
- server/agent/tools/search_knowledgebase_complex.py
- server/chat/knowledge_base_chat.py
- server/chat/utils.py
- server/model_workers/gemini.py
- server/knowledge_base/kb_service/base.py
- server/knowledge_base/kb_service/zilliz_kb_service.py
- server/knowledge_base/kb_doc_api.py
- knowledge_base/samples/content/test_files/langchain-ChatGLM_open.jsonl
As it stands, history has very little effect in knowledge-base chat, so your only option is to specify in the prompt template that answers must strictly follow the knowledge base.