private-gpt
Answer not in documents
How can I make it stop answering questions whose answers don't exist in the source documents? I mean, can we have it return a template such as 'This question cannot be answered based on the information provided in the documents'? It seems that it uses its base knowledge to answer even things that are not present in the source documents!
I don't think so. I'm pretty sure the model is the one that controls this, so it would depend on which model you're using.
I don't think you can do that. The model looks at the embedded docs, but then also uses its own knowledge. So if you ask "What is the capital of Italy", it'll answer that (assuming it's not in your docs). Keep in mind that the result from the docs goes to the language model as input/context.
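To make that concrete, here is a simplified sketch of what a "stuff" chain hands to the model (the exact wording depends on the chain's default template; `retrieved_docs` and `question` are stand-ins for illustration):

# Simplified sketch of what a "stuff" chain sends to the LLM.
# `retrieved_docs` and `question` are placeholders, not real data.
retrieved_docs = ["(text of doc chunk 1)", "(text of doc chunk 2)"]
question = "What is the capital of Italy?"
context = "\n\n".join(retrieved_docs)
final_prompt = (
    "Use the following pieces of context to answer the question.\n\n"
    f"{context}\n\n"
    f"Question: {question}\nAnswer:"
)
# The model continues after "Answer:" using BOTH this context and its
# pretraining knowledge -- nothing forces it to stay inside the docs.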
I wonder if there is some way to prompt the model to answer by quoting from the source instead?
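One way to try that (an untested sketch; whether the model actually obeys depends heavily on the model) would be a prompt template that demands verbatim quotes:

from langchain.prompts import PromptTemplate

# Hypothetical template that demands verbatim quotes; smaller local
# models often ignore instructions like this.
QUOTE_PROMPT = PromptTemplate(
    template="""Answer ONLY with a direct quote from the context below.
If no quote answers the question, reply: 'Not found in the documents.'

{context}

Question: {question}
Quote:""",
    input_variables=["context", "question"],
)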
Update: I tried adding the following prompt to the privateGPT.py file. It still answers things based on general knowledge!
from langchain.prompts import PromptTemplate
# Other imports as in privateGPT.py:
from langchain.chains import RetrievalQA
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Chroma
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import GPT4All, LlamaCpp

def main():
    prompt_template = """Answer the following question. If you don't know the answer, just answer: 'Not enough information to answer'.

{context}

Question: {question}
Answer:"""
    PROMPT = PromptTemplate(
        template=prompt_template, input_variables=["context", "question"]
    )
    llama = LlamaCppEmbeddings(model_path=llama_embeddings_model, n_ctx=model_n_ctx)
    db = Chroma(persist_directory=persist_directory, embedding_function=llama, client_settings=CHROMA_SETTINGS)
    retriever = db.as_retriever()
    # Prepare the LLM
    callbacks = [StreamingStdOutCallbackHandler()]
    match model_type:
        case "LlamaCpp":
            llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, callbacks=callbacks, verbose=False)
        case "GPT4All":
            llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', callbacks=callbacks, verbose=False)
        case _:
            print(f"Model {model_type} not supported!")
            exit(1)
    chain_type_kwargs = {"prompt": PROMPT}
    qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents=True, chain_type_kwargs=chain_type_kwargs)
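If prompting alone doesn't work, one workaround (my own sketch, not part of privateGPT) is to gate the answer on retrieval similarity: query Chroma with scores and refuse before the LLM ever runs if nothing in the index is close enough.

# Sketch of a pre-LLM guardrail (an idea, not part of privateGPT):
# query Chroma with scores and refuse when nothing is close enough.
# Chroma returns distances here, so SMALLER means more similar; the
# 1.0 cutoff is a made-up value that needs tuning for your data.
def guarded_answer(query, db, qa, max_distance=1.0):
    hits = db.similarity_search_with_score(query, k=4)
    if not hits or hits[0][1] > max_distance:
        return "This question cannot be answered based on the information provided in the documents."
    return qa(query)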
GPT-3.5 and GPT-4 are good at following such detailed instructions, but I've never seen other models do as well.
One key aspect I've found in our related project https://github.com/h2oai/h2ogpt is to avoid any prompt at all. Just rely upon the instruct tuning entirely.
Otherwise, I've found that the model can use parts of the prompt or system prompt in the answer. That is, the model doesn't really know what is context vs. prompt, and the more creative you get, the worse it can be.
So I recommend doing what we do: just drop the prompt entirely and rely upon the instruct tuning. If, even after being given the context, the model still answers from its pretraining or fine-tuning, then you need a better model, or one that has been fine-tuned for this context-based task.
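In practice that means sending nothing but the retrieved text and the question. A rough sketch, assuming the `retriever` and `llm` objects from the snippet above (the example question is made up; the exact input format depends on the model's instruct tuning):

# Rough sketch of the "no prompt" approach: feed the model only the
# retrieved context and the question, relying on its instruct tuning.
# `retriever` and `llm` come from the earlier snippet; the question
# is a hypothetical example.
question = "What does the contract say about termination?"
docs = retriever.get_relevant_documents(question)
context = "\n\n".join(doc.page_content for doc in docs)
response = llm(f"{context}\n\n{question}")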
Thanks @pseudotensor. So do you mean h2ogpt has the capability to avoid providing answers if the information couldn't be found in the source PDF files?
No, that will be similar to privateGPT. It's just that h2oGPT won't wrongly blend the long prompt with the context.