paper-qa
High accuracy RAG for answering questions from scientific documents with citations
Hi, do you have any guidance or examples for the `Settings` parameter in order to use paper-qa entirely with AWS Bedrock models? I'm able to access Bedrock's LLM...
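For questions like this one, a minimal sketch of an all-Bedrock configuration might look as follows, assuming litellm-style `bedrock/...` model names; the specific model IDs below are placeholders, and the `Settings` field names follow the paper-qa v5 API, which may differ in other versions.

```python
# Sketch: pointing every paper-qa model at AWS Bedrock via litellm-style
# model identifiers. The model IDs below are placeholders -- substitute
# the Bedrock models enabled in your AWS account and region.
from paperqa import Settings, ask

settings = Settings(
    llm="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",          # answer LLM
    summary_llm="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",  # evidence summarizer
    embedding="bedrock/amazon.titan-embed-text-v2:0",                 # embedding model
)

# Credentials are picked up from the usual AWS environment variables or
# profile (e.g. AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION_NAME).
answer = ask("What is the main finding of the paper?", settings=settings)
```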
Hello, I currently use paperQA with Llama3.1:70b served by Ollama. I can see in the debug logs that the 'stream' parameter is set to 'False' even though I pass it as 'True'...
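A sketch of routing paper-qa's calls to a local Ollama server, assuming litellm's `model_list` router format and the default Ollama endpoint; note that paper-qa may still disable streaming internally for some structured-output calls regardless of this config.

```python
# Sketch: directing both the answer LLM and the summary LLM to a local
# Ollama server. The "model_list" format follows litellm's router config;
# api_base is your Ollama endpoint (default port 11434).
from paperqa import Settings

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3.1:70b",
            "litellm_params": {
                "model": "ollama/llama3.1:70b",
                "api_base": "http://localhost:11434",
            },
        }
    ]
}

settings = Settings(
    llm="ollama/llama3.1:70b",
    llm_config=local_llm_config,
    summary_llm="ollama/llama3.1:70b",
    summary_llm_config=local_llm_config,
)
```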
I am trying to understand how documents are loaded and stored, and how embeddings are created, when using the `agent_query` method. This method works well, and I would like to know what's working...
The new complete tool allows the agent to call it quits after trying many search iterations. This is great, but if it finds no relevant evidence, the GenerateAnswer tool won't...
Hello, I currently use paperQA with Llama3.1:70b served by Ollama. With default LLM parameters, answer quality is often poor, especially when I increase `answer.evidence_k` and `answer.answer_max_sources`. With default answer settings,...
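The dotted option names quoted above suggest nested settings; a sketch of tuning them programmatically, assuming the nested `answer` settings object mirrors the `answer.evidence_k` / `answer.answer_max_sources` CLI paths (paper-qa v5 naming):

```python
# Sketch: widening evidence retrieval. Attribute paths mirror the dotted
# CLI options (answer.evidence_k, answer.answer_max_sources).
from paperqa import Settings

settings = Settings(llm="ollama/llama3.1:70b")
settings.answer.evidence_k = 20          # chunks retrieved as candidate evidence
settings.answer.answer_max_sources = 8   # max sources cited in the final answer
```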
Trying to use Claude to index a folder from the command line:

```sh
pqa \
  --llm "anthropic/litellm-claude-3-5-sonnet-20241022" \
  --summary_llm "anthropic/litellm-claude-3-5-sonnet-20241022" \
  index .
```

But although my API key is...
When running the script, I encountered two main issues:
1. The library defaults to using the GPT-4o model despite setting the LLM to gpt-4o-mini, leading to higher costs.
2. I...
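The fallback to GPT-4o usually comes from a model field that was left at its default. A sketch that sets each model slot explicitly, assuming paper-qa v5 field names (including the agent's own `agent_llm` setting, which has a separate default):

```python
# Sketch: setting every model slot explicitly so no component silently
# falls back to its default (GPT-4o) model.
from paperqa import Settings

model = "gpt-4o-mini"
settings = Settings(
    llm=model,          # main answer LLM
    summary_llm=model,  # evidence summarizer
)
settings.agent.agent_llm = model  # the agent loop has its own model setting
```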
Has anyone tried this tool with scientific spec sheets like those provided for manufacturing tools, probes, measurement devices, etc. Is there anything in the way the models are prompted or...
Is there a recipe or script to exactly reproduce the steps used to create the WikiCrow articles for genes (e.g., https://wikicrow.ai/NOVA2)? I've seen the config file https://github.com/Future-House/paper-qa/blob/main/paperqa/configs/wikicrow.json, but how exactly...
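Since the wikicrow.json file lives in paper-qa's packaged configs directory, it may be loadable by name; a sketch under that assumption (the `Settings.from_name` call and the exact question are illustrative, not a confirmed recipe for reproducing WikiCrow articles):

```python
# Sketch: loading the packaged "wikicrow" configuration by name, assuming
# named configs in paperqa/configs can be resolved via Settings.from_name.
from paperqa import Settings, ask

settings = Settings.from_name("wikicrow")
answer = ask("Describe the function of the NOVA2 gene.", settings=settings)
```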
Hello, I'm using Mixtral 8x7b Instruct with PaperQA, but I noticed that the quality of the answers is fairly random (sometimes good, sometimes very bad...). I read that instruct models...