paper-qa
High accuracy RAG for answering questions from scientific documents with citations
Hello, I am hoping someone can help me build an index in which I can specify attributes like a Doc's name, citation, etc. I can naively define a manifest.csv with...
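As a starting point, here is a minimal sketch of setting those attributes directly through `Docs.add`; the `docname` and `citation` keyword arguments are my understanding of the paper-qa API and may differ across versions:

```python
from paperqa import Docs

docs = Docs()
# docname and citation are assumed keyword arguments here; check your
# installed paper-qa version's Docs.add signature before relying on them.
docs.add(
    "papers/smith2023.pdf",
    docname="Smith2023",
    citation="Smith et al. (2023). An Example Title. Some Journal.",
)
```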
Hi all, thank you for your excellent work on PaperQA2. OpenAI provides an option to supply a JSON schema that the LLM's response must conform to (https://openai.com/index/introducing-structured-outputs-in-the-api/). Is it possible to pass...
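For reference, this is what the OpenAI structured-outputs option looks like when called directly (the model name and schema are placeholders); whether PaperQA2 can pass a `response_format` like this through to the underlying client is exactly the open question here:

```python
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # placeholder; any structured-outputs-capable model
    messages=[{"role": "user", "content": "Summarize the finding in one sentence."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "answer",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"summary": {"type": "string"}},
                "required": ["summary"],
                "additionalProperties": False,
            },
        },
    },
)
print(resp.choices[0].message.content)  # JSON conforming to the schema
```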
> The synchronous version just calls the async version in a loop.

Per the docs (https://futurehouse.gitbook.io/futurehouse-cookbook/paperqa#async), this can cause issues if there is already an event loop running, since the...
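A sketch of the usual workaround, assuming the async entry point is `agent_query` (the import path below is from memory and may differ by version): inside an already-running loop, such as a Jupyter notebook, `await` the coroutine directly instead of calling the sync wrapper.

```python
import asyncio

from paperqa import Settings
from paperqa.agents.main import agent_query  # assumed import path

async def main():
    # Awaiting the async API directly avoids nesting event loops,
    # which is what makes the sync wrapper fail under a running loop.
    answer = await agent_query("What is the main finding?", settings=Settings())
    print(answer)

# From a plain script (no loop running yet) this is safe:
asyncio.run(main())
# In Jupyter, where a loop is already running, use `await main()` instead.
```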
Hi, is there a way (a simple tool) to extract BibTeX information from a PDF document using an LLM and an embedding model? I know my question isn't 'directly' related to paperQA...
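Not paper-qa itself, but a minimal sketch of the idea using pypdf plus an LLM call (the model name is a placeholder, and the prompt and first-page heuristic are deliberately naive):

```python
from openai import OpenAI
from pypdf import PdfReader

def pdf_to_bibtex(path: str) -> str:
    # Title, authors, venue, and year usually appear on the first page.
    first_page = PdfReader(path).pages[0].extract_text()
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": "Return a single BibTeX entry for this paper, "
                       "and nothing else:\n\n" + first_page,
        }],
    )
    return resp.choices[0].message.content

print(pdf_to_bibtex("paper.pdf"))
```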
Hi, I am trying to configure LiteLLM to enable caching and streaming.
```
# -*- coding: utf-8 -*-
from paperqa.settings import Settings, AgentSettings, AnswerSettings
from paperqa.agents import ask, build_index
...
```
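For the LiteLLM side in isolation, the documented pattern is to assign a `Cache` instance to `litellm.cache` and pass `stream=True` per call; how (or whether) those settings propagate through paper-qa's `Settings` is the open part of this question:

```python
import litellm
from litellm.caching import Cache  # import path may vary across litellm versions

# In-memory cache by default; LiteLLM also supports Redis and disk backends.
litellm.cache = Cache()

response = litellm.completion(
    model="ollama/dolphin-mixtral:8x7b",
    messages=[{"role": "user", "content": "ping"}],
    stream=True,   # stream tokens as they arrive
    caching=True,  # opt this call into the configured cache
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
```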
Hello, I created a manifest file as follows, before indexing:
```
import csv

def make_manifest_file(path, name='Manifest.csv', delimiter=','):
    with open(name, 'w', newline='') as file:
        writer = csv.writer(file, delimiter=delimiter)
        field = ["title", "DOI", "title"]
        writer.writerow(field)
        for ...
```
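For comparison, a corrected sketch using `csv.DictWriter` and the column names I believe the paper-qa manifest docs describe (`file_location`, `doi`, `title`); note that the header row above writes "title" twice, which looks like a typo:

```python
import csv
from pathlib import Path

def make_manifest_file(path, name="manifest.csv"):
    # Column names assumed from the paper-qa manifest docs; verify
    # against the version you are running.
    fields = ["file_location", "doi", "title"]
    with open(name, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for pdf in Path(path).glob("*.pdf"):
            writer.writerow(
                {"file_location": str(pdf), "doi": "", "title": pdf.stem}
            )
```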
https://github.com/Future-House/paper-qa/discussions/854 asks where the index is stored. It would be nice to support this in the CLI, printing things like:
- Index directory
- Indexes built
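Until the CLI prints this, a hedged sketch of reading it from settings; the `agent.index.index_directory` field is an assumption about the current `Settings` layout and may not match older releases:

```python
from paperqa.settings import Settings

settings = Settings()
# index_directory is assumed to live under agent.index in recent
# versions; older releases may expose it elsewhere.
print(settings.agent.index.index_directory)
```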
Hi, I was wondering if it's possible to add a feature to enable the use of Anthropic citations? Thank you very much in advance!
Hi, I provided a set of documents to perform an agentic query. I know there are elements in the document set that can be used in the answer, but it seems paper-qa...
Hello, I noticed PDF parsing leaves multiple carriage returns and runs of spaces (sometimes huge) inside citations.

> Request to litellm:
> litellm.acompletion(api_base='http://192.168.1.15:11434', model='ollama/dolphin-mixtral:8x7b', api_type='ollama', stream=False, caching=False, cache=True, mirostat=0, mirostat_eta=0.1,...
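A small, generic post-processing step that would scrub these artifacts from extracted citation strings (plain regex, nothing paper-qa-specific):

```python
import re

def normalize_citation(text: str) -> str:
    # Collapse carriage returns, newlines, and runs of spaces produced
    # by PDF extraction into single spaces.
    return re.sub(r"\s+", " ", text).strip()

print(normalize_citation("Smith,\r\n  J.   et al.\r (2023)"))
```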