paper-qa

High-accuracy RAG for answering questions from scientific documents, with citations

248 paper-qa issues

This PR contains the following updates:

| Update | Change |
|---|---|
| lockFileMaintenance | All locks refreshed |

🔧 This Pull Request updates lock files to use the latest...

size:XS

About your locally hosted example

```python
from paperqa import Settings, ask

local_llm_config = dict(
    model_list=[
        dict(
            model_name="my_llm_model",
            litellm_params=dict(
                model="my_llm_model",
                api_base="http://127.0.0.1:8080/",
                api_key="sk-no-key-required",
                temperature=0.1,
                frequency_penalty=1.5,
                max_tokens=512,
            ),
        )
    ]
)
answer = ...
```

question
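For reference, the truncated config in the issue above can be fleshed out into a self-contained sketch. The model name, port, and sampling parameters are placeholder assumptions, and the commented-out `ask` call would require a running OpenAI-compatible server:

```python
# Hypothetical LiteLLM routing config for a locally hosted,
# OpenAI-compatible model server (URL and model name are placeholders).
local_llm_config = {
    "model_list": [
        {
            "model_name": "my_llm_model",
            "litellm_params": {
                "model": "my_llm_model",
                "api_base": "http://127.0.0.1:8080/",
                "api_key": "sk-no-key-required",
                "temperature": 0.1,
                "frequency_penalty": 1.5,
                "max_tokens": 512,
            },
        }
    ]
}

# With a server running, the config would then be handed to paper-qa:
# from paperqa import Settings, ask
# answer = ask(
#     "Your question here",
#     settings=Settings(llm="my_llm_model", llm_config=local_llm_config),
# )

print(local_llm_config["model_list"][0]["litellm_params"]["api_base"])
```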

```python
from paperqa import Settings, ask
import os

os.environ["OPENAI_API_KEY"] = "EMPTY"
local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3",
            "litellm_params": {
                "model": "ollama/llama3",
                "api_base":
            }
        }
    ]
}
answer = ask(...
```

bug
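The snippet in the issue above cuts off at `"api_base":`. A plausible completion is sketched below; the endpoint assumes Ollama's default port (11434) and is a guess about the asker's setup, not part of the original report:

```python
import os

# Dummy key: LiteLLM expects OPENAI_API_KEY to be set even for local backends.
os.environ["OPENAI_API_KEY"] = "EMPTY"

# Hypothetical completion of the truncated config; Ollama's default API
# endpoint is assumed here -- adjust if your server runs elsewhere.
local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3",
            "litellm_params": {
                "model": "ollama/llama3",
                "api_base": "http://localhost:11434",
            },
        }
    ]
}

# With Ollama running, this would be wired into paper-qa roughly as:
# from paperqa import Settings, ask
# answer = ask(
#     "Your question here",
#     settings=Settings(llm="ollama/llama3", llm_config=local_llm_config),
# )
```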

Why does it show "ModuleNotFoundError: No module named 'paperqa.version'"?

bug

I want to be able to pinpoint the exact sentence(s) in a document where data was extracted from so that I can cross-check for accuracy. `answer.context()` gives summarized chunks (after...

question
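One way to cross-check a summarized chunk against its source, as the issue above asks for, is a lexical-overlap heuristic. This is a plain-Python sketch, not paper-qa's API; the `Chunk` type and its fields are hypothetical stand-ins:

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    # Hypothetical stand-in for a retrieved chunk: the raw document text
    # plus the summary the LLM produced from it.
    raw_text: str
    summary: str


def pinpoint_sentences(chunk: Chunk, claim: str) -> list[str]:
    """Return the raw sentence(s) sharing the most words with a claim.

    A crude word-overlap heuristic for locating which source sentence(s)
    a summarized statement was likely extracted from.
    """
    claim_words = set(claim.lower().split())
    sentences = [s.strip() for s in chunk.raw_text.split(".") if s.strip()]
    scored = [(len(claim_words & set(s.lower().split())), s) for s in sentences]
    best = max(score for score, _ in scored)
    return [s for score, s in scored if score == best and best > 0]


chunk = Chunk(
    raw_text=(
        "The assay was run at 37 C. Binding affinity was 42 nM. "
        "Samples were stored overnight."
    ),
    summary="The measured binding affinity was 42 nM.",
)
print(pinpoint_sentences(chunk, chunk.summary))  # ['Binding affinity was 42 nM']
```

A real solution would need sentence-aware splitting and embedding similarity rather than bag-of-words overlap, but the shape of the problem is the same.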

LiteLLM's rate limits weren't suitable for PaperQA, since we wanted rate limits that could span models. This PR adds them, with both an in-memory rate limiter, as...

enhancement
size:XXL
lgtm
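The cross-model behavior described in the PR above can be illustrated with a minimal in-memory token bucket shared across model names. This is a sketch of the idea, not the PR's actual implementation:

```python
import time


class SharedRateLimiter:
    """Minimal in-memory token bucket shared across all model names,
    so a single quota can span multiple models. Illustrative only."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def try_acquire(self, model: str, cost: int = 1) -> bool:
        # Refill based on elapsed time; the bucket is global, so requests
        # for different models all draw from the same pool.
        now = time.monotonic()
        self.tokens = min(
            self.capacity, self.tokens + (now - self.last) * self.refill_per_sec
        )
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False


limiter = SharedRateLimiter(capacity=2, refill_per_sec=0.0)
print(limiter.try_acquire("model-a"))  # True
print(limiter.try_acquire("model-b"))  # True -- same shared pool
print(limiter.try_acquire("model-a"))  # False -- pool exhausted across models
```

Per-model limiters would instead key a separate bucket on `model`; the point of a spanning limiter is precisely that they share one.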

It's good to keep our configs DRY: updated defaults then apply to all configs without any extra work.

enhancement
good first issue
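The DRY-config benefit described above can be sketched as a base-defaults pattern: shared defaults live in one place, and each named config stores only its overrides. The names and keys below are hypothetical, not paper-qa's actual config schema:

```python
# Single source of truth for defaults; changing a value here propagates
# to every named config automatically.
DEFAULTS = {"temperature": 0.1, "max_tokens": 512, "evidence_k": 10}

# Each named config stores only what differs from the defaults.
CONFIG_OVERRIDES = {
    "fast": {"max_tokens": 256},
    "thorough": {"evidence_k": 30},
}


def resolve(name: str) -> dict:
    # Overrides win over defaults via dict unpacking order.
    return {**DEFAULTS, **CONFIG_OVERRIDES.get(name, {})}


print(resolve("fast"))  # {'temperature': 0.1, 'max_tokens': 256, 'evidence_k': 10}
```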

Hi there, I have a question. Due to restrictions in my country, I can't access the Semantic Scholar or Crossref APIs. Is there a way to deactivate...

question

Hello, I'm trying to use paper-qa with a "mixtral-8x7b-instruct-v0.1.Q4_K_M" model on a local network. The LLM executable llamafile is launched with the "-cb -np 4 -a my-llm-model --embedding" options as described in...

bug