
Docs() Conflicts with LiteLLM Models

Open · CGH20171006 opened this issue on Sep 21, 2024

When I want to use a LiteLLM model, the add method of Docs doesn't seem to work. Calling it reports an error prompting me to define OPENAI_API_KEY, even though I am trying to use the LiteLLM API.

from paperqa import Docs
from paperqa.settings import Settings, AgentSettings, AnswerSettings

local_llm_config = dict(
    model_list=[dict(
        model_name="my_llm_model",
        litellm_params=dict(
            model="gpt-3.5-turbo",
            api_base="https://chatapi.midjourney-vip.cn/v1",
            api_key="sk-d55CRFWuSZtCU6Nv7a3505525a9b4b0f820f215b0545504d",
            temperature=0.1,
            frequency_penalty=1.5,
            max_tokens=512,
        ),
    )]
)
settings = Settings(
    llm="my_llm_model",
    llm_config=local_llm_config,
    summary_llm="my_llm_model",
    summary_llm_config=local_llm_config,
    agent=AgentSettings(agent_llm_config=local_llm_config,
                        agent_llm="my_llm_model",
                        agent_type="ToolSelector"),
    answer=AnswerSettings(evidence_k=3)  # optional
)

# raw strings, so the backslashes in the Windows paths are not treated as escapes
doc_paths = (r"E:\Programing\pythonProject\myfile.pdf", r"E:\Programing\pythonProject\myotherfile.pdf")

docs = Docs()

for doc in doc_paths:
    docs.add(doc)  # note: no settings are passed here, so add() uses paper-qa's defaults

answer = docs.query(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=settings,
)
print(answer.formatted_answer)

Error message:

Traceback (most recent call last):
  File "D:\Study\PyCharm 2023.3.3\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode
    coro = func()
           ^^^^^^
  File "<input>", line 1, in <module>
  File "D:\Study\PyCharm 2023.3.3\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Study\PyCharm 2023.3.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "E:\Programing\pythonProject\Paq.py", line 35, in <module>
    docs.add(doc)
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\paperqa\docs.py", line 227, in add
    return get_loop().run_until_complete(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Study\Anaconda\envs\Py311\Lib\asyncio\base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\paperqa\docs.py", line 275, in aadd
    result = await llm_model.run_prompt(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\paperqa\llms.py", line 182, in run_prompt
    return await self._run_chat(
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\paperqa\llms.py", line 235, in _run_chat
    chunk = await self.achat(messages)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\paperqa\llms.py", line 452, in achat
    response = await self.router.acompletion(self.name, messages)
                     ^^^^^^^^^^^
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\paperqa\llms.py", line 421, in router
    self._router = Router(
                   ^^^^^^^
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\litellm\router.py", line 346, in __init__
    self.set_model_list(model_list)
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\litellm\router.py", line 4016, in set_model_list
    self._create_deployment(
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\litellm\router.py", line 3976, in _create_deployment
    deployment = self._add_deployment(deployment=deployment)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\litellm\router.py", line 4105, in _add_deployment
    set_client(
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\litellm\router_utils\client_initalization_utils.py", line 441, in set_client
    _client = openai.AsyncOpenAI(  # type: ignore
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Study\Anaconda\envs\Py311\Lib\site-packages\openai\_client.py", line 319, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

CGH20171006 · Sep 21, 2024

I am having a similar problem: following the docs for an Ollama local model, paper-qa still tries to use my OpenAI key, which comes back with an over-quota reply. Even with local model settings, it still seems to call a remote API.
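
For reference, this is roughly the local setup I am following, adapted from the README's Ollama example (a sketch only; the model name and api_base are placeholders for whatever Ollama serves locally):

from paperqa import Settings, ask

local_llm_config = dict(
    model_list=[dict(
        model_name="ollama/llama3.1",
        litellm_params=dict(
            model="ollama/llama3.1",
            api_base="http://localhost:11434",  # default Ollama endpoint
        ),
    )]
)

answer = ask(
    "Describe the main finding of this paper.",
    settings=Settings(
        llm="ollama/llama3.1",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3.1",
        summary_llm_config=local_llm_config,
    ),
)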

superlou · Sep 21, 2024

Yes, same issue here.

hweiske · Oct 1, 2024

Hey @CGH20171006, @superlou, and @hweiske,

I hope you have already found a solution. Since this is a recurring question, I created this tutorial showing how to change models in PaperQA.

The problem you are encountering is that you are not setting an embedding model, so PaperQA falls back to the default, which is OpenAI's text-embedding-3-small.
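
Concretely, a minimal sketch of the fix applied to the snippet above (agent settings omitted for brevity; the embedding model name is just an example, and any LiteLLM-supported embedding model works, e.g. an Ollama one for a fully local setup):

settings = Settings(
    llm="my_llm_model",
    llm_config=local_llm_config,
    summary_llm="my_llm_model",
    summary_llm_config=local_llm_config,
    # set an embedding model explicitly instead of the OpenAI default
    embedding="ollama/mxbai-embed-large",
)

docs = Docs()
for doc in doc_paths:
    # also pass settings here, so add() does not fall back to the defaults
    docs.add(doc, settings=settings)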

maykcaldas · Mar 13, 2025