
Incomplete documentation for local usage

Open nshoman opened this issue 1 year ago • 8 comments

I'm trying to get paper-qa working locally, but the documentation doesn't clearly show how to do that. I'm working from the documentation for local usage, which calls for using the ask function.

The documentation states that the LLM agent is the default, but "fake" is actually the default. The AgentSettings model specifies as much:

In settings.py:

    agent_type: str = Field(
        default="fake",
        description="Type of agent to use",
    )

The documentation isn't clear about what the fake agent does, but it doesn't respect any settings passed to it. The fake agent performs a search query.

In main.py, no LLM settings are passed besides the model name:

    for search in await litellm_get_search_query(
        question, llm=query.settings.llm, count=3
    ):

Then litellm_get_search_query tries to spin up a LiteLLM model (for the fake agent) with no parameters from the user except the model name. This is already a problem if you're trying to use a local model, because LiteLLM will default to OpenAI as the provider.

In litellm_get_search_query, in helpers.py:

    model = LiteLLMModel(name=llm)
    model.config["model_list"][0]["litellm_params"].update({"temperature": temperature})
    result = await model.run_prompt(
        prompt=search_prompt,
        data={"question": question, "count": count},
        skip_system=True,
    )
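
For reference, the reason a bare model name ends up at OpenAI is that LiteLLM infers the provider from the model string; without a provider prefix or an explicit api_base it builds a client for api.openai.com. A minimal sketch to see this outside paper-qa (the endpoint URL and model name here are placeholders, not anything from the repo):

import litellm

# A bare name like "my-llm-model" is treated as an OpenAI model and sent to api.openai.com:
# litellm.completion(model="my-llm-model", messages=[{"role": "user", "content": "hi"}])

# A local OpenAI-compatible server needs a provider prefix plus an explicit api_base:
response = litellm.completion(
    model="openai/my-llm-model",          # "openai/" tells LiteLLM which client to build
    api_base="http://localhost:8080/v1",  # placeholder local endpoint
    api_key="sk-no-key-required",
    messages=[{"role": "user", "content": "hi"}],
)
print(response)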

Solution: for local usage, the ask function needs AgentSettings defined manually (also note the selection of ToolSelector as the agent type, presumably the expected default):

from paperqa import Settings, ask
from paperqa.settings import AgentSettings, AnswerSettings

answer = ask(
    "<YOUR_QUERY>",
    settings=Settings(
        llm="<YOUR_MODEL_NAME>",
        llm_config=local_llm_config,  # a LiteLLM model_list dict (see examples below)
        summary_llm="<YOUR_MODEL_NAME>",
        summary_llm_config=local_llm_config,
        paper_directory="<YOUR_PAPER_DIR>",
        agent=AgentSettings(
            agent_llm_config=local_llm_config,
            agent_llm="<YOUR_MODEL_NAME>",
            agent_type="ToolSelector",
        ),
        answer=AnswerSettings(evidence_k=3),  # optional
    ),
)

Related: #428, #390

nshoman avatar Sep 18 '24 11:09 nshoman

Hey @nshoman! 👋 I'm here to help you with any questions, bugs, or contributions you have regarding paper-qa. I'm a bot that can assist you while you're waiting for a human maintainer.

I found a similar unsolved discussion that might be relevant to your issue: Local LLM issue, which was last updated on 2024-09-13 [1].

To properly configure the ask function for local usage in paper-qa, you need to ensure that the Settings object passed to the ask function is correctly configured to use a local agent instead of the default fake agent that attempts to use OpenAI as a provider.

Here is an example of how you can configure the Settings object for local usage:

from paperqa.settings import Settings, AgentSettings, AnswerSettings
from paperqa.agents import ask

# Define your local LLM configuration
local_llm_config = {
    "model_list": [
        {
            "model_name": "<YOUR_MODEL_NAME>",
            "litellm_params": {
                "model": "<YOUR_MODEL_NAME>",
                "temperature": 0.7  # Example parameter, adjust as needed
            }
        }
    ]
}

# Create a Settings object with the desired configuration
settings = Settings(
    llm="<YOUR_MODEL_NAME>",
    llm_config=local_llm_config,
    summary_llm="<YOUR_MODEL_NAME>",
    summary_llm_config=local_llm_config,
    paper_directory="<YOUR_PAPER_DIR>",
    agent=AgentSettings(
        agent_llm_config=local_llm_config,
        agent_llm="<YOUR_MODEL_NAME>",
        agent_type="ToolSelector"
    ),
    answer=AnswerSettings(evidence_k=3)  # Optional
)

# Use the ask function with the configured settings
response = ask("<YOUR_QUERY>", settings)
print(response)

Ensure that the agent_type is set to "ToolSelector" or any other appropriate value that corresponds to a local agent configuration. Adjust other settings as necessary to match your local setup [2][3].

This should help you get paper-qa working locally with the correct configuration.

To continue talking to Dosu, mention @dosu.

dosubot[bot] avatar Sep 18 '24 11:09 dosubot[bot]

I actually think what I posted is just a workaround; the real solution is passing the proper LLM config into the helpers. If that's agreeable, I can prepare a PR to fix the issue.
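
For what it's worth, a rough sketch of what that could look like inside litellm_get_search_query (the llm_config parameter is hypothetical, not current code; it would be forwarded from query.settings.llm_config):

    model = LiteLLMModel(name=llm)
    if llm_config is not None:  # hypothetical parameter carrying the user's LLM config
        model.config = llm_config
    model.config["model_list"][0]["litellm_params"].update({"temperature": temperature})
    result = await model.run_prompt(
        prompt=search_prompt,
        data={"question": question, "count": count},
        skip_system=True,
    )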

nshoman avatar Sep 18 '24 12:09 nshoman

Unfortunately, I don't get any further with your workaround.

from paperqa import Settings, ask

local_llm_config = dict(
    model_list=[dict(
        model_name="my_llm_model",
        litellm_params=dict(
            model="my-llm-model",
            api_base="http://localhost:8080/v1",
            api_key="sk-no-key-required",
            temperature=0.1,
            frequency_penalty=1.5,
            max_tokens=512,
        ),
    )]
)


from paperqa.settings import Settings, AgentSettings, AnswerSettings

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="my-llm-model",
        llm_config=local_llm_config,
        summary_llm="my-llm-model",
        summary_llm_config=local_llm_config,
        paper_directory="~/paper-qa/Documents",
        agent=AgentSettings(agent_llm_config=local_llm_config,
                            agent_llm="my-llm-model",
                            agent_type="ToolSelector"),
        answer=AnswerSettings(evidence_k=3)  # optional
    ),
)

results in


Traceback (most recent call last):
  File "/home/macro/paper-qa/test_local_llm.py", line 22, in <module>
    answer = ask(
             ^^^^
  File "/home/macro/paper-qa/paperqa/agents/__init__.py", line 92, in ask
    return get_loop().run_until_complete(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/macro/miniforge3/envs/paperqa2/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/macro/paper-qa/paperqa/agents/main.py", line 62, in agent_query
    response = await run_agent(docs, query, agent_type, **runner_kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/macro/paper-qa/paperqa/agents/main.py", line 109, in run_agent
    elif tool_selector_or_none := query.settings.make_aviary_tool_selector(agent_type):
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/macro/paper-qa/paperqa/settings.py", line 568, in make_aviary_tool_selector
    acompletion=self.get_agent_llm().router.acompletion,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/macro/paper-qa/paperqa/llms.py", line 421, in router
    self._router = Router(
                   ^^^^^^^
  File "/home/macro/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 346, in __init__
    self.set_model_list(model_list)
  File "/home/macro/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 4015, in set_model_list
    self._create_deployment(
  File "/home/macro/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 3975, in _create_deployment
    deployment = self._add_deployment(deployment=deployment)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/macro/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 4052, in _add_deployment
    ) = litellm.get_llm_provider(
        ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/macro/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 502, in get_llm_provider
    raise e
  File "/home/macro/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 479, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=my-llm-model
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

macromeer avatar Sep 18 '24 13:09 macromeer

@macromeer I think you need to prefix the model name with the provider (based on the endpoint) in your case. So if it's an OpenAI-compatible endpoint, you'd have "openai/my-llm".

The error above is less about paper-qa and more about the LiteLLM configuration. Below is an MWE you can use to debug your setup:

import litellm

response = litellm.completion(
    model='openai/model-name',
    api_base='',  # your local endpoint
    api_key='',   # your key (or a placeholder if the server doesn't need one)
    messages=[
        {
            "role": "user",
            "content": "Hey, how's it going?",
        }
    ],
)
print(response)

If you can get that working, then those settings should also fix the error in paper-qa.

nshoman avatar Sep 18 '24 13:09 nshoman

Thank you for the clarification. AFAIK the local LLM in the README example, https://github.com/Mozilla-Ocho/llamafile, exposes an OpenAI-compatible endpoint, so I executed the following code, which resulted in a NotFoundError (below):

from paperqa import Settings, ask

local_llm_config = dict(
    model_list=[dict(
        model_name="openai/my_llm_model",
        litellm_params=dict(
            model="openai/my-llm-model",
            api_base="http://127.0.0.1:8080/",
            api_key="sk-no-key-required",
            temperature=0.1,
            frequency_penalty=1.5,
            max_tokens=512,
        ),
    )]
)


from paperqa.settings import Settings, AgentSettings, AnswerSettings

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="openai/my-llm-model",  # Use the same model name as in local_llm_config
        llm_config=local_llm_config,
        summary_llm="openai/my-llm-model",  # Update this as well
        summary_llm_config=local_llm_config,
        paper_directory="~/paper-qa/Documents",
        agent=AgentSettings(agent_llm_config=local_llm_config,
                            agent_llm="openai/my-llm-model",  # Update this
                            agent_type="ToolSelector"),
        answer=AnswerSettings(evidence_k=3)
    ),
)

NotFoundError: litellm.NotFoundError: NotFoundError: OpenAIException 
           - File Not Found                                                     
           Received Model Group=openai/my-llm-model                             
           Available Model Group Fallbacks=None  

macromeer avatar Sep 18 '24 16:09 macromeer

Hi - we're working on getting this documentation improved. In the meantime, I believe you'll get better results from a local model without using an agent, and instead just doing a linear workflow:

from paperqa import Settings, ask

local_llm_config = dict(
    model_list=[dict(
        model_name="openai/my_llm_model",
        litellm_params=dict(
            model="openai/my-llm-model",
            api_base="http://127.0.0.1:8080/",
            api_key="sk-no-key-required",
            temperature=0.1,
            frequency_penalty=1.5,
            max_tokens=512,
        ),
    )]
)


from paperqa.settings import Settings, AgentSettings, AnswerSettings

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="openai/my-llm-model",  # Use the same model name as in local_llm_config
        llm_config=local_llm_config,
        summary_llm="openai/my-llm-model",  # Update this as well
        summary_llm_config=local_llm_config,
        paper_directory="~/paper-qa/Documents",
        agent=AgentSettings(agent_type="fake"),
        answer=AnswerSettings(evidence_k=3)
    ),
)

whitead avatar Sep 18 '24 19:09 whitead

> Hi - we're working on getting this documentation improved. In the meantime, I believe you'll get better results from a local model without using an agent, and instead just doing a linear workflow: [...]

The problem is that even with agent_type="fake", paper-qa still tries to spin up an OpenAI-backed model. As I said in the OP, I'm not sure what the "fake" agent is supposed to do, but it still ends up calling OpenAI.

run_fake_agent calls litellm_get_search_query as follows:

    for search in await litellm_get_search_query(
        question, agent_settings=query.settings.agent, count=3
    ):

litellm_get_search_query then attempts to create a LiteLLM model, which defaults to the OpenAI provider:

    model = LiteLLMModel(name=llm)
    model.config["model_list"][0]["litellm_params"].update({"temperature": temperature})
    result = await model.run_prompt(
        prompt=search_prompt,
        data={"question": question, "count": count},
        skip_system=True,
    )

It seems the intent is that the 'fake' agent doesn't spawn an agent, but in actuality it still does.
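
Until the config is threaded through, one possible stopgap (an assumption on my part, relying on LiteLLM's openai provider falling back to environment variables when no api_base/api_key are passed explicitly; verify against your LiteLLM version) is to point the default OpenAI client at the local server:

import os

# Assumed fallback behavior of LiteLLM's openai provider, not verified here.
os.environ["OPENAI_API_BASE"] = "http://localhost:8080/v1"  # local OpenAI-compatible server
os.environ["OPENAI_API_KEY"] = "sk-no-key-required"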

nshoman avatar Sep 18 '24 20:09 nshoman

What @whitead suggested does not work (see the error message below). I have also tried all the models listed in https://docs.litellm.ai/docs/providers/openai and they are either not available or I have insufficient quota, so there is no free OpenAI model available to test paperqa2.

import os

os.environ['OPENAI_API_KEY'] = "<YOUR_OPENAI_API_KEY>"  # enter your key here

MODEL_NAME = "gpt-4"

from paperqa import Settings, ask

local_llm_config = dict(
    model_list=[dict(
        model_name=f"openai/{MODEL_NAME}",
        litellm_params=dict(
            model=f"openai/{MODEL_NAME}",
            api_base="http://127.0.0.1:8080/",
            api_key="sk-no-key-required",
            temperature=0.1,
            frequency_penalty=1.5,
            max_tokens=512,
        ),
    )]
)


from paperqa.settings import Settings, AgentSettings, AnswerSettings

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm=f"openai/{MODEL_NAME}",
        llm_config=local_llm_config,
        summary_llm=f"openai/{MODEL_NAME}",
        summary_llm_config=local_llm_config,
        paper_directory="~/paper-qa/Documents",
        agent=AgentSettings(agent_type="fake"),
        answer=AnswerSettings(evidence_k=3)
    ),
)

print(answer)

which results in:

LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

Traceback (most recent call last):
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 961, in acompletion
    headers, response = await self.make_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 658, in make_openai_chat_completion_request
    raise e
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 646, in make_openai_chat_completion_request
    await openai_aclient.chat.completions.with_raw_response.create(
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/openai/_legacy_response.py", line 367, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1412, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/openai/_base_client.py", line 1816, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/openai/_base_client.py", line 1510, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/openai/_base_client.py", line 1611, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'message': 'The model `my-llm-model` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/main.py", line 425, in acompletion
    response = await init_response
               ^^^^^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/llms/OpenAI/openai.py", line 1008, in acompletion
    raise OpenAIError(
litellm.llms.OpenAI.openai.OpenAIError: Error code: 404 - {'error': {'message': 'The model `my-llm-model` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/marco/paper-qa/test.py", line 27, in <module>
    answer = ask(
             ^^^^
  File "/home/marco/paper-qa/paperqa/agents/__init__.py", line 92, in ask
    return get_loop().run_until_complete(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/marco/paper-qa/paperqa/agents/main.py", line 62, in agent_query
    response = await run_agent(docs, query, agent_type, **runner_kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/paper-qa/paperqa/agents/main.py", line 108, in run_agent
    answer, agent_status = await run_fake_agent(query, docs, **runner_kwargs)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/paper-qa/paperqa/agents/main.py", line 163, in run_fake_agent
    for search in await litellm_get_search_query(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/paper-qa/paperqa/agents/helpers.py", line 56, in litellm_get_search_query
    result = await model.run_prompt(
             ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/paper-qa/paperqa/llms.py", line 182, in run_prompt
    return await self._run_chat(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/paper-qa/paperqa/llms.py", line 235, in _run_chat
    chunk = await self.achat(messages)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/paper-qa/paperqa/llms.py", line 452, in achat
    response = await self.router.acompletion(self.name, messages)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 735, in acompletion
    raise e
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 723, in acompletion
    response = await self.async_function_with_fallbacks(**kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 3039, in async_function_with_fallbacks
    raise original_exception
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 2893, in async_function_with_fallbacks
    response = await self.async_function_with_retries(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 3099, in async_function_with_retries
    self.should_retry_this_error(
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 3210, in should_retry_this_error
    raise error
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 3083, in async_function_with_retries
    response = await original_function(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 874, in _acompletion
    raise e
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/router.py", line 846, in _acompletion
    response = await _response
               ^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/utils.py", line 1595, in wrapper_async
    raise e
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/utils.py", line 1415, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/main.py", line 447, in acompletion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/utils.py", line 8196, in exception_type
    raise e
  File "/home/marco/miniforge3/envs/paperqa2/lib/python3.11/site-packages/litellm/utils.py", line 6438, in exception_type
    raise NotFoundError(
litellm.exceptions.NotFoundError: litellm.NotFoundError: OpenAIException - Error code: 404 - {'error': {'message': 'The model `my-llm-model` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
Received Model Group=openai/my-llm-model
Available Model Group Fallbacks=None

macromeer avatar Sep 19 '24 09:09 macromeer