paper-qa
API issues
from paperqa import Settings, ask
import os

os.environ["OPENAI_API_KEY"] = "EMPTY"

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3",
            "litellm_params": {
                "model": "ollama/llama3",
                "api_base": "http://localhost:11434",  # left blank in the original post; pointed at a local Ollama server here
            },
        }
    ]
}

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="ollama/llama3",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3",
        summary_llm_config=local_llm_config,
    ),
)
When I run the above code, I get the following error:
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
Traceback (most recent call last):
File "D:\Study\论文\检索增强\paper-qa-main\1.py", line 16, in
I found a similar issue that might help resolve the error you're encountering. The error suggests that the ask function is running into trouble with asynchronous operations. To resolve this, use the agent_query function instead of ask: ask is a synchronous wrapper, and for asynchronous operations you need to call agent_query directly.
Here is an example of how to use agent_query for asynchronous operations:
from paperqa import Settings, agent_query, QueryRequest
import asyncio


async def main():
    local_llm_config = {
        "model_list": [
            {
                "model_name": "ollama/llama3",
                "litellm_params": {
                    "model": "ollama/llama3",
                    "api_base": "http://localhost:11434",
                },
            }
        ]
    }

    answer = await agent_query(
        QueryRequest(
            query="What manufacturing challenges are unique to bispecific antibodies?",
            settings=Settings(
                llm="ollama/llama3",
                llm_config=local_llm_config,
                summary_llm="ollama/llama3",
                summary_llm_config=local_llm_config,
            ),
        )
    )
    print(answer.formatted_answer)


# Run the async function
asyncio.run(main())
This code snippet configures agent_query with the appropriate ollama/llama3 model and litellm settings and runs it asynchronously. The ask function internally uses get_loop().run_until_complete to run agent_query synchronously, which can cause issues in an already running event loop, so calling agent_query directly from an asynchronous context is the correct approach [1][2].
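For anyone unfamiliar with the underlying event-loop issue, here is a minimal sketch using only plain asyncio (no paper-qa imports; do_work and sync_wrapper are made-up names for illustration, not paper-qa internals). It shows why a synchronous wrapper that blocks on a coroutine via run_until_complete fails when called from code that is already inside a running event loop (e.g. a notebook), while awaiting the coroutine directly works fine:

import asyncio


async def do_work() -> str:
    await asyncio.sleep(0)  # stand-in for the real async query
    return "done"


def sync_wrapper() -> str:
    # Mirrors what a synchronous convenience wrapper does internally:
    # grab an event loop and block on the coroutine until it finishes.
    coro = do_work()
    try:
        return asyncio.get_event_loop().run_until_complete(coro)
    except RuntimeError:
        coro.close()  # avoid a "never awaited" warning in this demo
        raise


async def caller_inside_running_loop() -> None:
    # Inside a running loop, the sync wrapper fails ...
    try:
        sync_wrapper()
    except RuntimeError as exc:
        print(f"sync wrapper failed: {exc}")
    # ... while awaiting the coroutine directly is fine.
    print(await do_work())


asyncio.run(caller_inside_running_loop())

Running this prints "sync wrapper failed: This event loop is already running" followed by "done", which is the same pattern behind preferring agent_query over ask in async code.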
Based on your error, it looks like you hit CTRL-C (per the KeyboardInterrupt) during the run.
import os

os.environ["OPENAI_API_KEY"] = "EMPTY"
Hello, may I ask if this is meant to resolve the OpenAI error?
Hello @FahaxikiHoney, could you solve your issue?
I will close the issue due to its inactivity. However, you can reopen it if you need to follow up.