paper-qa
LLM
```python
from paperqa import Settings, ask
import os

os.environ["OPENAI_API_KEY"] = "EMPTY"

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3",
            "litellm_params": {
                "model": "ollama/llama3",
                "api_base": ""
            }
        }
    ]
}

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="ollama/llama3",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3",
        summary_llm_config=local_llm_config,
    ),
)
```
I want to switch from the GPT model to llama3. My code is shown above, but an error occurred when I ran it. The error is as follows:
```text
    raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host localhost:11434 ssl:default [The remote computer refused the network connection.]
Received Model Group=ollama/llama3
Available Model Group Fallbacks=None LiteLLM Retried: 2 times, LiteLLM Max Retries: 3
```
I found a similar unsolved discussion that might be relevant to your issue: Local LLM issue [1].
The error you're encountering, `ClientConnectorError: Cannot connect to host localhost:11434`, suggests that the application is trying to connect to a local server that is either not running or not accessible. Here are some steps to troubleshoot and resolve this issue:
- **Check if the Local Server is Running**: Ensure that the server hosting the `ollama/llama3` model is running and accessible at `localhost:11434`. You can verify this by trying to access the server directly via a web browser or using a tool like `curl` (see the connectivity-check sketch after this list).
- **Verify API Base URL**: The `api_base` parameter in your configuration is currently set to an empty string. You need to set it to the correct URL of the server hosting the `ollama/llama3` model. For example: `"api_base": "http://localhost:11434/v1"`
- **Network Configuration**: Ensure that there are no firewall rules or network configurations blocking the connection to `localhost:11434`.
- **SSL Configuration**: If the server requires SSL, ensure that the SSL certificates are correctly configured. If SSL is not required, you can disable it by setting the appropriate parameters in your configuration.
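To make the first check concrete, here is a minimal sketch that probes the server using only the Python standard library. It assumes the default Ollama address `http://localhost:11434` taken from the traceback above; adjust the URL if your server runs elsewhere:

```python
import urllib.error
import urllib.request

# Assumed default Ollama address, taken from the traceback above.
OLLAMA_BASE = "http://localhost:11434"

try:
    # Any HTTP response at all means something is listening on the port.
    with urllib.request.urlopen(OLLAMA_BASE, timeout=5) as resp:
        print(f"Server reachable at {OLLAMA_BASE}: HTTP {resp.status}")
except urllib.error.URLError as exc:
    # Same failure mode as the ClientConnectorError in your traceback:
    # nothing is accepting connections on localhost:11434.
    print(f"Cannot reach {OLLAMA_BASE}: {exc.reason}")
```

If this reports a connection failure, start the Ollama server (for example with `ollama serve`) and make sure the model is available (`ollama pull llama3`) before running paper-qa again.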
Here is an updated version of your code with the `api_base` parameter set:
```python
from paperqa import Settings, ask
import os

os.environ["OPENAI_API_KEY"] = "EMPTY"

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3",
            "litellm_params": {
                "model": "ollama/llama3",
                "api_base": "http://localhost:11434/v1"
            }
        }
    ]
}

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="ollama/llama3",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3",
        summary_llm_config=local_llm_config,
    ),
)
```
If the issue persists, you might want to check the server logs for any errors or additional information that could help diagnose the problem.
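To isolate whether the problem lies in paper-qa or in the LiteLLM-to-Ollama connection, you can also make a single call through LiteLLM directly. The following is a rough sketch that assumes the `litellm` package (a paper-qa dependency) is importable and that your Ollama server answers at `http://localhost:11434`; set `api_base` to whatever value you use in `local_llm_config`:

```python
import litellm

# Assumed base URL; change it to match the api_base in your paper-qa config.
OLLAMA_API_BASE = "http://localhost:11434"

# One chat completion against the local Ollama server. If this raises a
# connection error, the problem is between LiteLLM and Ollama rather than
# anything specific to paper-qa.
response = litellm.completion(
    model="ollama/llama3",
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
    api_base=OLLAMA_API_BASE,
)
print(response.choices[0].message.content)
```

If this call succeeds, point the `api_base` in your paper-qa configuration at the same server.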
To continue talking to Dosu, mention @dosu.
Hello @FahaxikiHoney ,
Were you able to resolve your issue with Dosu support? It looks like your Ollama server isn't running.
Since some time has passed, I’ll go ahead and close this issue. If you need further assistance with Ollama, feel free to join this discussion.