
Getting errors when trying to run uv run main.py

Open RYG81 opened this issue 8 months ago • 9 comments

Traceback (most recent call last):
  File "E:\0000AITools\deer-flow\main.py", line 14, in <module>
    from src.workflow import run_agent_workflow_async
  File "E:\0000AITools\deer-flow\src\workflow.py", line 6, in <module>
    from src.graph import build_graph
  File "E:\0000AITools\deer-flow\src\graph\__init__.py", line 4, in <module>
    from .builder import build_graph_with_memory, build_graph
  File "E:\0000AITools\deer-flow\src\graph\builder.py", line 8, in <module>
    from .nodes import (
  File "E:\0000AITools\deer-flow\src\graph\nodes.py", line 14, in <module>
    from src.agents.agents import coder_agent, research_agent, create_agent
  File "E:\0000AITools\deer-flow\src\agents\__init__.py", line 4, in <module>
    from .agents import research_agent, coder_agent
  File "E:\0000AITools\deer-flow\src\agents\agents.py", line 13, in <module>
    from src.llms.llm import get_llm_by_type
  File "E:\0000AITools\deer-flow\src\llms\llm.py", line 48, in <module>
    basic_llm = get_llm_by_type("basic")
  File "E:\0000AITools\deer-flow\src\llms\llm.py", line 42, in get_llm_by_type
    llm = _create_llm_use_conf(llm_type, conf)
  File "E:\0000AITools\deer-flow\src\llms\llm.py", line 27, in _create_llm_use_conf
    return ChatOpenAI(**llm_conf)
  File "E:\0000AITools\deer-flow\.venv\Lib\site-packages\langchain_core\load\serializable.py", line 125, in __init__
    super().__init__(*args, **kwargs)
  File "E:\0000AITools\deer-flow\.venv\Lib\site-packages\pydantic\main.py", line 214, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
  File "E:\0000AITools\deer-flow\.venv\Lib\site-packages\langchain_openai\chat_models\base.py", line 600, in validate_environment
    self.root_client = openai.OpenAI(**client_params, **sync_specific)  # type: ignore[arg-type]
  File "E:\0000AITools\deer-flow\.venv\Lib\site-packages\openai\_client.py", line 114, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
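
For context on where this fails: the last frames show src/llms/llm.py passing the BASIC_MODEL entries from conf.yaml straight into ChatOpenAI, and langchain_openai refuses to build the client without a key. Below is a minimal sketch that reproduces the same error outside deer-flow; the model name and URL are placeholders, not deer-flow defaults.

    # Assumes langchain-openai is installed and OPENAI_API_KEY is NOT set in the environment.
    from langchain_openai import ChatOpenAI

    llm_conf = {
        "model": "some-model",                 # placeholder
        "base_url": "http://localhost:11434",  # e.g. a local Ollama address
        # no "api_key" entry -> openai.OpenAIError at construction time
    }
    ChatOpenAI(**llm_conf)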

RYG81 · May 13 '25 09:05

Have you edited conf.yaml to configure the model with an api_key?

Albertyao1993 · May 13 '25 09:05

I am trying it with Ollama.

BASIC_MODEL:
  model: "huihui_ai/qwen3-abliterated:4b"
  base_url: "http://localhost:11434"  # Local service address of Ollama, which can be started/viewed via ollama serve

This is my updated conf.yaml.
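
A hedged note on that config: with the unmodified code, deer-flow will still construct a ChatOpenAI client from this entry, which insists on some api_key value, and Ollama's OpenAI-compatible API is served under /v1 rather than at the root URL. A sketch of what the entry might need to look like if you keep the default ChatOpenAI path (the /v1 suffix and the dummy key are assumptions to check against Ollama's OpenAI-compatibility docs):

BASIC_MODEL:
  base_url: "http://localhost:11434/v1"  # assumed: Ollama's OpenAI-compatible endpoint
  model: "huihui_ai/qwen3-abliterated:4b"
  api_key: "ollama"  # any non-empty placeholder; a local Ollama server does not check it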

RYG81 · May 13 '25 12:05

Are my settings right?

RYG81 · May 14 '25 08:05

From what I have observed, the error is caused by a missing api_key. You can either pass the api_key or set OPENAI_API_KEY as an environment variable.
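
If you go the environment-variable route on Windows (the drive paths in the traceback suggest Windows), it could look like the sketch below. The key value is a placeholder; with a local Ollama backend any non-empty string is typically enough, since Ollama does not validate it.

    # PowerShell (current session only)
    $env:OPENAI_API_KEY = "sk-placeholder"

    # cmd.exe (current session only)
    set OPENAI_API_KEY=sk-placeholder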

glavena · May 14 '25 10:05

Make sure you have the correct config files:

https://github.com/bytedance/deer-flow/blob/main/docs/configuration_guide.md

leohxj · May 17 '25 00:05

Not sure if this is correct, but since you say you're using Ollama: I had to replace this import: https://github.com/bytedance/deer-flow/blob/3ed70e11d512718298a307c4c96673071a9dd1ae/src/llms/llm.py#L7

with from langchain_ollama import ChatOllama, and replace ChatOpenAI where needed.

0xlws2 · May 22 '25 14:05

To follow up on @0xlws2's comment:

I did the following to get it to work on my local ollama server:

# install the "langchain_ollama" package, as it's not already included
uv add langchain_ollama

In the conf.yaml file, point the config at the Ollama server. Your details will vary:

BASIC_MODEL:
  base_url: http://192.168.2.94:11434
  model: "llama3.3:70b"
  api_key: xxxx

# It complained when I tried 'ollama/llama3.3:70b', so I changed
# it to just 'llama3.3:70b' and it worked

In llm.py I added:

from langchain_ollama import ChatOllama

Right after: https://github.com/bytedance/deer-flow/blob/3ed70e11d512718298a307c4c96673071a9dd1ae/src/llms/llm.py#L7

And then I commented out the references to ChatOpenAI and replaced them with ChatOllama versions.

Example:

    # return ChatOpenAI(**llm_conf)
    return ChatOllama(**llm_conf)

There were three instances of ChatOpenAI that I replaced with ChatOllama.

I suspect this is a hacky approach. The documentation indicates that it should be possible to just point the conf.yaml file at a local Ollama server, but I didn't have any luck with the defaults.
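
For anyone following along, here is a minimal, hypothetical sketch of the swap described above. It is not the actual deer-flow code; the real _create_llm_use_conf builds llm_conf from conf.yaml, while here the values are hard-coded just to show the changed call.

    from langchain_ollama import ChatOllama  # added import
    # from langchain_openai import ChatOpenAI  # original import, no longer needed

    def create_basic_llm_sketch():
        # llm_conf normally comes from the BASIC_MODEL section of conf.yaml
        llm_conf = {
            "model": "llama3.3:70b",                  # example tag from the config above
            "base_url": "http://192.168.2.94:11434",  # example Ollama address from the config above
        }
        # return ChatOpenAI(**llm_conf)  # original call in src/llms/llm.py
        return ChatOllama(**llm_conf)    # Ollama-backed replacement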

filmo · May 24 '25 01:05

@filmo Hi, I have pulled llama2 locally and made the changes that you suggested above. I checked whether I can call the model directly, and it works with the following code:

    import requests

    def ollama_chat(messages, model="llama2", base_url="http://localhost:11434"):
        url = f"{base_url}/api/chat"
        payload = {
            "model": model,
            "messages": messages,
            "stream": False,
        }
        response = requests.post(url, json=payload)
        response.raise_for_status()
        return response.json()

    # Usage:
    messages = [{"role": "user", "content": "Hello!"}]
    result = ollama_chat(messages)

The conf.yaml file looks like the following on my side:

BASIC_MODEL:
  base_url: "http://localhost:11434"
  model: "llama2"
  api_key: "ollama_chat"  # some wrote that it should be filled in even though it's not relevant

But I still get an error when trying to access the model:

deer-flow\src\graph\nodes.py", line 219, in coordinator_node .invoke(messages) raise ResponseError(e.response.text, e.response.status_code) from None ollama._types.ResponseError: 404 page not found (status code: 404) During task with name 'coordinator' and id xxx I looked at the guide here, and tried to change the config in different ways but no luck. Any suggestion?

AffDk · Jun 05 '25 04:06

@AffDk, the current deer-flow uses LangChain to call the LLM. I don't think the LiteLLM documentation will work in your case.

WillemJiang · Jul 13 '25 06:07