
Error getting embeddings: litellm.APIConnectionError: 'Logging' object has no attribute 'debug'

hic090 opened this issue on Sep 30, 2025

What happened?

An error (see the log output below) occurs during R2R's knowledge-graph extraction. R2R uses the litellm SDK, and in addition I am running a litellm proxy, so this bug report goes to both repos. I'll include excerpts from two config files below.
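The failure should be reproducible outside R2R with a direct litellm embedding call. A minimal sketch, assuming the Ollama endpoint and embedding model from my config below (both are specific to my setup):

import asyncio

import litellm

async def main():
    # Expected to raise litellm.APIConnectionError wrapping the
    # AttributeError shown in the log output below, since this call
    # goes through litellm's Ollama embedding handler.
    response = await litellm.aembedding(
        model="ollama/jeffh/intfloat-multilingual-e5-large-instruct:f16",
        input=["test sentence"],
        api_base="http://192.168.178.12:11444",
    )
    print(response.data[0]["embedding"][:5])

asyncio.run(main())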

Relevant log output


Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.

2025-09-27 13:34:13 - ERROR - Error getting embeddings: litellm.APIConnectionError: 'Logging' object has no attribute 'debug'
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/litellm/main.py", line 3635, in aembedding
    response = await init_response  # type: ignore
               ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/litellm/llms/ollama/completion/handler.py", line 90, in ollama_aembeddings
    return _process_ollama_embedding_response(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/litellm/llms/ollama/completion/handler.py", line 50, in _process_ollama_embedding_response
    logging_obj.debug(
    ^^^^^^^^^^^^^^^^^
AttributeError: 'Logging' object has no attribute 'debug'
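From the traceback, _process_ollama_embedding_response() calls logging_obj.debug(), but litellm's Logging class does not define a debug method, so every Ollama embedding response fails at that point. As a stopgap until this is fixed upstream, a no-op method can be patched in at startup; a sketch, assuming the Logging class still lives at this import path in the installed litellm version:

# Run once at startup, before any embedding calls.
from litellm.litellm_core_utils.litellm_logging import Logging

if not hasattr(Logging, "debug"):
    # The handler only uses debug() for diagnostic output,
    # so a no-op stand-in is harmless.
    Logging.debug = lambda self, *args, **kwargs: None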

My_config.toml

[app]
fast_llm = "litellm_proxy/openrouter/openrouter-mistral-7b-instruct-free"
quality_llm = "ollama/gemma3-12b-16k:latest"
vlm = "litellm_proxy/ollama-llama3.2-vision"
vlm = "ollama/llama3.2-vision-16k:latest"
audio_lm = "litellm_proxy/whisper-1"

reasoning_llm = "openai/openrouter/google/gemini-2.5-flash"
planning_llm = "openai/openrouter/google/gemini-2.5-flash"

[embedding]
provider = "litellm"
base_model = "ollama/jeffh/intfloat-multilingual-e5-large-instruct:f16"
base_dimension = nan
#base_dimension = 1024
batch_size = 48
concurrent_request_limit = 8
max_retries = 2
initial_backoff = 0.5
max_backoff = 32.0

[completion_embedding]
provider = "litellm"
base_model = "ollama/jeffh/intfloat-multilingual-e5-large-instruct:f16"
#base_dimension = 1024
base_dimension = nan
batch_size = 48
concurrent_request_limit = 8

[agent]
tools = ["search_file_knowledge"]

[completion]
provider = "litellm"
concurrent_request_limit = 6

[completion.generation_config]
temperature = 0.0
top_p = 1
max_tokens_to_sample = 3072
stream = false
add_generation_kwargs = { stop = [""], num_ctx = 3072 }

[ingestion]
provider = "unstructured_local"
strategy = "auto"
chunking_strategy = "by_title"
new_after_n_chars = 1000
max_characters = 1000
combine_under_n_chars = 256
overlap = 200
chunks_for_document_summary = 64
#document_summary_model = "litellm_proxy/ollama-mistral-7b-instruct"
document_summary_model = "ollama/gemma3-12b-16k:latest"
automatic_extraction = false

[orchestration]
provider = "hatchet"
kg_creation_concurrency_limit = 32
ingestion_concurrency_limit = 8
kg_concurrency_limit = 8

[retrieval.weights]
keyword = 0.3
vector = 0.5
graph = 0.2

[retrieval.limits]
keyword = 20
vector = 50
graph = 30

rerank_model = "litellm_proxy/bge-reranker-large"
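One observation while isolating this: only models addressed as ollama/... hit the handler that raises. If I understand litellm's routing correctly, a litellm_proxy/... model goes through the OpenAI-compatible client instead, so pointing the embedding model at the proxy may sidestep the bug. A sketch; the alias multilingual-e5-large is hypothetical and would have to be registered on the proxy:

import litellm

# Same model, but routed via the litellm proxy (OpenAI-compatible path)
# instead of litellm's Ollama handler.
response = litellm.embedding(
    model="litellm_proxy/multilingual-e5-large",  # hypothetical proxy alias
    input=["test sentence"],
    api_base="http://192.168.178.12:4004",
    api_key="[REDACTED]",
)
print(len(response.data[0]["embedding"]))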

---

My r2r.full.toml:

R2R_PORT=7272
R2R_HOST=0.0.0.0
R2R_LOG_LEVEL=INFO
#R2R_CONFIG_NAME=my_config

#R2R_CONFIG_NAME=full_ollama

R2R_CONFIG_PATH=/app/user_configs/my_config.toml
R2R_PROJECT_NAME=r2r_default
R2R_SECRET_KEY=[REDACTED]
R2R_USER_TOOLS_PATH=/app/user_tools
R2R_LOG_FORMAT= …

OpenAI

OPENAI_API_KEY=[REDACTED]
OPENAI_API_BASE=http://192.168.178.12:4004

LiteLLM Proxy

LITELLM_PROXY_API_BASE=http://192.168.178.12:4004
LITELLM_PROXY_API_KEY=[REDACTED]

Ollama

OLLAMA_API_BASE=http://192.168.178.12:11444
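To rule out the Ollama server itself, its embeddings endpoint can be queried directly, bypassing litellm entirely; a sketch against Ollama's /api/embeddings API using the address above:

import requests

# Direct request to Ollama, no litellm involved.
resp = requests.post(
    "http://192.168.178.12:11444/api/embeddings",
    json={
        "model": "jeffh/intfloat-multilingual-e5-large-instruct:f16",
        "prompt": "test sentence",
    },
    timeout=30,
)
resp.raise_for_status()
print(len(resp.json()["embedding"]))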
