llm-graph-builder
AttributeError when using locally deployed Qwen3:14b instance
I'm trying to use a locally deployed qwen3:14b instance for processing. Files can be uploaded just fine; however, when I try to generate a graph, I get the following stack trace:
```
2025-05-19 13:50:52,541 - File Failed in extraction: 'str' object has no attribute 'get'
Traceback (most recent call last):
  File "/code/score.py", line 244, in extract_knowledge_graph_from_file
    uri_latency, result = await extract_graph_from_file_local_file(uri, userName, password, database, model, merged_file_path, file_name, allowedNodes, allowedRelationship, token_chunk_size, chunk_overlap, chunks_to_combine, retry_condition, additional_instructions)
  File "/code/src/main.py", line 242, in extract_graph_from_file_local_file
    return await processing_source(uri, userName, password, database, model, file_name, pages, allowedNodes, allowedRelationship, token_chunk_size, chunk_overlap, chunks_to_combine, True, merged_file_path, additional_instructions=additional_instructions)
  File "/code/src/main.py", line 389, in processing_source
    node_count,rel_count,latency_processed_chunk = await processing_chunks(selected_chunks,graph,uri, userName, password, database,file_name,model,allowedNodes,allowedRelationship,chunks_to_combine,node_count, rel_count, additional_instructions)
  File "/code/src/main.py", line 484, in processing_chunks
    graph_documents = await get_graph_from_llm(model, chunkId_chunkDoc_list, allowedNodes, allowedRelationship, chunks_to_combine, additional_instructions)
  File "/code/src/llm.py", line 213, in get_graph_from_llm
    graph_document_list = await get_graph_document_list(
  File "/code/src/llm.py", line 198, in get_graph_document_list
    graph_document_list = await llm_transformer.aconvert_to_graph_documents(combined_chunk_document_list)
  File "/usr/local/lib/python3.10/site-packages/langchain_experimental/graph_transformers/llm.py", line 1031, in aconvert_to_graph_documents
    results = await asyncio.gather(*tasks)
  File "/usr/local/lib/python3.10/asyncio/tasks.py", line 304, in __wakeup
    future.result()
  File "/usr/local/lib/python3.10/asyncio/tasks.py", line 232, in __step
    result = coro.send(None)
  File "/usr/local/lib/python3.10/site-packages/langchain_experimental/graph_transformers/llm.py", line 957, in aprocess_response
    not rel.get("head")
AttributeError: 'str' object has no attribute 'get'
```
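The failure is in `aprocess_response`, which expects each parsed relationship to be a dict but received a plain string. A minimal illustration of the failure mode (constructed for this report; the actual string content is an assumption):

```python
# aprocess_response calls rel.get("head") on every parsed relationship.
# If the model's output parses to a bare string instead of a dict, the
# call raises exactly the error above.
rel = "<think>...</think>"  # assumed shape of qwen3's output; illustrative only
not rel.get("head")         # AttributeError: 'str' object has no attribute 'get'
```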
My LLM_MODEL_CONFIG in backend.env looks like this (scrubbed of the API endpoint):

```
LLM_MODEL_CONFIG_ollama_llama3_qwen2_5_14b="qwen2.5:14b-instruct-q4_K_M,<api_endpoint>"
LLM_MODEL_CONFIG_ollama_llama3_qwen3_14b="qwen3:14b,<api_endpoint>"
```
And it looks like this in docker-compose.yml:

```yaml
- LLM_MODEL_CONFIG_ollama_llama3_qwen2_5_14b=${LLM_MODEL_CONFIG_ollama_llama3_qwen2_5_14b-qwen2.5:14b-instruct-q4_K_M,<api_endpoint>}
- LLM_MODEL_CONFIG_ollama_llama3_qwen3_14b=${LLM_MODEL_CONFIG_ollama_llama3_qwen3_14b-qwen3:14b,<api_endpoint>}
```
I also have qwen2.5:14b running, and I can generate a graph with it without issues. I verified that the qwen3 endpoint works by running the command below and getting a valid response, so I do not believe the endpoint is the issue. I am also using version 0.8 of the llm-graph-builder.
```
curl -X POST <api_endpoint> ^
  -H "Content-Type: application/json" ^
  -d "{ \"model\": \"qwen3:14b\", \"prompt\": \"Explain how photosynthesis works.\", \"stream\": false }"
```
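The same check from Python, for reference (a hedged sketch assuming `<api_endpoint>` is Ollama's generate endpoint, matching the fields in the curl call above):

```python
import requests

# Mirrors the curl verification; <api_endpoint> stays scrubbed.
resp = requests.post(
    "<api_endpoint>",
    json={"model": "qwen3:14b", "prompt": "Explain how photosynthesis works.", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```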
Hi @RealPeterGriffin
Can you try with our latest version, 0.8.2?
We have fixed some bugs where we keep the `ignore_tool_usage` parameter as `false` for some models (Qwen and DeepSeek as of now).
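For context, `ignore_tool_usage` is a constructor flag on langchain_experimental's `LLMGraphTransformer`; a hedged sketch of how it would be wired up (illustrative only, not the project's exact code in `llm.py`):

```python
from langchain_ollama import ChatOllama
from langchain_experimental.graph_transformers import LLMGraphTransformer

# Illustrative wiring; llm-graph-builder constructs these objects internally.
llm = ChatOllama(model="qwen3:14b", base_url="<api_endpoint>")
transformer = LLMGraphTransformer(
    llm=llm,
    ignore_tool_usage=False,  # per this thread, 0.8.2 keeps this False for Qwen/DeepSeek
)
```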
Let us know if it works.
Hi @kaustubh-darekar
I have upgraded the graph builder to 0.8.2 and tried again with qwen3, and unfortunately got the same stack trace.
I saw on line 176 of the attached log file that the `ignore_tool_usage` parameter was set to `false`, so it must be something else.
Hi @RealPeterGriffin, Qwen3 is a reasoning model that produces a "thinking" block, and that block interferes with the structured output response for LLMGraphTransformer.
Currently this PR on ChatOllama is a work in progress. We will keep an eye on it for a solution.
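Until that lands, this is the kind of post-processing involved; a minimal sketch assuming Qwen3 wraps its reasoning in `<think>` tags (a hypothetical helper, not the PR's actual code):

```python
import re

def strip_think(text: str) -> str:
    """Drop a leading <think>...</think> block so the remainder parses as JSON."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

print(strip_think('<think>reasoning...</think>{"head": "A"}'))  # -> {"head": "A"}
```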
Hi @kaustubh-darekar
Thanks for getting back to me, please let me know if there are any updates.
Hi @kaustubh-darekar
Wanted to give you an update on this. I managed to get qwen3 working by serving the model through vLLM and changing the code in llm.py and the LLM_MODEL_CONFIG to accommodate that instead of ChatOllama. Thinking is still present, but I no longer get errors when processing files.
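For anyone hitting the same problem, a hedged sketch of what that swap could look like (the model name, port, and wiring are assumptions, not the poster's exact diff):

```python
from langchain_openai import ChatOpenAI

# vLLM exposes an OpenAI-compatible API, so ChatOpenAI can replace ChatOllama.
# All values below are illustrative assumptions.
llm = ChatOpenAI(
    model="Qwen/Qwen3-14B",               # model name as served by vLLM (assumption)
    base_url="http://localhost:8000/v1",  # vLLM's default OpenAI-compatible URL
    api_key="EMPTY",                      # vLLM does not require a real key by default
    temperature=0,
)
```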