
Add Ollama local models

Open trasgoverde opened this issue 1 year ago • 15 comments

I have been trying to connect my Ollama model llama3.1 to the system without results :skull:; only OpenAI and Diffbot are working :100:. Can you give me instructions on how to deal with this configuration?

Config in my .env:

# Optional Backend
EMBEDDING_MODEL = "llama3.1:8b"
IS_EMBEDDING = "true"
KNN_MIN_SCORE = "0.94"

# Enable Gemini (default is False) | Can be False or True
GEMINI_ENABLED = False
LLM_MODEL_CONFIG_ollama_llama3="llama3.1:8b,http://localhost:11434"

Thanks in advance!!!

trasgoverde avatar Sep 05 '24 14:09 trasgoverde

@aashipandya

kartikpersistent avatar Sep 06 '24 03:09 kartikpersistent

Hi @trasgoverde, the configuration you have mentioned seems to be correct. What is the exact issue? Are you not getting any nodes and relationships from the llama 3.1 model, or is there an error in model initialization?

aashipandya avatar Sep 06 '24 12:09 aashipandya

(Screenshot: Dipassio graph builder, 2024-09-12)

Hi @trasgoverde, the configuration you have mentioned seems to be correct. What is the exact issue? Are you not getting any nodes and relationships from the llama 3.1 model, or is there an error in model initialization?

Hi @aashipandya, I am not getting any nodes or relationships from Llama3.1; the model is initialized and the endpoint/port is correct and working.

Dockerized graph-llm-builder + Ollama (Llama3.1):

$docker ps -a

fab8bf607447   ollama/ollama                        "/bin/ollama serve"      About a minute ago   Up About a minute          0.0.0.0:11434->11434/tcp, :::11434->11434/tcp       ollama
48f25f7c7a43   graph-builder-main-frontend          "/docker-entrypoint.…"   6 days ago           Up 5 minutes               80/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   frontend
586115aeab02   graph-builder-main-backend           "gunicorn score:app …"   6 days ago           Up 6 minutes               0.0.0.0:8000->8000/tcp, :::8000->8000/tcp           backend

It only works with the Diffbot and OpenAI models; it is still not working on my machine with the local Ollama model.

Found an error in the logs (I think it is something related to model naming):

  File "/code/src/llm.py", line 91, in get_llm
    model_name, base_url = env_value.split(",")
AttributeError: 'NoneType' object has no attribute 'split'
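For reference, the failure can be reproduced in isolation (a minimal sketch; the variable name below is illustrative, and the real backend builds the key from the model name selected in the UI):

import os

# os.environ.get returns None when the variable is unset, so the subsequent
# .split(",") raises exactly the AttributeError shown in the log above
env_value = os.environ.get("LLM_MODEL_CONFIG_ollama_llama3")
if env_value is None:
    raise RuntimeError("LLM_MODEL_CONFIG_ollama_llama3 is not set in the backend environment")
model_name, base_url = env_value.split(",")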

Thanks in advance!

trasgoverde avatar Sep 12 '24 10:09 trasgoverde

Hi @trasgoverde, were you able to solve the issue? I'm also facing the same issue with the Ollama local server. Documents and Chunks are getting created, but not entities (nodes and relationships). The status shows Failed, but Documents and Chunks are getting created.

naveen-420 avatar Sep 19 '24 04:09 naveen-420

Hi @trasgoverde, were you able to solve the issue? I'm also facing the same issue with the Ollama local server. Documents and Chunks are getting created, but not entities (nodes and relationships). The status shows Failed, but Documents and Chunks are getting created.

Hello @naveen-420, I still have the same problem as you. It seems to be something about naming, but I wasn't able to fix it. Maybe @aashipandya can help with the issue.

Best,

trasgoverde avatar Sep 20 '24 11:09 trasgoverde

Llama 3.0 versions might give some nodes and relationships, but 3.1 versions are not extracting a graph currently. We are working on it and will update here when it is available in dev.

aashipandya avatar Sep 20 '24 11:09 aashipandya

Hi @trasgoverde, were you able to solve the issue? I'm also facing the same issue with the Ollama local server. Documents and Chunks are getting created, but not entities (nodes and relationships). The status shows Failed, but Documents and Chunks are getting created.

Hello @naveen-420, I still have the same problem as you. It seems to be something about naming, but I wasn't able to fix it. Maybe @aashipandya can help with the issue.

Best,

I think this error is due to the configuration entry in backend --> .env

zhanglt avatar Sep 23 '24 05:09 zhanglt

Hi @trasgoverde, were you able to solve the issue? I'm also facing the same issue with the Ollama local server. Documents and Chunks are getting created, but not entities (nodes and relationships). The status shows Failed, but Documents and Chunks are getting created.

Hello @naveen-420, I still have the same problem as you. It seems to be something about naming, but I wasn't able to fix it. Maybe @aashipandya can help with the issue. Best,

I think this error is due to the configuration entry in backend --> .env

Hi @zhanglt,

Did it work after your .env change? Are you getting any nodes and relationships from the locally installed llama 3.1? It is not working for me after the changes... :skull: :skull:

Best,

trasgoverde avatar Sep 25 '24 08:09 trasgoverde

Yes, I also think it's a config issue; please follow the config example in the docs.

jexp avatar Sep 26 '24 00:09 jexp

Hi @trasgoverde, were you able to solve the issue? I'm also facing the same issue with the Ollama local server. Documents and Chunks are getting created, but not entities (nodes and relationships). The status shows Failed, but Documents and Chunks are getting created.

Hello @naveen-420, I still have the same problem as you. It seems to be something about naming, but I wasn't able to fix it. Maybe @aashipandya can help with the issue. Best,

I think this error is due to the configuration entry in backend --> .env

Hi @zhanglt, if you are able to generate entities (nodes and relationships) with the config change and ollama 3.1 running locally, can you please share your complete config and the ollama 3.1 model version you are using? As @trasgoverde mentioned, I'm also not able to generate any nodes and relationships with the config change. Thanks

naveen-420 avatar Sep 26 '24 11:09 naveen-420

Followed the example in the docs. Error: Failed To Process File:crewAI or LLM Unable To Parse Content 'NoneType' object has no attribute 'split'

trasgoverde avatar Sep 26 '24 17:09 trasgoverde

Followed the example in the docs. Error: Failed To Process File:crewAI or LLM Unable To Parse Content 'NoneType' object has no attribute 'split'

def get_llm(model: str):
    """Retrieve the specified language model based on the model name."""
    env_key = "LLM_MODEL_CONFIG_" + model
    env_value = os.environ.get(env_key)
    # ... branches for other providers elided ...
    elif "ollama" in model:
        model_name, base_url = env_value.split(",")
        llm = ChatOllama(base_url=base_url, model=model_name)

This error occurs because env_key is misconfigured, causing env_value to be None; see the code above.

backend->.env

LLM_MODEL_CONFIG_ollama_llama3="llama3.1, http://localhost:11434"

should be revised to

LLM_MODEL_CONFIG_ollama="llama3.1, http://localhost:11434"

zhanglt avatar Sep 27 '24 02:09 zhanglt

Hi @trasgoverde, were you able to solve the issue? I'm also facing the same issue with the Ollama local server. Documents and Chunks are getting created, but not entities (nodes and relationships). The status shows Failed, but Documents and Chunks are getting created.

Hello @naveen-420, I still have the same problem as you. It seems to be something about naming, but I wasn't able to fix it. Maybe @aashipandya can help with the issue. Best,

I think this error is due to the configuration entry in backend --> .env

Hi @zhanglt, if you are able to generate entities (nodes and relationships) with the config change and ollama 3.1 running locally, can you please share your complete config and the ollama 3.1 model version you are using? As @trasgoverde mentioned, I'm also not able to generate any nodes and relationships with the config change. Thanks

@naveen-420 Yes, it all looks normal; see backend.log and ollama.log.

zhanglt avatar Sep 27 '24 03:09 zhanglt

@zhanglt the config would be LLM_MODEL_CONFIG_ollama_llama3="llama3,http://localhost:11434/v1"

aashipandya avatar Sep 27 '24 08:09 aashipandya

Hey guys @zhanglt @aashipandya, it's not easy to follow here because the last comments give opposite suggestions.

Like the others, I am also experiencing the Ollama issue (llama 3.1). The changes I made in my attempts:

I tried the change in the .env file (root folder)

LLM_MODEL_CONFIG_ollama="llama3.1,http://192.168.178.76:11434"

instead of LLM_MODEL_CONFIG_ollama_llama3. I also changed the dropdown to VITE_LLM_MODELS="diffbot,openai-gpt-3.5,openai-gpt-4o,ollama" as seen in @zhanglt's setup, but making this change or not does not affect the result.

The problem is in the extraction:

2024-10-11 10:08:14,158 - File Failed in extraction: {'message': 'Failed To Process File:Steve_Jobs or LLM Unable To Parse Content ', 'error_message': "'tail_type'", 'file_name': 'Steve_Jobs', 'status': 'Failed', 'db_url': 'neo4j+s://343561dd.databases.neo4j.io:7687', 'failed_count': 1, 'source_type': 'Wikipedia', 'source_url': None, 'wiki_query': 'Steve_Jobs', 'logging_time': '2024-10-11 10:08:14 UTC'}
Traceback (most recent call last):
  File "/code/score.py", line 190, in extract_knowledge_graph_from_file
    result = await asyncio.to_thread(
  File "/usr/local/lib/python3.10/asyncio/threads.py", line 25, in to_thread
    return await loop.run_in_executor(None, func_call)
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/code/src/main.py", line 255, in extract_graph_from_file_Wikipedia
    return processing_source(uri, userName, password, database, model, file_name, pages, allowedNodes, allowedRelationship)
  File "/code/src/main.py", line 335, in processing_source
    node_count,rel_count = processing_chunks(selected_chunks,graph,uri, userName, password, database,file_name,model,allowedNodes,allowedRelationship,node_count, rel_count)
  File "/code/src/main.py", line 410, in processing_chunks
    graph_documents =  get_graph_from_llm(model, chunkId_chunkDoc_list, allowedNodes, allowedRelationship)
  File "/code/src/llm.py", line 192, in get_graph_from_llm
    graph_document_list = get_graph_document_list(
  File "/code/src/llm.py", line 172, in get_graph_document_list
    graph_document = future.result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/site-packages/langchain_experimental/graph_transformers/llm.py", line 795, in convert_to_graph_documents
    return [self.process_response(document, config) for document in documents]
  File "/usr/local/lib/python3.10/site-packages/langchain_experimental/graph_transformers/llm.py", line 795, in <listcomp>
    return [self.process_response(document, config) for document in documents]
  File "/usr/local/lib/python3.10/site-packages/langchain_experimental/graph_transformers/llm.py", line 748, in process_response
    nodes_set.add((rel["tail"], rel["tail_type"]))
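For what it's worth, the traceback suggests the model's parsed output is missing the expected relationship fields. The following is only an illustration inferred from the line that raises (the relationship values are made up, not taken from the real response):

# process_response (per the traceback) reads rel["tail"] and rel["tail_type"];
# a response that omits "tail_type" produces exactly the KeyError reported above
rel = {"head": "Steve Jobs", "head_type": "Person", "tail": "Apple"}  # no "tail_type" key
nodes_set = set()
nodes_set.add((rel["tail"], rel["tail_type"]))  # raises KeyError: 'tail_type'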

I am not sure you guys saw that even the docker-compose has an entry for Ollama, - LLM_MODEL_CONFIG_ollama_llama3=${LLM_MODEL_CONFIG_ollama_llama3-}, but even in this case a variation to - LLM_MODEL_CONFIG_ollama=${LLM_MODEL_CONFIG_ollama-} does not bring any result.

Any chance of shedding some final light on this?

IzzyHibbert avatar Oct 11 '24 10:10 IzzyHibbert

Please try to take the latest pull from the main branch. For running locally, the config is LLM_MODEL_CONFIG_ollama_llama3="llama2,http://localhost:11434"

For docker build - LLM_MODEL_CONFIG_ollama_llama3=${LLM_MODEL_CONFIG_ollama_llama3-llama3,http://host.docker.internal:11434}

aashipandya avatar Nov 12 '24 07:11 aashipandya

Please try to take the latest pull from the main branch. For running locally, the config is LLM_MODEL_CONFIG_ollama_llama3="llama2,http://localhost:11434"

For docker build - LLM_MODEL_CONFIG_ollama_llama3=${LLM_MODEL_CONFIG_ollama_llama3-llama3,http://host.docker.internal:11434}

I'm having issues getting Ollama to show up in the LLM model selection.

I've installed the Ollama image in Docker and it's running on port 11434.

My config amendments:

Docker YAML:
LLM_MODEL_CONFIG_ollama_llama3=${LLM_MODEL_CONFIG_ollama_llama3-llama3,http://host.docker.internal:11434}

Build args:
VITE_LLM_MODELS=${VITE_LLM_MODELS-diffbot,openai-gpt-3.5,openai-gpt-4o,llama3}

.env file:
LLM_MODEL_CONFIG_ollama_llama3="llama3,http://host.docker.internal:11434"
VITE_LLM_MODELS="diffbot,openai-gpt-4o,llama3"

Are there any issues which are preventing Ollama from showing up?

richdatasci avatar Dec 02 '24 10:12 richdatasci

Hi @richdatasci, what have you mentioned for VITE_LLM_MODELS in the frontend env?

kartikpersistent avatar Dec 02 '24 10:12 kartikpersistent

Hi @richdatasci, what have you mentioned for VITE_LLM_MODELS in the frontend env?

I have it in the optional frontend variables in the .env file in the root folder. I'll try it in the frontend .env file too.

richdatasci avatar Dec 02 '24 10:12 richdatasci

Hi @richdatasci, what have you mentioned for VITE_LLM_MODELS in the frontend env?

I've tried quoting "llama3" in the frontend env and that gives the option for Llama3. However, when prompting, it gives the error:

Failed To Process File:crewAI or LLM Unable To Parse Content 'NoneType' object has no attribute 'split'

richdatasci avatar Dec 02 '24 13:12 richdatasci

Hi @richdatasci, it is related to the backend LLM configuration; can you mention your backend env variables?

kartikpersistent avatar Dec 02 '24 14:12 kartikpersistent

Hi @richdatasci, it is related to the backend LLM configuration; can you mention your backend env variables?

I've got this in my backend .env: LLM_MODEL_CONFIG_ollama_llama3="llama3,http://host.docker.internal:11434"

richdatasci avatar Dec 02 '24 14:12 richdatasci

Any update on this? It still doesn't work.

richdatasci avatar Dec 07 '24 23:12 richdatasci

Hi @richdatasci, this may help you: #678

kartikpersistent avatar Dec 12 '24 04:12 kartikpersistent

In addition to any configuration issues, this may be related to https://github.com/langchain-ai/langchain-experimental/issues/38.

pbrady avatar Jan 31 '25 18:01 pbrady

Hi @richdatasci, could you solve this issue? I am facing the same problem.

I saw this configuration for the backend .env file: LLM_MODEL_CONFIG_ollama_llama3="llama3.1, http://localhost:11434/" but it was later revised to: LLM_MODEL_CONFIG_ollama="llama3.1, http://localhost:11434/"

This left me a bit confused about how to properly configure the .env files for both the backend and frontend.

When I check the models installed on my local machine using ollama list, I get:

C:\Users\DataSpurs>ollama list
NAME             ID              SIZE      MODIFIED
llama3:latest    365c0bd3c000    4.7 GB    2 days ago

So my question is: which format should I use in the .env file?

LLM_MODEL_CONFIG_ollama_llama3="llama3:latest, http://localhost:11434/"

LLM_MODEL_CONFIG_ollama="llama3:latest, http://localhost:11434/"

LLM_MODEL_CONFIG_ollama_llama3="llama3.1, http://localhost:11434/"

LLM_MODEL_CONFIG_ollama="llama3.1, http://localhost:11434/"

I’d really appreciate your clarification on this. Thanks in advance!

damiannqn90 avatar Apr 11 '25 07:04 damiannqn90

Hi @richdatasci, could you solve this issue? I am facing the same problem.

I saw this configuration for the backend .env file: LLM_MODEL_CONFIG_ollama_llama3="llama3.1, http://localhost:11434/" but it was later revised to: LLM_MODEL_CONFIG_ollama="llama3.1, http://localhost:11434/"

This left me a bit confused about how to properly configure the .env files for both the backend and frontend.

When I check the models installed on my local machine using ollama list, I get:

C:\Users\DataSpurs>ollama list
NAME             ID              SIZE      MODIFIED
llama3:latest    365c0bd3c000    4.7 GB    2 days ago

So my question is: which format should I use in the .env file?

LLM_MODEL_CONFIG_ollama_llama3="llama3:latest, http://localhost:11434/"

LLM_MODEL_CONFIG_ollama="llama3:latest, http://localhost:11434/"

LLM_MODEL_CONFIG_ollama_llama3="llama3.1, http://localhost:11434/"

LLM_MODEL_CONFIG_ollama="llama3.1, http://localhost:11434/"

I’d really appreciate your clarification on this. Thanks in advance!

Hey Damian,

Yes it is resolved.

In the backend .env and root .env I have:
LLM_MODEL_CONFIG_ollama_llama3="llama3,http://host.docker.internal:11434"

In the front end .env I also put: VITE_LLM_MODELS="llama3"

Below is the contents of my docker-compose.yml:

version: "3"

services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    volumes:
      - ./backend:/code
    environment:
      - NEO4J_URI=${NEO4J_URI-neo4j://database:7687}
      - NEO4J_PASSWORD=${NEO4J_PASSWORD-password}
      - NEO4J_USERNAME=${NEO4J_USERNAME-neo4j}
      - OPENAI_API_KEY=${OPENAI_API_KEY-}
      - DIFFBOT_API_KEY=${DIFFBOT_API_KEY-}
      - EMBEDDING_MODEL=${EMBEDDING_MODEL-all-MiniLM-L6-v2}
      - LANGCHAIN_ENDPOINT=${LANGCHAIN_ENDPOINT-}
      - LANGCHAIN_TRACING_V2=${LANGCHAIN_TRACING_V2-}
      - LANGCHAIN_PROJECT=${LANGCHAIN_PROJECT-}
      - LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY-}
      - KNN_MIN_SCORE=${KNN_MIN_SCORE-0.94}
      - IS_EMBEDDING=${IS_EMBEDDING-true}
      - GEMINI_ENABLED=${GEMINI_ENABLED-False}
      - GCP_LOG_METRICS_ENABLED=${GCP_LOG_METRICS_ENABLED-False}
      - UPDATE_GRAPH_CHUNKS_PROCESSED=${UPDATE_GRAPH_CHUNKS_PROCESSED-20}
      - NUMBER_OF_CHUNKS_TO_COMBINE=${NUMBER_OF_CHUNKS_TO_COMBINE-6}
      - ENTITY_EMBEDDING=${ENTITY_EMBEDDING-False}
      - GCS_FILE_CACHE=${GCS_FILE_CACHE-False}
#      - LLM_MODEL_CONFIG_anthropic_claude_35_sonnet=${LLM_MODEL_CONFIG_anthropic_claude_35_sonnet-}
#      - LLM_MODEL_CONFIG_fireworks_llama_v3_70b=${LLM_MODEL_CONFIG_fireworks_llama_v3_70b-}
#      - LLM_MODEL_CONFIG_azure_ai_gpt_4o=${LLM_MODEL_CONFIG_azure_ai_gpt_4o-}
#      - LLM_MODEL_CONFIG_azure_ai_gpt_35=${LLM_MODEL_CONFIG_azure_ai_gpt_35-}
#      - LLM_MODEL_CONFIG_groq_llama3_70b=${LLM_MODEL_CONFIG_groq_llama3_70b-}
#      - LLM_MODEL_CONFIG_bedrock_claude_3_5_sonnet=${LLM_MODEL_CONFIG_bedrock_claude_3_5_sonnet-}
#     - LLM_MODEL_CONFIG_fireworks_qwen_72b=${LLM_MODEL_CONFIG_fireworks_qwen_72b-}
      - LLM_MODEL_CONFIG_ollama_llama3=${LLM_MODEL_CONFIG_ollama_llama3-llama3,http://host.docker.internal:11434}
    env_file:
       - ./backend/.env
    container_name: backend
    extra_hosts:
      - host.docker.internal:host-gateway
    ports:
      - "8000:8000"
    networks:
      - net

  frontend:
    depends_on:
      - backend
    build:
      context: ./frontend
      dockerfile: Dockerfile
      args:
        - VITE_BACKEND_API_URL=${VITE_BACKEND_API_URL-http://localhost:8000}
        - VITE_REACT_APP_SOURCES=${VITE_REACT_APP_SOURCES-local,youtube,wiki,s3}
        - VITE_LLM_MODELS=${VITE_LLM_MODELS-diffbot,openai-gpt-3.5,openai-gpt-4o,llama3}
        - VITE_GOOGLE_CLIENT_ID=${VITE_GOOGLE_CLIENT_ID-""}
        - VITE_BLOOM_URL=${VITE_BLOOM_URL-https://workspace-preview.neo4j.io/workspace/explore?connectURL={CONNECT_URL}&search=Show+me+a+graph&featureGenAISuggestions=true&featureGenAISuggestionsInternal=true}
        - VITE_TIME_PER_PAGE=${VITE_TIME_PER_PAGE-50}
        - VITE_CHUNK_SIZE=${VITE_CHUNK_SIZE-5242880}
        - VITE_LARGE_FILE_SIZE=${VITE_LARGE_FILE_SIZE-5242880}
        - VITE_ENV=${VITE_ENV-DEV}
        - VITE_CHAT_MODES=${VITE_CHAT_MODES-""}
        - VITE_BATCH_SIZE=${VITE_BATCH_SIZE-2}
    volumes:
      - ./frontend:/app
      - /app/node_modules
    env_file:
       - ./frontend/.env
    container_name: frontend
    ports:
      - "8080:8080"
    networks:
      - net

networks:
  net:

It works perfectly now. I have also set up a local Neo4j DB, so the entire thing is hosted in Docker with a custom frontend.
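If Ollama connectivity is ever in doubt, one quick sanity check (a sketch; /api/tags is Ollama's model-listing endpoint, and host.docker.internal assumes the Docker setup above):

import json
import urllib.request

# list the models Ollama exposes; run this from inside the backend container,
# or swap in http://localhost:11434 when checking directly on the host
with urllib.request.urlopen("http://host.docker.internal:11434/api/tags", timeout=5) as resp:
    tags = json.load(resp)
print([m["name"] for m in tags.get("models", [])])  # e.g. ['llama3:latest']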

richdatasci avatar Apr 11 '25 07:04 richdatasci

Hey @richdatasci , thanks for your message and for sharing your setup.

I’ve tried using the same configuration you mentioned:

In backend and root .env: LLM_MODEL_CONFIG_ollama_llama3="llama3,http://host.docker.internal:11434"

In frontend .env: VITE_LLM_MODELS="llama3"

However, I’m still having issues. The error I’m getting is:

Exception: Environment variable 'LLM_MODEL_CONFIG_llama3' is not defined as per format or missing

This makes me wonder if maybe the variable name should follow a different pattern. I’ve tested changing the frontend config to: VITE_LLM_MODELS="ollama_llama3" and tried various combinations, but I keep getting the same error.

The strange thing is that it does create the chunks, but it doesn’t generate the entities.

I’ve also tried with the model running both in Docker and directly on my local machine, but the result is always the same.

Has this happened to you at any point? Do you have any suggestions?

Thanks in advance!

damiannqn90 avatar Apr 11 '25 08:04 damiannqn90

Hi @damiannqn90 and @richdatasci, I have just tried the application with the ollama llama3 model. Here is the configuration: LLM_MODEL_CONFIG_ollama_llama3=${LLM_MODEL_CONFIG_ollama_llama3-llama3,http://host.docker.internal:11434}. I have not mentioned anything in VITE_LLM_MODELS=${VITE_LLM_MODELS-}, so all models get rendered in the dropdown; you can choose ollama llama3 and click on extract.
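For anyone still unsure about the naming, the contract implied by the get_llm excerpt quoted earlier in this thread is that the model name coming from the frontend dropdown is appended to the LLM_MODEL_CONFIG_ prefix, so selecting "ollama_llama3" requires a backend variable named LLM_MODEL_CONFIG_ollama_llama3. A minimal sketch of that mapping (illustrative only; the actual backend code may differ):

import os

def resolve_model_config(model: str) -> tuple[str, str]:
    """Map a dropdown model name (e.g. "ollama_llama3") to its backend config."""
    env_key = "LLM_MODEL_CONFIG_" + model  # e.g. LLM_MODEL_CONFIG_ollama_llama3
    env_value = os.environ.get(env_key)
    if env_value is None:
        raise RuntimeError(f"Environment variable '{env_key}' is not defined as per format or missing")
    # the value is "<model name>,<base url>"; stripping spaces is a precaution,
    # since some configs in this thread include a space after the comma
    model_name, base_url = (part.strip() for part in env_value.split(","))
    return model_name, base_url

# e.g. with LLM_MODEL_CONFIG_ollama_llama3="llama3,http://host.docker.internal:11434":
# resolve_model_config("ollama_llama3") -> ("llama3", "http://host.docker.internal:11434")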


kartikpersistent avatar Apr 11 '25 15:04 kartikpersistent

Hi @kartikpersistent, thank you very much for your response.

In my case, I’m not running the app using Docker. I have a local LLM set up with Ollama, and I’m running the backend and frontend separately — using yarn run dev for the frontend and uvicorn score:app --reload for the backend.

Given this setup, what .env configuration should I use instead of the following? In this case, I assume I don't need to specify any OpenAI API key, right?

LLM_MODEL_CONFIG_ollama_llama3=${LLM_MODEL_CONFIG_ollama_llama3-llama3,http://host.docker.internal:11434}
VITE_LLM_MODELS=${VITE_LLM_MODELS-}

Below is the information about my local LLM models:

C:\Users\DataSpurs>ollama list
NAME              ID              SIZE      MODIFIED
llama3:latest     365c0bd3c000    4.7 GB    4 days ago
deepseek-r1:7b    0a8c26691023    4.7 GB    7 days ago

Thanks in advance for your help!

damiannqn90 avatar Apr 15 '25 10:04 damiannqn90