dependency failed to start: container genai-stack-database-1 exited (1)
How do I solve the following problem?
OS: Win10. After `docker compose up`, I got this:
```
Attaching to api-1, bot-1, database-1, front-end-1, loader-1, pdf_bot-1, pull-model-1
database-1    |
database-1    | Folder /data is not accessible for user: 7474 or group 7474. This is commonly a file permissions issue on the mounted folder.
database-1    |
database-1    | Hints to solve the issue:
database-1    | 1) Make sure the folder exists before mounting it. Docker will create the folder using root permissions before starting the Neo4j container. The root permissions disallow Neo4j from writing to the mounted folder.
database-1    | 2) Pass the folder owner's user ID and group ID to docker run, so that docker runs as that user.
database-1    | If the folder is owned by the current user, this can be done by adding this flag to your docker run command:
database-1    |   --user=$(id -u):$(id -g)
pull-model-1  | pulling ollama model llama2 using http://host.docker.internal:11434
database-1 exited with code 1
Gracefully stopping... (press Ctrl+C again to force)
dependency failed to start: container genai-stack-database-1 exited (1)
```
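The hints in the log boil down to something like the following (a minimal sketch for a Linux/macOS shell; on Win10 with Docker Desktop the folder-creation step is the part that applies, and the image tag and paths are only illustrative):

```bash
# 1) Create the bind-mounted folder yourself, so Docker does not create it as root:
mkdir -p ./data

# 2) For a plain `docker run`, start the container as the folder's owner,
#    which is the flag the error message suggests (neo4j:5 is an illustrative tag):
docker run --user="$(id -u):$(id -g)" -v "$PWD/data:/data" neo4j:5
```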
.env file I used:

```env
#*****************************************************************
# LLM and Embedding Model
#*****************************************************************
LLM=qwen2.5:3b #or any Ollama model tag, gpt-4 (o or turbo), gpt-3.5, or any bedrock model
EMBEDDING_MODEL=mxbai-embed-large:latest #or google-genai-embedding-001 openai, ollama, or aws

#*****************************************************************
# Neo4j
#*****************************************************************
NEO4J_URI=neo4j://database:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=12345678

#*****************************************************************
# Langchain
#*****************************************************************
# Optional for enabling Langchain Smith API
LANGCHAIN_TRACING_V2=true # false
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_PROJECT="" #your-project-name
LANGCHAIN_API_KEY="" #your-api-key ls_...

#*****************************************************************
# Ollama
#*****************************************************************
OLLAMA_BASE_URL=http://host.docker.internal:11434
```
It seems that .env has no effect; the Ollama model llama2 was pulled instead of qwen2.5:3b.
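A quick way to check whether Compose is actually reading the .env file is to inspect the resolved configuration (standard Docker Compose CLI, run from the folder containing docker-compose.yml and .env):

```bash
# Print the fully resolved configuration and see what LLM expands to:
docker compose config | grep -i llm

# If it still resolves to llama2, the fallback default in docker-compose.yml is being
# used; you can also point Compose at the env file explicitly:
docker compose --env-file .env up
```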
I'm getting the same issue in my project setup as well.
To resolve this issue, open the docker-compose.yml file and edit the database service, changing the volumes section as below:

```yaml
volumes:
  - ./data:/data
```
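If you go this route, it may also help to create the host folder yourself and recreate the containers so the edited compose file takes effect (a sketch; the folder name matches the volumes entry above):

```bash
mkdir -p ./data                     # create the folder with your own permissions first
docker compose up --force-recreate  # recreate containers after editing docker-compose.yml
```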
> To resolve this issue, open the docker-compose.yml file and edit the database service, changing the volumes section as below:
>
>     volumes:
>       - ./data:/data

This solution didn't work here.
Commenting out line 33 of docker-compose.yml fixed this for me:

```yaml
# user: neo4j:neo4j
```
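Line numbers can drift between versions of the file; if yours doesn't match, something like this locates the entry (assumes a POSIX shell, e.g. Git Bash on Windows):

```bash
grep -n "user:" docker-compose.yml
```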
> It seems that .env has no effect; the Ollama model llama2 was pulled instead of qwen2.5:3b.
I had to change lines 27, 108, 147, and 188 of docker-compose.yml to reference the correct LLM, in my case llama3.2:

```
LLM=${LLM-llama3.2}
```
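For context, `${LLM-llama3.2}` is Compose variable substitution: it uses the LLM value from the environment/.env when set and falls back to the literal tag otherwise, so this edit only matters when the .env value isn't being picked up. A sketch for finding all the fallback occurrences (line numbers in your copy may differ):

```bash
# List every place docker-compose.yml falls back to a hardcoded model tag:
grep -n 'LLM-' docker-compose.yml
```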
Edit: and also in chains.py:38:

```python
base_url=config["ollama_base_url"], model="llama3.2"
```
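If you hardcode a tag like this, it is worth confirming the tag actually exists in the Ollama instance the stack talks to (ollama CLI on the host; llama3.2 here is just the tag chosen above):

```bash
ollama pull llama3.2   # make sure the tag is present locally
ollama list            # verify it shows up before restarting the stack
```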