[Bug]: ragflow-server can't connect to es01
Self Checks
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (Language Policy).
- [x] Non-English title submissions will be closed directly (Language Policy).
- [x] Please do not modify this template :) and fill in all the required fields.
RAGFlow workspace code commit ID
d0eda83
RAGFlow image version
v0.17.2
Other environment information
Actual behavior
The ragflow-server can't connect to es01.
ragflow-server logs:

```
2025-03-17 20:40:26,733 INFO 19 ragflow_server log path: /ragflow/logs/ragflow_server.log, log levels: {'peewee': 'WARNING', 'pdfminer': 'WARNING', 'root': 'INFO'}
2025-03-17 20:40:34,396 INFO 19 init database on cluster mode successfully
2025-03-17 20:40:47,892 INFO 19 [RAGFlow ASCII-art banner]
2025-03-17 20:40:47,892 INFO 19 RAGFlow version: v0.17.2 full
2025-03-17 20:40:47,892 INFO 19 project base: /ragflow
2025-03-17 20:40:47,893 INFO 19 Current configs, from /ragflow/conf/service_conf.yaml: ragflow: {'host': '0.0.0.0', 'http_port': 9380} mysql: {'name': 'rag_flow', 'user': 'root', 'password': '', 'host': 'mysql', 'port': 3306, 'max_connections': 100, 'stale_timeout': 30} minio: {'user': 'rag_flow', 'password': '', 'host': 'minio:9000'} es: {'hosts': 'http://es01:9200', 'username': 'elastic', 'password': ''} infinity: {'uri': 'infinity:23817', 'db_name': 'default_db'} redis: {'db': 1, 'password': '', 'host': 'redis:6379'}
2025-03-17 20:40:47,893 INFO 19 Use Elasticsearch http://es01:9200 as the doc engine.
2025-03-17 20:42:58,620 INFO 19 GET http://es01:9200/ [status:N/A duration:130.725s]
2025-03-17 20:42:58,620 WARNING 19 Node <Urllib3HttpNode(http://es01:9200)> has failed for 1 times in a row, putting on 1 second timeout
2025-03-17 20:42:58,620 WARNING 19 Connection timed out. Waiting Elasticsearch http://es01:9200 to be healthy.
2025-03-17 20:43:56,648 INFO 16 ragflow_server log path: /ragflow/logs/ragflow_server.log, log levels: {'peewee': 'WARNING', 'pdfminer': 'WARNING', 'root': 'INFO'}
```
task_executor logs:

```
2025-03-17 20:41:02,483 INFO 34 task_consumer_0 log path: /ragflow/logs/task_consumer_0.log, log levels: {'peewee': 'WARNING', 'pdfminer': 'WARNING', 'root': 'INFO'}
2025-03-17 20:41:02,484 INFO 34 [TaskExecutor ASCII-art banner]
2025-03-17 20:41:02,485 INFO 34 TaskExecutor: RAGFlow version: v0.17.2 full
2025-03-17 20:41:02,485 INFO 34 Use Elasticsearch http://es01:9200 as the doc engine.
2025-03-17 20:43:12,955 INFO 34 GET http://es01:9200/ [status:N/A duration:130.469s]
2025-03-17 20:43:12,956 WARNING 34 Node <Urllib3HttpNode(http://es01:9200)> has failed for 1 times in a row, putting on 1 second timeout
2025-03-17 20:43:12,956 WARNING 34 Connection timed out. Waiting Elasticsearch http://es01:9200 to be healthy.
2025-03-17 20:44:35,868 INFO 29 task_consumer_0 log path: /ragflow/logs/task_consumer_0.log, log levels: {'peewee': 'WARNING', 'pdfminer': 'WARNING', 'root': 'INFO'}
2025-03-17 20:44:35,869 INFO 29 [TaskExecutor ASCII-art banner]
2025-03-17 20:44:35,869 INFO 29 TaskExecutor: RAGFlow version: v0.17.2 full
2025-03-17 20:44:35,870 INFO 29 Use Elasticsearch http://es01:9200 as the doc engine.
2025-03-17 20:46:45,948 INFO 29 GET http://es01:9200/ [status:N/A duration:130.077s]
2025-03-17 20:46:45,948 WARNING 29 Node <Urllib3HttpNode(http://es01:9200)> has failed for 1 times in a row, putting on 1 second timeout
2025-03-17 20:46:45,948 WARNING 29 Connection timed out. Waiting Elasticsearch http://es01:9200 to be healthy.
2025-03-17 20:49:01,116 INFO 29 GET http://es01:9200/ [status:N/A duration:130.161s]
2025-03-17 20:49:01,116 WARNING 29 Node <Urllib3HttpNode(http://es01:9200)> has failed for 1 times in a row, putting on 1 second timeout
2025-03-17 20:49:01,116 WARNING 29 Connection timed out. Waiting Elasticsearch http://es01:9200 to be healthy.
2025-03-17 20:49:06,122 INFO 29 Resurrected node <Urllib3HttpNode(http://es01:9200)> (force=False)
2025-03-17 20:51:16,284 INFO 29 HEAD http://es01:9200/ [status:N/A duration:130.161s]
2025-03-17 20:51:16,284 WARNING 29 Node <Urllib3HttpNode(http://es01:9200)> has failed for 2 times in a row, putting on 2 second timeout
2025-03-17 20:51:16,284 ERROR 29 Elasticsearch http://es01:9200 is unhealthy in 120s.
2025-03-17 20:51:51,267 INFO 540 task_consumer_0 log path: /ragflow/logs/task_consumer_0.log, log levels: {'peewee': 'WARNING', 'pdfminer': 'WARNING', 'root': 'INFO'}
2025-03-17 20:51:51,268 INFO 540 [TaskExecutor ASCII-art banner]
```
Running `docker exec ragflow-server curl -v http://es01:9200` outputs:

```
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Trying 172.18.0.2:9200...
  0     0    0     0    0     0      0      0 --:--:--  0:01:07 --:--:--     0
```
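A connect attempt that times out (rather than being refused) means the SYN is never answered, which points at packets being dropped rather than at Elasticsearch itself. A minimal sketch to classify the failure mode; the `es01`/`9200` target is the one from this setup, and the script is assumed to be run from inside the ragflow-server container:

```python
import socket

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    """Classify a TCP connection attempt.

    'open'    -> the three-way handshake completed
    'refused' -> an RST came back (host reachable, nothing listening)
    'timeout' -> SYN sent but never answered (packets dropped in transit)
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except socket.timeout:
        return "timeout"
    except ConnectionRefusedError:
        return "refused"
    except OSError as exc:          # DNS failure, unreachable network, etc.
        return f"error: {exc}"

if __name__ == "__main__":
    # Hypothetical invocation from inside the ragflow-server container:
    print(probe("es01", 9200))
```

A `refused` result would mean name resolution and routing are fine and only the service is down; `timeout` (as seen here) suggests a firewall or bridge filtering issue between the containers.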
Expected behavior
No response
Steps to reproduce
Follow the README; the output of `docker logs -f ragflow-server` differs from what is documented.
Additional information
No response
Sorry for my poor description.
Upload your service_conf.yaml file

```yaml
ragflow:
  host: ${RAGFLOW_HOST:-0.0.0.0}
  http_port: 9380
mysql:
  name: '${MYSQL_DBNAME:-rag_flow}'
  user: '${MYSQL_USER:-root}'
  password: '${MYSQL_PASSWORD:-infini_rag_flow}'
  host: '${MYSQL_HOST:-mysql}'
  port: 3306
  max_connections: 100
  stale_timeout: 30
minio:
  user: '${MINIO_USER:-rag_flow}'
  password: '${MINIO_PASSWORD:-infini_rag_flow}'
  host: '${MINIO_HOST:-minio}:9000'
es:
  hosts: 'http://${ES_HOST:-es01}:9200'
  username: '${ES_USER:-elastic}'
  password: '${ELASTIC_PASSWORD:-infini_rag_flow}'
infinity:
  uri: '${INFINITY_HOST:-infinity}:23817'
  db_name: 'default_db'
redis:
  db: 1
  password: '${REDIS_PASSWORD:-infini_rag_flow}'
  host: '${REDIS_HOST:-redis}:6379'
```
Connecting to es01 on the host works fine with port forwarding.
Output of `docker network inspect docker_ragflow`:

```json
[
    {
        "Name": "docker_ragflow",
        "Id": "e207edb9eef3bcce66b5b5c39af917884604ff988e1a3b16b763b081daf73778",
        "Created": "2025-03-18T09:32:46.480052903+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "1aea9499d778b951becc3d145e1a986a260aaf981685f3cc828b424d41154318": {
                "Name": "ragflow-server",
                "EndpointID": "dfb0ae74a412239189235a1d1a3835d0701b38ce4482bf54f9e3430ae6299b66",
                "MacAddress": "02:42:ac:15:00:06",
                "IPv4Address": "172.21.0.6/16",
                "IPv6Address": ""
            },
            "2067e8aa9a758418da0f1c26beeedc51dca5b961d3ef987af0cdb4e301656bba": {
                "Name": "ragflow-es-01",
                "EndpointID": "26674f687af592412b250a222330a66894851b6e4b5573589989cb6ab998da2f",
                "MacAddress": "02:42:ac:15:00:03",
                "IPv4Address": "172.21.0.3/16",
                "IPv6Address": ""
            },
            "60f409647042f284153b04891c05ff69b29e2dd75111db5a1ccf1d8ed73bf239": {
                "Name": "ragflow-mysql",
                "EndpointID": "c040f5fd958c67b858af03ee73c4d56c733e731d41a2d7cdb79e8d3ae3a829e0",
                "MacAddress": "02:42:ac:15:00:05",
                "IPv4Address": "172.21.0.5/16",
                "IPv6Address": ""
            },
            "bde70249bb80c1662ba60e2f177237083d88f389529c8731c84866a2a6ea2661": {
                "Name": "ragflow-minio",
                "EndpointID": "21db1a31610e745069934e2496e18a1a6fa91e6cdc2b4f10042c38170bda3e85",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            },
            "ce58fea01fed1ae6a4b64da18b3932cf2da36bcd76610c930110b6b384efb8b6": {
                "Name": "ragflow-redis",
                "EndpointID": "2dcb0da3a2bd12c9579138b3b5ddbdce44e70aed690868a5296792770d0953fd",
                "MacAddress": "02:42:ac:15:00:04",
                "IPv4Address": "172.21.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "ragflow",
            "com.docker.compose.project": "docker",
            "com.docker.compose.version": "2.17.2"
        }
    }
]
```
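As a quick sanity check, the name-to-IP mapping can also be extracted from the inspect output programmatically. A sketch using a sample trimmed from the listing above (container IDs shortened, so the keys are hypothetical):

```python
import json

# Trimmed sample of `docker network inspect` output; only the fields we read.
inspect_output = """
[
  {
    "Name": "docker_ragflow",
    "Containers": {
      "1aea9499d778": {"Name": "ragflow-server", "IPv4Address": "172.21.0.6/16"},
      "2067e8aa9a75": {"Name": "ragflow-es-01", "IPv4Address": "172.21.0.3/16"}
    }
  }
]
"""

def container_ips(inspect_json: str) -> dict:
    """Map container name -> IPv4 address, dropping the prefix length."""
    network = json.loads(inspect_json)[0]
    return {
        c["Name"]: c["IPv4Address"].split("/")[0]
        for c in network["Containers"].values()
    }

print(container_ips(inspect_output))
# e.g. {'ragflow-server': '172.21.0.6', 'ragflow-es-01': '172.21.0.3'}
```

Feeding it the real output (`docker network inspect docker_ragflow > net.json`) confirms in one step that both containers sit on the same subnet.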
Confirmed:
- The containers are on the same network, and the network type is bridge.
- Container names resolve correctly to IPs.
- The service is listening on 0.0.0.0.
- In a container test, the corresponding port of the container can be accessed.
- The firewall is not active on the host machine.

How can this connection timeout between containers be solved?
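When a SYN is visible on the bridge but never answered even though no firewall appears active, one common suspect on Linux hosts is `br_netfilter`: with `net.bridge.bridge-nf-call-iptables=1`, bridged container-to-container frames are passed through iptables/nftables, where a host rule can silently drop them. A small sketch (assuming the standard procfs path; this is a general Linux networking check, not RAGFlow-specific) to read that setting:

```python
from pathlib import Path
from typing import Optional

def bridge_nf_call_iptables(
    proc_path: str = "/proc/sys/net/bridge/bridge-nf-call-iptables",
) -> Optional[bool]:
    """Return True/False if br_netfilter passes bridged IPv4 frames to
    iptables, or None if the br_netfilter module is not loaded
    (in which case the procfs path does not exist)."""
    p = Path(proc_path)
    if not p.exists():
        return None
    return p.read_text().strip() == "1"

if __name__ == "__main__":
    print(bridge_nf_call_iptables())
```

If it returns `True` and intra-bridge traffic is being dropped, the usual next steps are to inspect host firewall zones for the `br-*` interface (firewalld/ufw) or to temporarily set the sysctl to 0 as a diagnostic.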
Sorry, that doesn't seem to be the problem. I tried modifying elasticsearch.yml on es01 to add the xpack.security option and then used curl against http://es01:9200 to check the response, but it didn't produce the expected result.
docker-compose-base.yaml:

```yaml
services:
  es01:
    container_name: ragflow-es-01
    profiles:
      - elasticsearch
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    env_file: .env
    environment:
      - node.name=es01
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=false
      - discovery.type=single-node
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - cluster.routing.allocation.disk.watermark.low=5gb
      - cluster.routing.allocation.disk.watermark.high=3gb
      - cluster.routing.allocation.disk.watermark.flood_stage=2gb
      - TZ=${TIMEZONE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: ["CMD-SHELL", "curl http://localhost:9200"]
      interval: 10s
      timeout: 10s
      retries: 120
    networks:
      - ragflow
    restart: on-failure

  infinity:
    container_name: ragflow-infinity
    profiles:
      - infinity
    image: infiniflow/infinity:v0.6.0-dev3
    volumes:
      - infinity_data:/var/infinity
      - ./infinity_conf.toml:/infinity_conf.toml
    command: ["-f", "/infinity_conf.toml"]
    ports:
      - ${INFINITY_THRIFT_PORT}:23817
      - ${INFINITY_HTTP_PORT}:23820
      - ${INFINITY_PSQL_PORT}:5432
    env_file: .env
    environment:
      - TZ=${TIMEZONE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      nofile:
        soft: 500000
        hard: 500000
    networks:
      - ragflow
    healthcheck:
      test: ["CMD", "curl", "http://localhost:23820/admin/node/current"]
      interval: 10s
      timeout: 10s
      retries: 120
    restart: on-failure

  mysql:
    # mysql:5.7 linux/arm64 image is unavailable.
    image: mysql:8.0.39
    container_name: ragflow-mysql
    env_file: .env
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
      - TZ=${TIMEZONE}
    command:
      --max_connections=1000
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_unicode_ci
      --default-authentication-plugin=mysql_native_password
      --tls_version="TLSv1.2,TLSv1.3"
      --init-file /data/application/init.sql
    ports:
      - ${MYSQL_PORT}:3306
    volumes:
      - mysql_data:/var/lib/mysql
      - ./init.sql:/data/application/init.sql
    networks:
      - ragflow
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-uroot", "-p${MYSQL_PASSWORD}"]
      interval: 10s
      timeout: 10s
      retries: 3
    restart: on-failure

  minio:
    image: quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z
    container_name: ragflow-minio
    command: server --console-address ":9001" /data
    ports:
      - ${MINIO_PORT}:9000
      - ${MINIO_CONSOLE_PORT}:9001
    env_file: .env
    environment:
      - MINIO_ROOT_USER=${MINIO_USER}
      - MINIO_ROOT_PASSWORD=${MINIO_PASSWORD}
      - TZ=${TIMEZONE}
    volumes:
      - minio_data:/data
    networks:
      - ragflow
    restart: on-failure

  redis:
    # swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/valkey/valkey:8
    image: valkey/valkey:8
    container_name: ragflow-redis
    command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 128mb --maxmemory-policy allkeys-lru
    env_file: .env
    ports:
      - ${REDIS_PORT}:6379
    volumes:
      - redis_data:/data
    networks:
      - ragflow
    restart: on-failure

volumes:
  esdata01:
    driver: local
  infinity_data:
    driver: local
  mysql_data:
    driver: local
  minio_data:
    driver: local
  redis_data:
    driver: local

networks:
  ragflow:
    driver: bridge
    attachable: true
    driver_opts:
      com.docker.network.bridge.enable_icc: "true"
```
docker-compose.yaml:

```yaml
# include:
#   - ./docker-compose-base.yml
services:
  ragflow:
    depends_on:
      mysql:
        condition: service_healthy
    image: ${RAGFLOW_IMAGE}
    container_name: ragflow-server
    ports:
      - ${SVR_HTTP_PORT}:9380
      - 80:80
      - 443:443
    volumes:
      - ./ragflow-logs:/ragflow/logs
      - ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
      - ./nginx/proxy.conf:/etc/nginx/proxy.conf
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    env_file: .env
    environment:
      - TZ=${TIMEZONE}
      - HF_ENDPOINT=${HF_ENDPOINT}
      - MACOS=${MACOS}
    networks:
      - ragflow
    restart: on-failure
    # https://docs.docker.com/engine/daemon/prometheus/#create-a-prometheus-configuration
    # If you're using Docker Desktop, the --add-host flag is optional. This flag makes sure that the host's internal IP gets exposed to the Prometheus container.
    extra_hosts:
      - "host.docker.internal:host-gateway"
```
Compose command:

```
docker compose -f docker-compose-base.yaml -f docker-compose.yaml up -d
```

My Docker doesn't support `include`.
Can anyone help me? Whatever I do, the containers cannot communicate with each other. The SYN sent by ragflow-server can be seen on the bridge, but there is no response. The es01 container can reach its own port 9200 and receives JSON, and so can the host machine.
`sudo iptables -L -n -v --line-numbers` output:

```
Chain INPUT (policy ACCEPT 52318 packets, 77M bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
2        0     0 DOCKER-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 42648 packets, 146M bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain DOCKER (2 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 ACCEPT     tcp  --  !br-f16fd3b954eb br-f16fd3b954eb  0.0.0.0/0            172.18.0.6           tcp dpt:9380
2        0     0 ACCEPT     tcp  --  !br-f16fd3b954eb br-f16fd3b954eb  0.0.0.0/0            172.18.0.6           tcp dpt:443
3        0     0 ACCEPT     tcp  --  !br-f16fd3b954eb br-f16fd3b954eb  0.0.0.0/0            172.18.0.6           tcp dpt:80
4        0     0 ACCEPT     tcp  --  !br-f16fd3b954eb br-f16fd3b954eb  0.0.0.0/0            172.18.0.4           tcp dpt:9001
5        0     0 ACCEPT     tcp  --  !br-f16fd3b954eb br-f16fd3b954eb  0.0.0.0/0            172.18.0.5           tcp dpt:3306
6        0     0 ACCEPT     tcp  --  !br-f16fd3b954eb br-f16fd3b954eb  0.0.0.0/0            172.18.0.4           tcp dpt:9000
7        0     0 ACCEPT     tcp  --  !br-f16fd3b954eb br-f16fd3b954eb  0.0.0.0/0            172.18.0.3           tcp dpt:6379
8        0     0 ACCEPT     tcp  --  !br-f16fd3b954eb br-f16fd3b954eb  0.0.0.0/0            172.18.0.2           tcp dpt:9200
9        0     0 DROP       all  --  !docker0 docker0  0.0.0.0/0            0.0.0.0/0
10       0     0 DROP       all  --  !br-f16fd3b954eb br-f16fd3b954eb  0.0.0.0/0            0.0.0.0/0

Chain DOCKER-BRIDGE (1 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
2        0     0 DOCKER     all  --  *      br-f16fd3b954eb  0.0.0.0/0            0.0.0.0/0

Chain DOCKER-CT (1 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
2        0     0 ACCEPT     all  --  *      br-f16fd3b954eb  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED

Chain DOCKER-FORWARD (1 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 DOCKER-CT  all  --  *      *       0.0.0.0/0            0.0.0.0/0
2        0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
3        0     0 DOCKER-BRIDGE  all  --  *      *       0.0.0.0/0            0.0.0.0/0
4        0     0 ACCEPT     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0
5        0     0 ACCEPT     all  --  br-f16fd3b954eb *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
2        0     0 DOCKER-ISOLATION-STAGE-2  all  --  br-f16fd3b954eb !br-f16fd3b954eb  0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 DROP       all  --  *      br-f16fd3b954eb  0.0.0.0/0            0.0.0.0/0
2        0     0 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (1 references)
num   pkts bytes target     prot opt in     out     source               destination
1     575K  830M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
```
`sudo tcpdump -i br-f16fd3b954eb port 9200 -nnv` output:

```
tcpdump: listening on br-f16fd3b954eb, link-type EN10MB (Ethernet), capture size 262144 bytes
20:23:00.539668 IP (tos 0x0, ttl 64, id 45181, offset 0, flags [DF], proto TCP (6), length 60)
    172.18.0.6.36374 > 172.18.0.2.9200: Flags [S], cksum 0x585b (incorrect -> 0x1b6c), seq 353195989, win 64240, options [mss 1460,sackOK,TS val 2040414447 ecr 0,nop,wscale 7], length 0
20:23:40.361826 IP (tos 0x0, ttl 64, id 57092, offset 0, flags [DF], proto TCP (6), length 60)
    172.18.0.6.36438 > 172.18.0.2.9200: Flags [S], cksum 0x585b (incorrect -> 0x1310), seq 1414694172, win 64240, options [mss 1460,sackOK,TS val 2040454270 ecr 0,nop,wscale 7], length 0
20:23:41.371671 IP (tos 0x0, ttl 64, id 57093, offset 0, flags [DF], proto TCP (6), length 60)
    172.18.0.6.36438 > 172.18.0.2.9200: Flags [S], cksum 0x585b (incorrect -> 0x0f1f), seq 1414694172, win 64240, options [mss 1460,sackOK,TS val 2040455279 ecr 0,nop,wscale 7], length 0
20:23:43.387655 IP (tos 0x0, ttl 64, id 57094, offset 0, flags [DF], proto TCP (6), length 60)
    172.18.0.6.36438 > 172.18.0.2.9200: Flags [S], cksum 0x585b (incorrect -> 0x073f), seq 1414694172, win 64240, options [mss 1460,sackOK,TS val 2040457295 ecr 0,nop,wscale 7], length 0
```

Only retransmitted SYNs from 172.18.0.6 to 172.18.0.2:9200 are captured; no SYN-ACK ever comes back.
I'm hitting the same problem: the es01 container can't communicate with the ragflow-server container. I'm using ragflow v0.17.3. My .env file:
```
# The type of doc engine to use.
# Available options:
# - `elasticsearch` (default)
# - `infinity` (https://github.com/infiniflow/infinity)
DOC_ENGINE=${DOC_ENGINE:-elasticsearch}

# ------------------------------
# docker env var for specifying vector db type at startup
# (based on the vector db type, the corresponding docker
# compose profile will be used)
# ------------------------------
COMPOSE_PROFILES=${DOC_ENGINE}

# The version of Elasticsearch.
STACK_VERSION=8.11.3

# The hostname where the Elasticsearch service is exposed
ES_HOST=es01

# The port used to expose the Elasticsearch service to the host machine,
# allowing EXTERNAL access to the service running inside the Docker container.
ES_PORT=1200

# The password for Elasticsearch.
ELASTIC_PASSWORD=infini_rag_flow

# The port used to expose the Kibana service to the host machine,
# allowing EXTERNAL access to the service running inside the Docker container.
KIBANA_PORT=6601
KIBANA_USER=rag_flow
KIBANA_PASSWORD=infini_rag_flow

# The maximum amount of the memory, in bytes, that a specific Docker container can use while running.
# Update it according to the available memory in the host machine.
MEM_LIMIT=8073741824

# The hostname where the Infinity service is exposed
INFINITY_HOST=infinity

# Port to expose Infinity API to the host
INFINITY_THRIFT_PORT=23817
INFINITY_HTTP_PORT=23820
INFINITY_PSQL_PORT=5432

# The password for MySQL.
MYSQL_PASSWORD=infini_rag_flow
# The hostname where the MySQL service is exposed
MYSQL_HOST=mysql
# The database of the MySQL service to use
MYSQL_DBNAME=rag_flow
# The port used to expose the MySQL service to the host machine,
# allowing EXTERNAL access to the MySQL database running inside the Docker container.
MYSQL_PORT=5455

# The hostname where the MinIO service is exposed
MINIO_HOST=minio
# The port used to expose the MinIO console interface to the host machine,
# allowing EXTERNAL access to the web-based console running inside the Docker container.
MINIO_CONSOLE_PORT=9001
# The port used to expose the MinIO API service to the host machine,
# allowing EXTERNAL access to the MinIO object storage service running inside the Docker container.
MINIO_PORT=9000
# The username for MinIO.
# When updated, you must revise the `minio.user` entry in service_conf.yaml accordingly.
MINIO_USER=rag_flow
# The password for MinIO.
# When updated, you must revise the `minio.password` entry in service_conf.yaml accordingly.
MINIO_PASSWORD=infini_rag_flow

# The hostname where the Redis service is exposed
REDIS_HOST=redis
# The port used to expose the Redis service to the host machine,
# allowing EXTERNAL access to the Redis service running inside the Docker container.
REDIS_PORT=6379
# The password for Redis.
REDIS_PASSWORD=infini_rag_flow

# The port used to expose RAGFlow's HTTP API service to the host machine,
# allowing EXTERNAL access to the service running inside the Docker container.
SVR_HTTP_PORT=9380

# The RAGFlow Docker image to download.
# Defaults to the v0.17.0-slim edition, which is the RAGFlow Docker image without embedding models.
# RAGFLOW_IMAGE=infiniflow/ragflow:v0.17.0-slim
#
# To download the RAGFlow Docker image with embedding models, uncomment the following line instead:
RAGFLOW_IMAGE=infiniflow/ragflow:v0.17.0
#
# The Docker image of the v0.17.0 edition includes:
# - Built-in embedding models:
#   - BAAI/bge-large-zh-v1.5
#   - BAAI/bge-reranker-v2-m3
#   - maidalun1020/bce-embedding-base_v1
#   - maidalun1020/bce-reranker-base_v1
# - Embedding models that will be downloaded once you select them in the RAGFlow UI:
#   - BAAI/bge-base-en-v1.5
#   - BAAI/bge-large-en-v1.5
#   - BAAI/bge-small-en-v1.5
#   - BAAI/bge-small-zh-v1.5
#   - jinaai/jina-embeddings-v2-base-en
#   - jinaai/jina-embeddings-v2-small-en
#   - nomic-ai/nomic-embed-text-v1.5
#   - sentence-transformers/all-MiniLM-L6-v2
#
#
# If you cannot download the RAGFlow Docker image:
#
# - For the `nightly-slim` edition, uncomment either of the following:
# RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly-slim
# RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:nightly-slim
#
# - For the `nightly` edition, uncomment either of the following:
# RAGFLOW_IMAGE=swr.cn-north-4.myhuaweicloud.com/infiniflow/ragflow:nightly
# RAGFLOW_IMAGE=registry.cn-hangzhou.aliyuncs.com/infiniflow/ragflow:nightly

# The local time zone.
TIMEZONE='Asia/Shanghai'

# Uncomment the following line if you have limited access to huggingface.co:
# HF_ENDPOINT=https://hf-mirror.com

# Optimizations for MacOS
# Uncomment the following line if your OS is MacOS:
# MACOS=1

# The maximum file size for each uploaded file, in bytes.
# You can uncomment this line and update the value if you wish to change the 128M file size limit
# MAX_CONTENT_LENGTH=134217728
# After making the change, ensure you update `client_max_body_size` in nginx/nginx.conf correspondingly.

# The log level for the RAGFlow's owned packages and imported packages.
# Available level:
# - `DEBUG`
# - `INFO` (default)
# - `WARNING`
# - `ERROR`
# For example, following line changes the log level of `ragflow.es_conn` to `DEBUG`:
# LOG_LEVELS=ragflow.es_conn=DEBUG

# aliyun OSS configuration
# STORAGE_IMPL=OSS
# ACCESS_KEY=xxx
# SECRET_KEY=eee
# ENDPOINT=http://oss-cn-hangzhou.aliyuncs.com
# REGION=cn-hangzhou
# BUCKET=ragflow65536
```
My docker-compose.yml:

```yaml
include:
  - ./docker-compose-base.yml

services:
  ragflow:
    depends_on:
      mysql:
        condition: service_healthy
    image: ${RAGFLOW_IMAGE}
    container_name: ragflow-server
    ports:
      - ${SVR_HTTP_PORT}:9380
      - 11180:80
      - 1443:443
    volumes:
      - ./ragflow-logs:/ragflow/logs
      - ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
      - ./nginx/proxy.conf:/etc/nginx/proxy.conf
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    env_file: .env
    environment:
      - TZ=${TIMEZONE}
      - HF_ENDPOINT=${HF_ENDPOINT}
      - MACOS=${MACOS}
    networks:
      - ragflow
    restart: on-failure
    # https://docs.docker.com/engine/daemon/prometheus/#create-a-prometheus-configuration
    # If you're using Docker Desktop, the --add-host flag is optional. This flag makes sure that the host's internal IP gets exposed to the Prometheus container.
    extra_hosts:
      - "host.docker.internal:host-gateway"

  # executor:
  #   depends_on:
  #     mysql:
  #       condition: service_healthy
  #   image: ${RAGFLOW_IMAGE}
  #   container_name: ragflow-executor
  #   volumes:
  #     - ./ragflow-logs:/ragflow/logs
  #     - ./nginx/ragflow.conf:/etc/nginx/conf.d/ragflow.conf
  #   env_file: .env
  #   environment:
  #     - TZ=${TIMEZONE}
  #     - HF_ENDPOINT=${HF_ENDPOINT}
  #     - MACOS=${MACOS}
  #   entrypoint: "/ragflow/entrypoint_task_executor.sh 1 3"
  #   networks:
  #     - ragflow
  #   restart: on-failure
  #   # https://docs.docker.com/engine/daemon/prometheus/#create-a-prometheus-configuration
  #   # If you're using Docker Desktop, the --add-host flag is optional. This flag makes sure that the host's internal IP gets exposed to the Prometheus container.
  #   extra_hosts:
  #     - "host.docker.internal:host-gateway"
```
My docker-compose-base.yml:

```yaml
services:
  es01:
    container_name: ragflow-es-01
    hostname: es01
    profiles:
      - elasticsearch
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    env_file: .env
    environment:
      - node.name=es01
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=false
      - discovery.type=single-node
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - cluster.routing.allocation.disk.watermark.low=5gb
      - cluster.routing.allocation.disk.watermark.high=3gb
      - cluster.routing.allocation.disk.watermark.flood_stage=2gb
      - TZ=${TIMEZONE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: ["CMD-SHELL", "curl http://localhost:9200"]
      interval: 10s
      timeout: 10s
      retries: 120
    networks:
      - ragflow
    restart: on-failure

  infinity:
    container_name: ragflow-infinity
    profiles:
      - infinity
    image: infiniflow/infinity:v0.6.0-dev3
    volumes:
      - infinity_data:/var/infinity
      - ./infinity_conf.toml:/infinity_conf.toml
    command: ["-f", "/infinity_conf.toml"]
    ports:
      - ${INFINITY_THRIFT_PORT}:23817
      - ${INFINITY_HTTP_PORT}:23820
      - ${INFINITY_PSQL_PORT}:5432
    env_file: .env
    environment:
      - TZ=${TIMEZONE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      nofile:
        soft: 500000
        hard: 500000
    networks:
      - ragflow
    healthcheck:
      test: ["CMD", "curl", "http://localhost:23820/admin/node/current"]
      interval: 10s
      timeout: 10s
      retries: 120
    restart: on-failure

  mysql:
    # mysql:5.7 linux/arm64 image is unavailable.
    image: mysql:8.0.39
    container_name: ragflow-mysql
    env_file: .env
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
      - TZ=${TIMEZONE}
    command:
      --max_connections=1000
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_unicode_ci
      --default-authentication-plugin=mysql_native_password
      --tls_version="TLSv1.2,TLSv1.3"
      --init-file /data/application/init.sql
    ports:
      - ${MYSQL_PORT}:3306
    volumes:
      - mysql_data:/var/lib/mysql
      - ./init.sql:/data/application/init.sql
    networks:
      - ragflow
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-uroot", "-p${MYSQL_PASSWORD}"]
      interval: 10s
      timeout: 10s
      retries: 3
    restart: on-failure

  minio:
    image: quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z
    container_name: ragflow-minio
    command: server --console-address ":9001" /data
    ports:
      - ${MINIO_PORT}:9000
      - ${MINIO_CONSOLE_PORT}:9001
    env_file: .env
    environment:
      - MINIO_ROOT_USER=${MINIO_USER}
      - MINIO_ROOT_PASSWORD=${MINIO_PASSWORD}
      - TZ=${TIMEZONE}
    volumes:
      - minio_data:/data
    networks:
      - ragflow
    restart: on-failure

  redis:
    # swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/valkey/valkey:8
    image: valkey/valkey:8
    container_name: ragflow-redis
    command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 128mb --maxmemory-policy allkeys-lru
    env_file: .env
    ports:
      - ${REDIS_PORT}:6379
    volumes:
      - redis_data:/data
    networks:
      - ragflow
    restart: on-failure

volumes:
  esdata01:
    driver: local
  infinity_data:
    driver: local
  mysql_data:
    driver: local
  minio_data:
    driver: local
  redis_data:
    driver: local

networks:
  ragflow:
    driver: bridge
```
service_conf.yaml:

```yaml
ragflow:
  host: ${RAGFLOW_HOST:-0.0.0.0}
  http_port: 9380
mysql:
  name: '${MYSQL_DBNAME:-rag_flow}'
  user: '${MYSQL_USER:-root}'
  password: '${MYSQL_PASSWORD:-infini_rag_flow}'
  host: '${MYSQL_HOST:-mysql}'
  port: 3306
  max_connections: 100
  stale_timeout: 30
minio:
  user: '${MINIO_USER:-rag_flow}'
  password: '${MINIO_PASSWORD:-infini_rag_flow}'
  host: '${MINIO_HOST:-minio}:9000'
es:
  hosts: 'http://${ES_HOST:-es01}:9200'
  username: '${ES_USER:-elastic}'
  password: '${ELASTIC_PASSWORD:-infini_rag_flow}'
infinity:
  uri: '${INFINITY_HOST:-infinity}:23817'
  db_name: 'default_db'
redis:
  db: 1
  password: '${REDIS_PASSWORD:-infini_rag_flow}'
  host: '${REDIS_HOST:-redis}:6379'
```
Logs from all containers:

```
root@SciServer:~/xxx/ragflow-main/docker# docker logs ragflow-es-01 --tail 10
gflow-server --tail 100
docker logs ragflow-minio --tail 100
docker logs ragflow-redis --tail 100
docker logs ragflow-mysql --tail 100
{"@timestamp":"2025-03-25T05:06:16.250Z", "log.level": "INFO", "message":"master node changed {previous [], current [{es01}{WW7WMmm_SUi-Z1AOY3YFtg}{2bqsIuAIRw2lx_FTJY_NhQ}{es01}{172.18.0.2}{172.18.0.2:9300}{cdfhilmrstw}{8.11.3}{7000099-8500003}]}, term: 3, version: 39, reason: Publication{term=3, version=39}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.service.ClusterApplierService","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-03-25T05:06:16.275Z", "log.level": "INFO", "message":"starting file watcher ...", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.common.file.AbstractFileWatchingService","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-03-25T05:06:16.279Z", "log.level": "INFO", "message":"file settings service up and running [tid=113]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[file-watcher[/usr/share/elasticsearch/config/operator/settings.json]]","log.logger":"org.elasticsearch.common.file.AbstractFileWatchingService","elasticsearch.cluster.uuid":"_OkwplwHQtKwdE-CPlIuWQ","elasticsearch.node.id":"WW7WMmm_SUi-Z1AOY3YFtg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-03-25T05:06:16.282Z", "log.level": "INFO", "message":"node-join: [{es01}{WW7WMmm_SUi-Z1AOY3YFtg}{2bqsIuAIRw2lx_FTJY_NhQ}{es01}{172.18.0.2}{172.18.0.2:9300}{cdfhilmrstw}{8.11.3}{7000099-8500003}] with reason [completing election]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.coordination.NodeJoinExecutor","elasticsearch.cluster.uuid":"_OkwplwHQtKwdE-CPlIuWQ","elasticsearch.node.id":"WW7WMmm_SUi-Z1AOY3YFtg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-03-25T05:06:16.288Z", "log.level": "INFO", "message":"publish_address {172.18.0.2:9200}, bound_addresses {[::]:9200}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.http.AbstractHttpServerTransport","elasticsearch.cluster.uuid":"_OkwplwHQtKwdE-CPlIuWQ","elasticsearch.node.id":"WW7WMmm_SUi-Z1AOY3YFtg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-03-25T05:06:16.289Z", "log.level": "INFO", "message":"started {es01}{WW7WMmm_SUi-Z1AOY3YFtg}{2bqsIuAIRw2lx_FTJY_NhQ}{es01}{172.18.0.2}{172.18.0.2:9300}{cdfhilmrstw}{8.11.3}{7000099-8500003}{ml.allocated_processors=64, ml.allocated_processors_double=64.0, ml.max_jvm_size=4039114752, ml.config_version=11.0.0, xpack.installed=true, transform.config_version=10.0.0, ml.machine_memory=8073740288}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.node.Node","elasticsearch.cluster.uuid":"_OkwplwHQtKwdE-CPlIuWQ","elasticsearch.node.id":"WW7WMmm_SUi-Z1AOY3YFtg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-03-25T05:06:16.476Z", "log.level": "INFO", "message":"license [f68e3fd3-8337-4dad-bca5-a32bf05f47f8] mode [basic] - valid", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.license.ClusterStateLicenseService","elasticsearch.cluster.uuid":"_OkwplwHQtKwdE-CPlIuWQ","elasticsearch.node.id":"WW7WMmm_SUi-Z1AOY3YFtg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-03-25T05:06:16.477Z", "log.level": "INFO", "message":"license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.xpack.security.authc.Realms","elasticsearch.cluster.uuid":"_OkwplwHQtKwdE-CPlIuWQ","elasticsearch.node.id":"WW7WMmm_SUi-Z1AOY3YFtg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-03-25T05:06:16.480Z", "log.level": "INFO", "message":"recovered [0] indices into cluster_state", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.gateway.GatewayService","elasticsearch.cluster.uuid":"_OkwplwHQtKwdE-CPlIuWQ","elasticsearch.node.id":"WW7WMmm_SUi-Z1AOY3YFtg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2025-03-25T05:06:16.527Z", "log.level": "INFO", "message":"Node [{es01}{WW7WMmm_SUi-Z1AOY3YFtg}] is selected as the current health node.", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][management][T#3]","log.logger":"org.elasticsearch.health.node.selection.HealthNodeTaskExecutor","elasticsearch.cluster.uuid":"_OkwplwHQtKwdE-CPlIuWQ","elasticsearch.node.id":"WW7WMmm_SUi-Z1AOY3YFtg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-cluster"}
root@SciServer:~/xxx/ragflow-main/docker# docker logs ragflow-server --tail 100
2025-03-25 13:06:13,474 INFO 16 ragflow_server log path: /ragflow/logs/ragflow_server.log, log levels: {'peewee': 'WARNING', 'pdfminer': 'WARNING', 'root': 'INFO'}
2025-03-25 13:06:18,793 INFO 16 init database on cluster mode successfully
```
2025-03-25 13:06:28,615 INFO 16
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
2025-03-25 13:06:28,615 INFO 16 RAGFlow version: v0.17.0 full
2025-03-25 13:06:28,616 INFO 16 project base: /ragflow
2025-03-25 13:06:28,616 INFO 16 Current configs, from /ragflow/conf/service_conf.yaml:
ragflow: {'host': '0.0.0.0', 'http_port': 9380}
mysql: {'name': 'rag_flow', 'user': 'root', 'password': '********', 'host': 'mysql', 'port': 3306, 'max_connections': 100, 'stale_timeout': 30}
minio: {'user': 'rag_flow', 'password': '********', 'host': 'minio:9000'}
es: {'hosts': 'http://es01:9200', 'username': 'elastic', 'password': '********'}
infinity: {'uri': 'infinity:23817', 'db_name': 'default_db'}
redis: {'db': 1, 'password': '********', 'host': 'redis:6379'}
2025-03-25 13:06:28,616 INFO 16 Use Elasticsearch http://es01:9200 as the doc engine.
2025-03-25 13:08:39,056 INFO 16 GET http://es01:9200/ [status:N/A duration:130.439s]
2025-03-25 13:08:39,056 WARNING 16 Node <Urllib3HttpNode(http://es01:9200)> has failed for 1 times in a row, putting on 1 second timeout
2025-03-25 13:08:39,056 WARNING 16 Connection timed out. Waiting Elasticsearch http://es01:9200 to be healthy.
WARNING:elastic_transport.node_pool:Node <Urllib3HttpNode(http://es01:9200)> has failed for 1 times in a row, putting on 1 second timeout
WARNING:ragflow.es_conn:Connection timed out. Waiting Elasticsearch http://es01:9200 to be healthy.
2025-03-25 13:10:54,223 INFO 16 GET http://es01:9200/ [status:N/A duration:130.160s]
2025-03-25 13:10:54,224 WARNING 16 Node <Urllib3HttpNode(http://es01:9200)> has failed for 1 times in a row, putting on 1 second timeout
2025-03-25 13:10:54,224 WARNING 16 Connection timed out. Waiting Elasticsearch http://es01:9200 to be healthy.
2025-03-25 13:10:59,227 INFO 16 Resurrected node <Urllib3HttpNode(http://es01:9200)> (force=False)
WARNING:elastic_transport.node_pool:Node <Urllib3HttpNode(http://es01:9200)> has failed for 1 times in a row, putting on 1 second timeout
WARNING:ragflow.es_conn:Connection timed out. Waiting Elasticsearch http://es01:9200 to be healthy.
2025-03-25 13:13:09,391 INFO 16 HEAD http://es01:9200/ [status:N/A duration:130.164s]
2025-03-25 13:13:09,392 WARNING 16 Node <Urllib3HttpNode(http://es01:9200)> has failed for 2 times in a row, putting on 2 second timeout
2025-03-25 13:13:09,392 ERROR 16 Elasticsearch http://es01:9200 is unhealthy in 120s.
Traceback (most recent call last):
File "/ragflow/api/ragflow_server.py", line 78, in <module>
settings.init_settings()
File "/ragflow/api/settings.py", line 121, in init_settings
docStoreConn = rag.utils.es_conn.ESConnection()
File "/ragflow/rag/utils/__init__.py", line 28, in _singleton
instances[key] = cls(*args, **kw)
File "/ragflow/rag/utils/es_conn.py", line 63, in __init__
raise Exception(msg)
Exception: Elasticsearch http://es01:9200 is unhealthy in 120s.
2025-03-25 13:13:11,613 INFO 470 ragflow_server log path: /ragflow/logs/ragflow_server.log, log levels: {'peewee': 'WARNING', 'pdfminer': 'WARNING', 'root': 'INFO'}
2025-03-25 13:13:16,775 INFO 470 init database on cluster mode successfully
WARNING:elastic_transport.node_pool:Node <Urllib3HttpNode(http://es01:9200)> has failed for 2 times in a row, putting on 2 second timeout
ERROR:ragflow.es_conn:Elasticsearch http://es01:9200 is unhealthy in 120s.
Traceback (most recent call last):
File "/ragflow/rag/svr/task_executor.py", line 786, in <module>
main()
File "/ragflow/rag/svr/task_executor.py", line 749, in main
settings.init_settings()
File "/ragflow/api/settings.py", line 121, in init_settings
docStoreConn = rag.utils.es_conn.ESConnection()
File "/ragflow/rag/utils/__init__.py", line 28, in _singleton
instances[key] = cls(*args, **kw)
File "/ragflow/rag/utils/es_conn.py", line 63, in __init__
raise Exception(msg)
Exception: Elasticsearch http://es01:9200 is unhealthy in 120s.
2025-03-25 13:13:26,590 INFO 470
____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
2025-03-25 13:13:26,590 INFO 470 RAGFlow version: v0.17.0 full
2025-03-25 13:13:26,590 INFO 470 project base: /ragflow
2025-03-25 13:13:26,590 INFO 470 Current configs, from /ragflow/conf/service_conf.yaml:
ragflow: {'host': '0.0.0.0', 'http_port': 9380}
mysql: {'name': 'rag_flow', 'user': 'root', 'password': '********', 'host': 'mysql', 'port': 3306, 'max_connections': 100, 'stale_timeout': 30}
minio: {'user': 'rag_flow', 'password': '********', 'host': 'minio:9000'}
es: {'hosts': 'http://es01:9200', 'username': 'elastic', 'password': '********'}
infinity: {'uri': 'infinity:23817', 'db_name': 'default_db'}
redis: {'db': 1, 'password': '********', 'host': 'redis:6379'}
2025-03-25 13:13:26,590 INFO 470 Use Elasticsearch http://es01:9200 as the doc engine.
2025-03-25 13:15:36,848 INFO 470 GET http://es01:9200/ [status:N/A duration:130.256s]
2025-03-25 13:15:36,848 WARNING 470 Node <Urllib3HttpNode(http://es01:9200)> has failed for 1 times in a row, putting on 1 second timeout
2025-03-25 13:15:36,849 WARNING 470 Connection timed out. Waiting Elasticsearch http://es01:9200 to be healthy.
WARNING:elastic_transport.node_pool:Node <Urllib3HttpNode(http://es01:9200)> has failed for 1 times in a row, putting on 1 second timeout
WARNING:ragflow.es_conn:Connection timed out. Waiting Elasticsearch http://es01:9200 to be healthy.
2025-03-25 13:17:52,015 INFO 470 GET http://es01:9200/ [status:N/A duration:130.160s]
2025-03-25 13:17:52,016 WARNING 470 Node <Urllib3HttpNode(http://es01:9200)> has failed for 1 times in a row, putting on 1 second timeout
2025-03-25 13:17:52,016 WARNING 470 Connection timed out. Waiting Elasticsearch http://es01:9200 to be healthy.
2025-03-25 13:17:57,022 INFO 470 Resurrected node <Urllib3HttpNode(http://es01:9200)> (force=False)
WARNING:elastic_transport.node_pool:Node <Urllib3HttpNode(http://es01:9200)> has failed for 1 times in a row, putting on 1 second timeout
WARNING:ragflow.es_conn:Connection timed out. Waiting Elasticsearch http://es01:9200 to be healthy.
root@SciServer:~/xxx/ragflow-main/docker# docker logs ragflow-minio --tail 100
MinIO Object Storage Server
Copyright: 2015-2023 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2023-12-20T01-00-02Z (go1.21.5 linux/amd64)
Status: 1 Online, 0 Offline.
S3-API: http://172.18.0.3:9000 http://127.0.0.1:9000
Console: http://172.18.0.3:9001 http://127.0.0.1:9001
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
root@SciServer:~/xxx/ragflow-main/docker# docker logs ragflow-redis --tail 100
1:C 25 Mar 2025 05:06:01.524 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:C 25 Mar 2025 05:06:01.524 * oO0OoO0OoO0Oo Valkey is starting oO0OoO0OoO0Oo
1:C 25 Mar 2025 05:06:01.524 * Valkey version=8.0.2, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 25 Mar 2025 05:06:01.524 * Configuration loaded
1:M 25 Mar 2025 05:06:01.525 * monotonic clock: POSIX clock_gettime
1:M 25 Mar 2025 05:06:01.525 * Running mode=standalone, port=6379.
1:M 25 Mar 2025 05:06:01.526 * Server initialized
1:M 25 Mar 2025 05:06:01.526 * Loading RDB produced by Valkey version 8.0.2
1:M 25 Mar 2025 05:06:01.526 * RDB age 81065 seconds
1:M 25 Mar 2025 05:06:01.526 * RDB memory usage when created 0.87 Mb
1:M 25 Mar 2025 05:06:01.526 * Done loading RDB, keys loaded: 0, keys expired: 0.
1:M 25 Mar 2025 05:06:01.526 * DB loaded from disk: 0.000 seconds
1:M 25 Mar 2025 05:06:01.526 * Ready to accept connections tcp
root@SciServer:~/xxx/ragflow-main/docker# docker logs ragflow-mysql --tail 100
2025-03-25 13:06:01+08:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.39-1.el9 started.
2025-03-25 13:06:01+08:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2025-03-25 13:06:01+08:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.39-1.el9 started.
'/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock'
2025-03-25T05:06:02.161229Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead.
2025-03-25T05:06:02.162544Z 0 [Warning] [MY-010918] [Server] 'default_authentication_plugin' is deprecated and will be removed in a future release. Please use authentication_policy instead.
2025-03-25T05:06:02.162567Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.39) starting as process 1
2025-03-25T05:06:02.167191Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2025-03-25T05:06:02.314974Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2025-03-25T05:06:02.495867Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2025-03-25T05:06:02.495899Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2025-03-25T05:06:02.499405Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2025-03-25T05:06:02.514674Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.39' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
2025-03-25T05:06:02.514675Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2025-03-25T05:06:11.566506Z 9 [Warning] [MY-013360] [Server] Plugin mysql_native_password reported: ''mysql_native_password' is deprecated and will be removed in a future release. Please use caching_sha2_password instead'
Same issue. I followed the README exactly, with no changes to the configuration files.
Same bug on Ubuntu 20.04.
Same issue, following the README with no configuration changes. Ubuntu 24.04.2 LTS.
Can anyone help? I ran into the same problem on Debian 12.
I just changed the exposed ports, and it worked!
Same issue; changing the exposed ports didn't work for me.
Same here on Ubuntu 22.04; it did not work even after changing the ports.
Same error here. You will hit it with Elasticsearch, MySQL, or any other service whose external (host) port differs from its internal (container) port. You can still change the external port, but then you must also change how the ragflow container connects to those services. An easy fix is simply to use the same port for internal and external connections: instead of mapping Elasticsearch from container port 9200 to host port 1200, map it 9200:9200. With that approach, the file service_conf.yaml.template in the docker folder should look like this:
```yaml
ragflow:
  host: ${RAGFLOW_HOST:-0.0.0.0}
  http_port: 9380
mysql:
  name: '${MYSQL_DBNAME:-rag_flow}'
  user: '${MYSQL_USER:-root}'
  password: '${MYSQL_PASSWORD:-infini_rag_flow}'
  host: '${MYSQL_HOST:-mysql}'
  port: 3306
  max_connections: 100
  stale_timeout: 30
minio:
  user: '${MINIO_USER:-rag_flow}'
  password: '${MINIO_PASSWORD:-infini_rag_flow}'
  host: '${MINIO_HOST:-minio}:9000'
es:
  hosts: 'http://${ES_HOST:-es01}:9200'
  username: '${ES_USER:-elastic}'
  password: '${ELASTIC_PASSWORD:-infini_rag_flow}'
infinity:
  uri: '${INFINITY_HOST:-infinity}:23817'
  db_name: 'default_db'
redis:
  db: 1
  password: '${REDIS_PASSWORD:-infini_rag_flow}'
  host: '${REDIS_HOST:-redis}:6379'
```
Those ports should match the port mappings in the docker-compose-base.yml file:

```yaml
${MYSQL_PORT}:3306
${MINIO_PORT}:9000
${ES_PORT}:9200
${REDIS_PORT}:6379
```
That should solve the problem.
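A quick way to sanity-check the mapping end to end (a sketch; `ragflow-server` is the default container name from the compose files and may differ in your setup):

```shell
# Confirm which host port .env assigns to Elasticsearch.
grep '^ES_PORT' .env

# From inside the ragflow container, probe es01 on the internal port 9200.
# Any HTTP response (even 401 Unauthorized) means the network path works;
# a hang followed by a timeout matches the "unhealthy in 120s" error above.
docker exec ragflow-server curl -m 5 -v http://es01:9200
```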
I encountered the same problem in version 0.18.0. I found that running the health-check command curl http://localhost:9200 inside the ragflow-es-01 container failed, because an Elasticsearch password is configured in the .env file. Changing the health-check command to curl -u elastic:infini_rag_flow http://localhost:9200 fixed it for me:
```yaml
es01:
  container_name: ragflow-es-01
  profiles:
    - elasticsearch
  image: elasticsearch:${STACK_VERSION}
  volumes:
    - esdata01:/usr/share/elasticsearch/data
  ports:
    - ${ES_PORT}:9200
  env_file: .env
  environment:
    - node.name=es01
    - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    - bootstrap.memory_lock=false
    - discovery.type=single-node
    - xpack.security.enabled=true
    - xpack.security.http.ssl.enabled=false
    - xpack.security.transport.ssl.enabled=false
    - cluster.routing.allocation.disk.watermark.low=5gb
    - cluster.routing.allocation.disk.watermark.high=3gb
    - cluster.routing.allocation.disk.watermark.flood_stage=2gb
    - TZ=${TIMEZONE}
  mem_limit: ${MEM_LIMIT}
  ulimits:
    memlock:
      soft: -1
      hard: -1
  healthcheck:
    test: ["CMD-SHELL", "curl -u elastic:infini_rag_flow http://localhost:9200"]
    interval: 10s
    timeout: 10s
    retries: 120
  networks:
    - ragflow
  restart: on-failure
```
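You can reproduce the health check by hand before editing the compose file (assuming the default container name `ragflow-es-01` and the default password `infini_rag_flow` from .env):

```shell
# Unauthenticated request: a secured cluster answers 401, so the original
# health check never succeeds and es01 stays "unhealthy".
docker exec ragflow-es-01 curl -s -o /dev/null -w '%{http_code}\n' http://localhost:9200

# Authenticated request: should return the Elasticsearch JSON banner.
docker exec ragflow-es-01 curl -s -u elastic:infini_rag_flow http://localhost:9200
```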
Same here on macOS. I tried almost every solution proposed in the similar issues. Looking forward to a definitive fix.
Check that the ES port in .env matches the one in service_conf.yaml.template.
Same problem on macOS.
I increased the memory allocated to Docker, and it worked!
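For anyone checking the memory angle: the es01 container is capped by MEM_LIMIT in docker/.env, and the Docker VM's own memory allowance must be at least that large, or es01 is throttled before it ever becomes healthy. A sketch (the variable name is from the stock .env; your value may differ):

```shell
# Show the per-container cap ragflow applies to es01 (in bytes).
grep '^MEM_LIMIT' .env

# Show how much memory the Docker VM itself was given (in bytes);
# it should comfortably exceed MEM_LIMIT.
docker info --format '{{.MemTotal}}'
```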
I'm actually using podman-compose; the es01-related profile handling was likely not taking effect. I finally got it working by passing the profiles explicitly:
```shell
docker compose --profile elasticsearch --profile opensearch -f docker-compose-base.yml -f docker-compose.yml up -d
```