
About docker container startup issues

Open Nuclear6 opened this issue 9 months ago • 7 comments

Describe your problem

I set this up on the 172 network segment of my company's office network. Docker has to be given a network segment that does not start with 172, otherwise other users cannot log in remotely. I have now moved the docker0 network to 192.168.1.0/24 (its IP address is 192.168.1.5), and made the following changes to the `networks` section of the docker compose configuration file:

```yaml
networks:
  ragflow:
    driver: bridge
    ipam:
      driver:default
      config:
        - subnet: 192.168.2.0/24
          gateway: 192.168.2.5
```

After this change the ES connection fails. Should I switch the ES connection to a fixed IP address, and how do I fix the reported connection failure? By the way, are there any non-Docker setup tutorials?
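For comparison, here is a minimal sketch of the same `networks` block in the form Compose expects. Two details in the snippet above are worth double-checking: `driver:default` needs a space after the colon to be a valid YAML key/value pair, and the gateway is conventionally the first usable address of the subnet (the `docker network inspect` output later in this thread shows Docker actually using 192.168.2.1):

```yaml
networks:
  ragflow:
    driver: bridge
    ipam:
      driver: default            # note the space after the colon
      config:
        - subnet: 192.168.2.0/24
          gateway: 192.168.2.1   # first usable address; also avoids colliding with a container IP
```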

```
ragflow-es-01  | {"@timestamp":"2024-04-28T08:48:04.189Z", "log.level": "INFO", "message":"publish_address {192.168.2.1:9200}, bound_addresses {0.0.0.0:9200}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.http.AbstractHttpServerTransport","elasticsearch.cluster.uuid":"YOVbw6YARBKUTXteWKmVGQ","elasticsearch.node.id":"d0lfqhkVSB6rzphct-hxbQ","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"rag_flow"}
ragflow-server | [WARNING] [2024-04-28 16:50:18,448] [_node_pool.mark_dead] [line:249]: Node <Urllib3HttpNode(http://192.168.2.1:9200)> has failed for 1 times in a row, putting on 1 second timeout
ragflow-server | [WARNING] [2024-04-28 16:50:18,448] [_node_pool.mark_dead] [line:249]: Node <Urllib3HttpNode(http://192.168.2.1:9200)> has failed for 1 times in a row, putting on 1 second timeout
ragflow-server | [WARNING] [2024-04-28 16:50:18,448] [_node_pool.mark_dead] [line:249]: Node <Urllib3HttpNode(http://192.168.2.1:9200)> has failed for 1 times in a row, putting on 1 second timeout
ragflow-server | [WARNING] [2024-04-28 16:52:29,520] [_node_pool.mark_dead] [line:249]: Node <Urllib3HttpNode(http://192.168.2.1:9200)> has failed for 1 times in a row, putting on 1 second timeout
```

```
$ curl http://192.168.2.2:9200
{
  "name" : "es01",
  "cluster_name" : "rag_flow",
  "cluster_uuid" : "YOVbw6YARBKUTXteWKmVGQ",
  "version" : {
    "number" : "8.11.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "64cf052f3b56b1fd4449f5454cb88aca7e739d9a",
    "build_date" : "2023-12-08T11:33:53.634979452Z",
    "build_snapshot" : false,
    "lucene_version" : "9.8.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
```

Nuclear6 avatar Apr 28 '24 11:04 Nuclear6

Get into the container, find the right ES IP, and then configure it in service_conf.yaml. This is my suggestion.

KevinHuSh avatar Apr 29 '24 00:04 KevinHuSh

The ES hosts entry in service_conf:

```yaml
redis:
  db: 1
  password: 'infini_rag_flow'
  host: 'redis:6379'
es:
  hosts: 'http://192.168.2.2:9200'
```

```
ragflow-es-01  | {"@timestamp":"2024-04-29T02:31:11.637Z", "log.level": "INFO", "message":"publish_address {192.168.2.2:9200}, bound_addresses {0.0.0.0:9200}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.http.AbstractHttpServerTransport","elasticsearch.cluster.uuid":"YOVbw6YARBKUTXteWKmVGQ","elasticsearch.node.id":"d0lfqhkVSB6rzphct-hxbQ","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"rag_flow"}
```

Docker container network info:

```
$ docker network inspect docker_ragflow
[
    {
        "Name": "docker_ragflow",
        "Id": "6689bad133888000f9c42ca606be3e12af00baefe156d61bce76250f59accd8d",
        "Created": "2024-04-29T10:24:25.556925934+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "192.168.2.0/24",
                    "Gateway": "192.168.2.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "5b437cad59ef14b74a271ec844bf0cdfaf795ece02bf36d2095ce102e96c6406": {
                "Name": "ragflow-server",
                "EndpointID": "aec07d233a37742938864baa3ac078b81ed11110bd1c5d0760f8fc5894524309",
                "MacAddress": "02:42:c0:a8:02:05",
                "IPv4Address": "192.168.2.5/24",
                "IPv6Address": ""
            },
            "6b7e2187fc51bdbb1e0930b9262bf256f369304b4e9ef097ce651b0320127e1a": {
                "Name": "ragflow-mysql",
                "EndpointID": "5672456ec744df4554744ead96f55baa0148cfc276ec858c94b17488c8a2d012",
                "MacAddress": "02:42:c0:a8:02:03",
                "IPv4Address": "192.168.2.3/24",
                "IPv6Address": ""
            },
            "ce5083ce6b9f17b4125ee6918a1552762732978a97488506ddede27b1ee15b10": {
                "Name": "ragflow-es-01",
                "EndpointID": "27fcf1ec0660c5e790ffa742ce5d9255be05be0890ea3e995e4b8149412c7e3b",
                "MacAddress": "02:42:c0:a8:02:02",
                "IPv4Address": "192.168.2.2/24",
                "IPv6Address": ""
            },
            "ea50263d6052840b8159c3adcc8218461697c46defe82df2d07b7396b6dd4d45": {
                "Name": "ragflow-minio",
                "EndpointID": "4dbb8c6fb3457fd03218c27ad25b86c6ba713371c9a2310c401ead2ff74c7542",
                "MacAddress": "02:42:c0:a8:02:04",
                "IPv4Address": "192.168.2.4/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "ragflow",
            "com.docker.compose.project": "docker",
            "com.docker.compose.version": "2.26.1"
        }
    }
]
```

Why does the connection still fail after changing es01 to the corresponding IP address?

```
ragflow-server | [WARNING] [2024-04-29 10:33:25,456] [_node_pool.mark_dead] [line:249]: Node <Urllib3HttpNode(http://192.168.2.2:9200)> has failed for 1 times in a row, putting on 1 second timeout
ragflow-server | [WARNING] [2024-04-29 10:33:25,456] [_node_pool.mark_dead] [line:249]: Node <Urllib3HttpNode(http://192.168.2.2:9200)> has failed for 1 times in a row, putting on 1 second timeout
ragflow-server | [WARNING] [2024-04-29 10:33:25,460] [_node_pool.mark_dead] [line:249]: Node <Urllib3HttpNode(http://192.168.2.2:9200)> has failed for 1 times in a row, putting on 1 second timeout
```

curl succeeds from the host:

```
$ curl http://localhost:1200
{
  "name" : "es01",
  "cluster_name" : "rag_flow",
  "cluster_uuid" : "YOVbw6YARBKUTXteWKmVGQ",
  "version" : {
    "number" : "8.11.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "64cf052f3b56b1fd4449f5454cb88aca7e739d9a",
    "build_date" : "2023-12-08T11:33:53.634979452Z",
    "build_snapshot" : false,
    "lucene_version" : "9.8.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

$ curl http://192.168.2.2:9200
{
  "name" : "es01",
  "cluster_name" : "rag_flow",
  "cluster_uuid" : "YOVbw6YARBKUTXteWKmVGQ",
  "version" : {
    "number" : "8.11.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "64cf052f3b56b1fd4449f5454cb88aca7e739d9a",
    "build_date" : "2023-12-08T11:33:53.634979452Z",
    "build_snapshot" : false,
    "lucene_version" : "9.8.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
```
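Since curl succeeds from the host but the ragflow-server process still marks the node dead, it can help to run the same probe from *inside* the ragflow-server container (e.g. via `docker exec -it ragflow-server python3 probe.py`). Below is a minimal sketch using only the Python standard library; the URLs are the ones appearing in this thread, and `es01` is assumed to be the compose service name for Elasticsearch:

```python
import json
import urllib.request
import urllib.error


def es_reachable(url: str, timeout: float = 3.0):
    """Return the ES banner (parsed JSON) if the endpoint answers, else None."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except (urllib.error.URLError, OSError, ValueError):
        # DNS failure, refused/unreachable connection, timeout, or bad URL
        return None


if __name__ == "__main__":
    # Addresses taken from this thread; run inside the ragflow-server
    # container to see which ones that process can actually reach.
    for url in ("http://es01:9200", "http://192.168.2.2:9200"):
        info = es_reachable(url)
        print(url, "->", info["cluster_name"] if info else "UNREACHABLE")
```

If the service name resolves but the IP does not (or vice versa), that narrows the problem down to DNS versus routing inside the compose network.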

docker ps info: (screenshot)

Nuclear6 avatar Apr 29 '24 02:04 Nuclear6

To add: my host is on an office network that uses the 172 segment. My Docker service has to be moved to the 192 segment, because if Docker also uses the 172 segment, other users cannot log in remotely.

```
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.1.5  netmask 255.255.255.0  broadcast 192.168.1.255
        ether 02:42:8c:59:cc:9d  txqueuelen 0  (Ethernet)
        RX packets 139528  bytes 8376513 (8.3 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2221  bytes 279985 (279.9 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
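For reference, the docker0 bridge itself (as opposed to the compose-managed `docker_ragflow` network) is normally moved to a different subnet via the `bip` key in `/etc/docker/daemon.json`. A sketch using the address shown above; treat the exact value as an example for this setup:

```json
{
  "bip": "192.168.1.5/24"
}
```

After editing the file, the Docker daemon must be restarted (e.g. `sudo systemctl restart docker`) for the change to take effect. Note that `bip` only affects the default bridge, not networks created by compose.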

Nuclear6 avatar Apr 29 '24 02:04 Nuclear6

Did you solve it? I had the same problem

lanricheng avatar Apr 29 '24 09:04 lanricheng

Did you solve it? I had the same problem

No. I get the feeling they just use Docker and may not know much about the networking underneath.

Nuclear6 avatar Apr 29 '24 11:04 Nuclear6

'http://192.168.2.2:9200/' is the address as seen from outside the Docker container, I guess. Inside the Docker environment, the containers have their own network. Good luck!
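Following that reasoning: when ragflow-server runs in the same compose network as Elasticsearch, the service name usually works regardless of which subnet the network uses, because Docker's embedded DNS resolves it to the container's current IP. A hedged sketch of the `es` entry in service_conf.yaml under that assumption (`es01` is the node/service name shown in this thread's logs):

```yaml
es:
  hosts: 'http://es01:9200'   # service name resolves via Docker's embedded DNS inside the compose network
```

This avoids hard-coding a container IP that can change whenever the network is recreated.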

KevinHuSh avatar Apr 30 '24 01:04 KevinHuSh

I changed to a machine where Docker is on the default 172 network segment, and everything works normally. (screenshot)

Nuclear6 avatar Apr 30 '24 09:04 Nuclear6