KnowStreaming
Automatically increase the shard limit when deploying ES via Docker-Compose
- [x] I have searched the existing issues and found no duplicates.
Environment
Official docker-compose documentation
- Operating System version: CentOS 7
Steps to reproduce
- Follow the official docker-compose documentation, changing port 80 to 8456
- Run docker-compose -f docker-compose.yml up
Note: the only change to the docker-compose file is that the three occurrences of port 80 were changed to 8456; everything else is unchanged.
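For reference, the port change described above might look like the following fragment. This is a hypothetical sketch: the service name and mapping layout are assumptions, not copied from the official file.

```yaml
# Hypothetical fragment of the official docker-compose.yml.
# The service name "knowstreaming-ui" is an assumption; only the
# host-side port is changed, the container still listens on 80.
services:
  knowstreaming-ui:
    ports:
      - "8456:80"   # was "80:80"
```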
Expected result
Deployment succeeds.
Actual result
Deployment fails; the page cannot be accessed.
Page error:
If there is an exception, attach the trace:
Error 1: errors during deployment
knowstreaming-manager | 2023-06-28 16:54:33.634 ERROR 12 --- [pool-7-thread-1] c.d.logi.elasticsearch.client.ESClient : ESClient_sendRequest||cluster=elasticsearch||req=ESIndicesPutTemplateRequest||url=/_template/ks_kafka_zookeeper_metric||cost=1
knowstreaming-manager |
knowstreaming-manager | java.net.ConnectException: Connection refused
knowstreaming-manager | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
knowstreaming-manager | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:174)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:148)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:351)
knowstreaming-manager | at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221)
knowstreaming-manager | at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
knowstreaming-manager | at java.lang.Thread.run(Thread.java:745)
knowstreaming-manager |
knowstreaming-manager | 2023-06-28 16:54:33.646 ERROR 12 --- [pool-7-thread-1] c.d.logi.elasticsearch.client.ESClient : ESClient_sendRequest||cluster=elasticsearch||req=ESIndicesPutIndexRequest||url=/ks_kafka_zookeeper_metric_2023-06-28||cost=2
knowstreaming-manager |
knowstreaming-manager | java.net.ConnectException: Connection refused
knowstreaming-manager | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
knowstreaming-manager | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:174)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:148)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:351)
knowstreaming-manager | at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221)
knowstreaming-manager | at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
knowstreaming-manager | at java.lang.Thread.run(Thread.java:745)
knowstreaming-manager |
knowstreaming-manager | 2023-06-28 16:54:33.671 ERROR 12 --- [pool-9-thread-1] c.d.logi.elasticsearch.client.ESClient : ESClient_sendRequest||cluster=elasticsearch||req=ESIndicesPutIndexRequest||url=/ks_kafka_zookeeper_metric_2023-06-27||cost=5
knowstreaming-manager |
knowstreaming-manager | java.net.ConnectException: Connection refused
knowstreaming-manager | at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
knowstreaming-manager | at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:174)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:148)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:351)
knowstreaming-manager | at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:221)
knowstreaming-manager | at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
knowstreaming-manager | at java.lang.Thread.run(Thread.java:745)
Error 2: a few minutes after the deployment log shows knowstreaming-init | ElasticSearch Initialize Success and knowstreaming-init exited with code 0, the following errors appear
elasticsearch-single | {"type": "server", "timestamp": "2023-06-28T16:32:17,774+08:00", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "716102d4c3df", "message": "adding template [ks_kafka_zookeeper_metric] for index patterns [ks_kafka_zookeeper_metric*]", "cluster.uuid": "E2DE74KfRtK0G38ZEpOFmA", "node.id": "8NF-Lsf1TZOgSkT6taZp-w" }
knowstreaming-init | ElasticSearch Start Initialize
elasticsearch-single | {"type": "server", "timestamp": "2023-06-28T16:33:16,211+08:00", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "716102d4c3df", "message": "adding template [ks_kafka_broker_metric] for index patterns [ks_kafka_broker_metric*]", "cluster.uuid": "E2DE74KfRtK0G38ZEpOFmA", "node.id": "8NF-Lsf1TZOgSkT6taZp-w" }
elasticsearch-single | {"type": "server", "timestamp": "2023-06-28T16:33:16,264+08:00", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "716102d4c3df", "message": "adding template [ks_kafka_cluster_metric] for index patterns [ks_kafka_cluster_metric*]", "cluster.uuid": "E2DE74KfRtK0G38ZEpOFmA", "node.id": "8NF-Lsf1TZOgSkT6taZp-w" }
elasticsearch-single | {"type": "server", "timestamp": "2023-06-28T16:33:16,322+08:00", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "716102d4c3df", "message": "adding template [ks_kafka_group_metric] for index patterns [ks_kafka_group_metric*]", "cluster.uuid": "E2DE74KfRtK0G38ZEpOFmA", "node.id": "8NF-Lsf1TZOgSkT6taZp-w" }
elasticsearch-single | {"type": "server", "timestamp": "2023-06-28T16:33:16,368+08:00", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "716102d4c3df", "message": "adding template [ks_kafka_partition_metric] for index patterns [ks_kafka_partition_metric*]", "cluster.uuid": "E2DE74KfRtK0G38ZEpOFmA", "node.id": "8NF-Lsf1TZOgSkT6taZp-w" }
elasticsearch-single | {"type": "server", "timestamp": "2023-06-28T16:33:16,419+08:00", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "716102d4c3df", "message": "adding template [ks_kafka_replication_metric] for index patterns [ks_kafka_partition_metric*]", "cluster.uuid": "E2DE74KfRtK0G38ZEpOFmA", "node.id": "8NF-Lsf1TZOgSkT6taZp-w" }
elasticsearch-single | {"type": "server", "timestamp": "2023-06-28T16:33:16,469+08:00", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "716102d4c3df", "message": "adding template [ks_kafka_topic_metric] for index patterns [ks_kafka_topic_metric*]", "cluster.uuid": "E2DE74KfRtK0G38ZEpOFmA", "node.id": "8NF-Lsf1TZOgSkT6taZp-w" }
elasticsearch-single | {"type": "server", "timestamp": "2023-06-28T16:33:16,511+08:00", "level": "INFO", "component": "o.e.c.m.MetaDataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "716102d4c3df", "message": "adding template [ks_kafka_zookeeper_metric] for index patterns [ks_kafka_zookeeper_metric*]", "cluster.uuid": "E2DE74KfRtK0G38ZEpOFmA", "node.id": "8NF-Lsf1TZOgSkT6taZp-w" }
knowstreaming-init | ElasticSearch Initialize Success
knowstreaming-init exited with code 0
knowstreaming-manager | 2023-06-28 16:35:00.006 INFO 12 --- [taskScheduler-1] c.x.k.s.k.c.s.CollectThreadPoolService : JobThreadPoolInfo shardId:0 queueSize:0 physicalClusterIdList: shardId:1 queueSize:0 physicalClusterIdList: shardId:2 queueSize:0 physicalClusterIdList: ...
knowstreaming-manager | 2023-06-28 16:38:00.021 ERROR 12 --- [O dispatcher 13] c.d.logi.elasticsearch.client.ESClient : ESClient_sendRequest||cluster=docker-cluster||req=ESIndicesPutIndexRequest||url=/ks_kafka_broker_metric_2023-06-28||cost=5
knowstreaming-manager |
knowstreaming-manager | org.elasticsearch.client.ResponseException: method [PUT], host [http://172.18.0.2:9200], URI [/ks_kafka_broker_metric_2023-06-28], status line [HTTP/1.1 400 Bad Request]
knowstreaming-manager | {"error":{"root_cause":[{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [20] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}],"type":"validation_exception","reason":"Validation Failed: 1: this action would add [20] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"},"status":400}
knowstreaming-manager | at org.elasticsearch.client.RestClient$1.completed(RestClient.java:548)
knowstreaming-manager | at org.elasticsearch.client.RestClient$1.completed(RestClient.java:533)
knowstreaming-manager | at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
knowstreaming-manager | at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181)
knowstreaming-manager | at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
knowstreaming-manager | at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
knowstreaming-manager | at org.apache.http.impl.nio.client.InternalRequestExecutor.inputReady(InternalRequestExecutor.java:83)
knowstreaming-manager | at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
knowstreaming-manager | at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
knowstreaming-manager | at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
knowstreaming-manager | at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
knowstreaming-manager | at java.lang.Thread.run(Thread.java:745)
knowstreaming-manager |
knowstreaming-manager | 2023-06-28 16:38:00.023 ERROR 12 --- [O dispatcher 17] c.d.logi.elasticsearch.client.ESClient : ESClient_sendRequest||cluster=docker-cluster||req=ESIndicesPutIndexRequest||url=/ks_kafka_zookeeper_metric_2023-06-28||cost=5
knowstreaming-manager |
knowstreaming-manager | org.elasticsearch.client.ResponseException: method [PUT], host [http://172.18.0.2:9200], URI [/ks_kafka_zookeeper_metric_2023-06-28], status line [HTTP/1.1 400 Bad Request]
knowstreaming-manager | {"error":{"root_cause":[{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [20] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}],"type":"validation_exception","reason":"Validation Failed: 1: this action would add [20] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"},"status":400}
knowstreaming-manager | at org.elasticsearch.client.RestClient$1.completed(RestClient.java:548)
knowstreaming-manager | at org.elasticsearch.client.RestClient$1.completed(RestClient.java:533)
knowstreaming-manager |                 at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
Validation Failed: 1: this action would add [20] total shards, but this cluster currently has [1000]/[1000] maximum shards open;
The ES cluster has run out of available shards.
Can this be fixed by modifying the docker-compose.yaml configuration file?
Yes, it can. If you're interested, a PR for this part would be very welcome.
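As a sketch of such a fix: the error above means the cluster hit the per-node open-shard limit (default 1000), so the limit could be raised via an environment variable on the Elasticsearch service. The service name below matches the container name in the logs, but the exact value and structure are assumptions, not taken from the official file:

```yaml
# Hypothetical docker-compose fragment. The Elasticsearch Docker image
# maps dotted environment variables directly to cluster settings.
services:
  elasticsearch-single:
    environment:
      # Raise the per-node open-shard limit (default is 1000)
      - cluster.max_shards_per_node=2000
```

Alternatively, on an already running cluster, the same setting can be applied without a restart through the cluster settings API, e.g. `curl -X PUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{"persistent":{"cluster.max_shards_per_node":2000}}'`.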