incubator-hugegraph

Cluster deployment of HugeGraph 1.2.0: creating a graph on the master node fails to sync. What is the cause?

Status: Open · chenhong1990 opened this issue 1 year ago · 5 comments

Problem Type

configs (config options / documentation related)

Before submit

  • [X] I have confirmed and searched that there are no similar problems in the historical issues and documents

Environment

  • Server Version: 1.2.0 (Apache Release Version)
  • Backend: RocksDB

Your Question

We deployed HugeGraph 1.2.0 as a cluster. Property (schema) creation on the default hugegraph graph syncs to the other nodes, but creating a graph on the master node fails to sync. What could be causing this?

Uploading apache-hugegraph-incubating-1.2.0.tar.gz…

Vertex/Edge example

No response

Schema [VertexLabel, EdgeLabel, IndexLabel]

No response

chenhong1990 · Apr 07 '24

Hey @chenhong1990, it seems like you're encountering an issue with graph synchronization in a clustered deployment of HugeGraph 1.2.0. From what you've described, the default graph properties are syncing across nodes, but creating a graph on the master node fails to sync. This could be due to a few reasons:

  1. Configuration Issues: Ensure that all nodes in the cluster are configured to communicate with each other. This covers network configuration as well as HugeGraph-specific settings in hugegraph.properties; a misconfiguration here can break synchronization (a quick cross-node check is sketched after this list).

  2. Version Compatibility: Double-check that all nodes in your cluster are running the same version of HugeGraph and that they are compatible with any other software components you're using (like your backend database).

  3. Backend Database Configuration: RocksDB itself doesn't handle replication or synchronization; in a HugeGraph cluster that job falls to raft mode (raft.mode=true). Make sure raft is enabled and consistently configured on every node so that data is actually replicated across the instances backing your HugeGraph nodes.

  4. Logs and Error Messages: Look for any error messages or warnings in the logs of your HugeGraph server, especially around the time you attempt to create a graph. These messages can provide valuable insights into what might be going wrong.

  5. Network Issues: There could be network issues preventing the master node from communicating properly with the other nodes. Check for any firewall rules, network partitioning, or other network-related issues that might be affecting connectivity.
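
As a concrete starting point for points 1 and 5, the sketch below asks every node which graphs it is currently serving via the standard GET /apis/graphs endpoint, so you can see directly which node missed the new graph. The node addresses match the cluster described later in this thread; the REST port 8080 and the absence of authentication are assumptions to adjust for your deployment.

```python
# Sketch: list the graphs served by each HugeGraph node and compare.
# Assumptions (adjust as needed): REST port 8080, no authentication.
import json
import urllib.request

NODES = ["172.30.96.162", "172.30.96.139", "172.30.96.182"]

def list_graphs(host, port=8080):
    """Return the graph names served by one HugeGraph node."""
    with urllib.request.urlopen(f"http://{host}:{port}/apis/graphs",
                                timeout=5) as resp:
        return json.load(resp)["graphs"]

for host in NODES:
    try:
        print(host, "->", sorted(list_graphs(host)))
    except Exception as e:  # unreachable node, auth required, etc.
        print(host, "-> ERROR:", e)
```

A node whose list is missing the new graph is the one whose configuration or connectivity to inspect first.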

Without more specific error messages or logs, it's challenging to pinpoint the exact cause. I recommend checking the above areas to diagnose the issue further. If you have any error logs or more detailed information, feel free to share them for more targeted assistance.

Remember, the HugeGraph community and documentation are valuable resources that can help guide you through troubleshooting and configuration.


dosubot[bot] · Apr 07 '24

@chenhong1990 You could refer to #2512 and try the latest distributed version (with multiple replicas and data shards).

imbajin · Apr 08 '24

Refer to #1925.

z7658329 · Apr 08 '24

@chenhong1990 Can you upload your configs? I can't download them from your question description.

z7658329 · Apr 09 '24

@z7658329

The rest-server configuration is as follows:

```properties
# rpc server configs for multi graph-servers or raft-servers
rpc.server_host=172.30.96.162
rpc.server_port=8091
#rpc.server_timeout=30

# rpc client configs (like enable to keep cache consistency)
rpc.remote_url=172.30.96.162:8091,172.30.96.139:8091,172.30.96.182:8091
rpc.client_connect_timeout=20
rpc.client_reconnect_period=10
rpc.client_read_timeout=40
rpc.client_retries=3
rpc.client_load_balancer=consistentHash

# raft group initial peers
raft.group_peers=172.30.96.162:8091,172.30.96.139:8091,172.30.96.182:8091

# lightweight load balancing (beta)
server.id=server-1
server.role=master
```
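
For comparison, here is a minimal sketch of how the same section might look on one of the other nodes. This is illustrative only and not taken from the reporter's files; it assumes the three-node layout above, where rpc.server_host must be each node's own address and server.id / server.role must differ per node (only one master):

```properties
# rest-server.properties on 172.30.96.139 (illustrative sketch)
# rpc.server_host must be this node's own address, not the master's
rpc.server_host=172.30.96.139
rpc.server_port=8091
rpc.remote_url=172.30.96.162:8091,172.30.96.139:8091,172.30.96.182:8091
raft.group_peers=172.30.96.162:8091,172.30.96.139:8091,172.30.96.182:8091
# server.id must be unique per node, and only one node should be master
server.id=server-2
server.role=worker
```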

The hugegraph graph configuration is as follows:

```properties
backend=rocksdb
serializer=binary

store=hugegraph

raft.mode=true
raft.path=./raft-log
raft.safe_read=true
raft.use_replicator_pipeline=true
raft.election_timeout=10000
raft.snapshot_interval=3600
raft.backend_threads=48
raft.read_index_threads=8
raft.snapshot_threads=4
raft.snapshot_parallel_compress=false
raft.snapshot_compress_threads=4
raft.snapshot_decompress_threads=4
raft.read_strategy=ReadOnlyLeaseBased
raft.queue_size=16384
raft.queue_publish_timeout=60
raft.apply_batch=1
raft.rpc_threads=80
raft.rpc_connect_timeout=5000
raft.rpc_timeout=60
raft.install_snapshot_rpc_timeout=36000
raft.endpoint=172.30.96.162:8091
raft.group_peers=172.30.96.162:8091,172.30.96.139:8091,172.30.96.182:8091

search.text_analyzer=jieba
search.text_analyzer_mode=INDEX

# rocksdb backend config
rocksdb.data_path=/data/hugegraph1.2.0/apache-hugegraph-incubating-1.2.0/data/hugegraph
rocksdb.wal_path=/data/hugegraph1.2.0/apache-hugegraph-incubating-1.2.0/wal/hugegraph
```
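
One thing worth double-checking in the hugegraph.properties above: raft.endpoint is the local raft address, so on 172.30.96.139 and 172.30.96.182 it should be each node's own address rather than a copy of the master's. To see the raft group as the servers themselves see it, a sketch like the one below can help. It assumes the REST port 8080, no authentication, and that the raft admin endpoint /apis/graphs/{graph}/raft/list_peers is available in your build; verify the exact path against your version's REST API docs.

```python
# Sketch: ask each node for the raft peers of a given graph.
# Assumptions (verify for your deployment): REST port 8080, no auth,
# raft admin API at /apis/graphs/{graph}/raft/list_peers.
import json
import urllib.request

NODES = ["172.30.96.162", "172.30.96.139", "172.30.96.182"]

def list_raft_peers(graph, host, port=8080):
    """Return the raft peer list for `graph` as reported by `host`."""
    url = f"http://{host}:{port}/apis/graphs/{graph}/raft/list_peers"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

for host in NODES:
    try:
        print(host, "->", list_raft_peers("hugegraph", host))
    except Exception as e:  # unreachable node, 404 on older builds, etc.
        print(host, "-> ERROR:", e)
```

If a dynamically created graph shows up on the master but reports no raft group or peers on the other nodes, that points at the new graph's configuration not being propagated rather than at a network problem.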

chenhong1990 avatar Apr 09 '24 07:04 chenhong1990