
MySQL-Kafka incr stage: stop/kill the dest DTLE or pause the job, and the incr data is resent

asiroliu opened this issue · 1 comment

Description

MySQL-Kafka incr stage: stop/kill the dest DTLE or pause the job, and the incr data is resent

Steps to reproduce the issue

  1. prepare data on src MySQL
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.100.9.3 --mysql-port=3306 --mysql-user=test --mysql-password=test --create_secondary=off --report-interval=10 --time=0 --mysql-db=action_db --tables=1 --table_size=100 prepare
  2. create the dtle job
{
  "job_id": "stop_dest_dtle_incr",
  "is_password_encrypted": false,
  "task_step_name": "all",
  "failover": true,
  "retry": 2,
  "src_task": {
    "task_name": "src",
    "node_id": "eac0c3e5-497d-fe19-83f4-dedc5f3312d8",
    "mysql_src_task_config": {
      "gtid": "",
      "binlog_relay": false
    },
    "drop_table_if_exists": true,
    "skip_create_db_table": false,
    "repl_chan_buffer_size": 120,
    "chunk_size": 1,
    "group_max_size": 1,
    "group_timeout": 100,
    "connection_config": {
      "database_type": "MySQL",
      "host": "172.100.9.3",
      "port": 3306,
      "user": "test_src",
      "password": "test_src"
    },
    "replicate_do_db": [
      {
        "table_schema": "action_db",
        "tables": [
          {
            "table_name": "sbtest1"
          }
        ]
      }
    ]
  },
  "dest_task": {
    "task_name": "dest",
    "node_id": "ad298179-cb20-075e-bd7d-f3906bcf378c",
    "parallel_workers": 1,
    "kafka_topic": "dtle",
    "kafka_broker_addrs": [
      "172.100.9.21:9092"
    ]
  }
}
  3. insert data on src MySQL
sysbench /usr/share/sysbench/oltp_insert.lua --mysql-host=172.100.9.3 --mysql-port=3306 --mysql-user=test --mysql-password=test --create_secondary=off --report-interval=10 --time=0 --mysql-db=action_db --tables=1 --table_size=100 --events=1 run
  4. stop dest dtle
systemctl stop dtle-nomad
  5. wait for dtle failover
  6. get the kafka incr messages; the incremental data is transferred repeatedly (attachment: incr.html.zip; a verification sketch follows this list)
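
To make the duplicate check in step 6 concrete, here is a small verification sketch (not part of dtle) that drains the Kafka topics written by this job and reports row changes delivered more than once. It assumes the kafka-python client, Debezium-style per-table topic names under the configured "dtle" prefix, and a payload.source.gtid field in the message value; all three are assumptions to adjust for your setup.

import json
from collections import Counter

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    bootstrap_servers="172.100.9.21:9092",  # kafka_broker_addrs from the job config
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,              # stop iterating once the topics are drained
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
)
# Assumption: row events land on per-table topics under the configured prefix
# ("dtle" here), e.g. dtle.action_db.sbtest1, so subscribe by pattern.
consumer.subscribe(pattern=r"^dtle(\..*)?$")

# Count deliveries per (gtid, row key). The sysbench insert in this repro is one
# row per transaction, so any count > 1 means the change was resent.
deliveries = Counter()
for msg in consumer:
    if msg.value is None:                   # skip tombstone records
        continue
    gtid = msg.value.get("payload", {}).get("source", {}).get("gtid")
    if gtid:
        deliveries[(gtid, msg.key)] += 1

dups = {k: n for k, n in deliveries.items() if n > 1}
print(f"{len(dups)} row changes were delivered more than once")
for (gtid, key), n in sorted(dups.items(), key=lambda kv: kv[0][0]):
    print(f"  gtid={gtid} key={key!r}: {n} deliveries")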

Output of ./dtle version:

9.9.9.9-master-3cca6a1

asiroliu · Apr 27 '22 06:04

When the destination is MySQL, the absence of duplicates relies on the gtid_executed table.

dtle's Kafka output provides the GTID; deduplication of the data is the responsibility of the Kafka consumer.

The progress recorded in consul is updated every 15 seconds, so in theory the amount of duplicated data should be less than 15 seconds' worth.
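
A minimal sketch of the consumer-side deduplication described above, assuming the kafka-python client and a Debezium-style payload.source.gtid field in dtle's messages (both assumptions, not dtle specifics). It keeps the executed GTIDs in an in-memory set and treats each message as one transaction, which matches this repro; a production consumer would persist its executed-GTID set atomically with the sink writes and deduplicate at transaction boundaries.

import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    bootstrap_servers="172.100.9.21:9092",
    group_id="dtle-dedup-demo",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
)
consumer.subscribe(pattern=r"^dtle(\..*)?$")  # assumed per-table topics under the "dtle" prefix

applied_gtids = set()  # demo only: persist this alongside the sink in production

def apply_to_sink(event):
    # Placeholder for the real downstream write (database, index, ...).
    print("apply", event.get("payload", {}).get("op"))

for msg in consumer:
    if msg.value is None:       # tombstone record, nothing to apply
        continue
    gtid = msg.value.get("payload", {}).get("source", {}).get("gtid")
    if gtid and gtid in applied_gtids:
        continue                # duplicate delivery after a dtle restart/failover: skip it
    apply_to_sink(msg.value)
    if gtid:
        applied_gtids.add(gtid)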

ghost · Apr 28 '22 07:04