amazonriver
If a table is not updated for a few minutes, the data from its first update is lost
For a table that has not been updated for a while (more than 10 minutes), the data from the first update is not written to kafka; only the second update makes it into kafka. This happens fairly often.
Could you post the table structure, the pg version, and the update statement?
My earlier description was off: it is not always reproducible, but it happens with fairly high probability. I configured two tables in the config below, and both showed this problem in testing.
pg version: postgresql 9.6.12
Table structure: CREATE TABLE db40t.t_mon_kafka ( update_cnt numeric(20,0), update_time timestamp without time zone )
Update script: update db40t.t_mon_kafka set update_time = now(), update_cnt = update_cnt + 1;
Config:
{
  "pg_dump_path": "",
  "subscribes": [{
    "dump": false,
    "slotName": "amazonriver",
    "pgConnConf": {
      "host": "192.168.216.87",
      "port": 6432,
      "database": "grabdb",
      "user": "replica",
      "password": "******"
    },
    "rules": [
      { "table": "dbm_monitor_timeliness", "pks": ["table_name"], "topic": "db40.dbm_monitor_timeliness" },
      { "table": "t_mon_kafka", "topic": "db40t.t_mon_kafka" }
    ],
    "kafkaConf": {
      "addrs": ["192.168.216.158:9092", "192.168.216.157:9092", "192.168.200.94:9092"]
    },
    "retry": 0
  }],
  "prometheus_address": ":8084"
}
Doesn't the table have a primary key?
t_mon_kafka has no primary key, but the other table, dbm_monitor_timeliness, does have one, and it was the first to lose data. I then set up a monitor on t_mon_kafka that updates it every half hour; the data-loss rate was above 50%.
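For context, whether a table has a primary key can be confirmed with a generic catalog query like the one below (a sketch, not taken from this thread):
-- list the primary key columns of db40t.t_mon_kafka (no rows means no primary key)
SELECT a.attname
FROM pg_index i
JOIN pg_attribute a ON a.attrelid = i.indrelid AND a.attnum = ANY(i.indkey)
WHERE i.indrelid = 'db40t.t_mon_kafka'::regclass
  AND i.indisprimary;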
I have now added a primary key to the table. I will test for another day and report the results back to you!
I added a primary key to t_mon_kafka and tested for three and a half hours this afternoon, updating the data every half hour, 7 updates in total; kafka received only 4 messages, so 3 were lost. Running your tunnel tool at the same time lost no data, but tunnel has its own problem: it reports an error after running for a while. When the update interval is short, no data is lost.
{ "table": "t_mon_kafka", "pks": ["seq"], "topic": "db40t.t_mon_kafka" }
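For reference, a primary key like the "seq" column referenced in the rule above could be added along these lines; this is a sketch, and the column type and approach are assumptions rather than the exact statements that were run:
-- add a surrogate key column and promote it to the primary key
ALTER TABLE db40t.t_mon_kafka ADD COLUMN seq bigserial;
ALTER TABLE db40t.t_mon_kafka ADD PRIMARY KEY (seq);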
OK, I will take a look.
Is this a bug? Will it be fixed in the near term?
After setting up the configuration, data never syncs into kafka, and the topic is not created either. What could be going on?
Is there an error log?
There is no activity in kafka at all.
INFO[0000] Initializing new client
INFO[0000] ClientID is the default of 'sarama', you should consider setting it to something application-specific.
INFO[0000] ClientID is the default of 'sarama', you should consider setting it to something application-specific.
INFO[0000] client/metadata fetching metadata for all topics from broker 172.16.0.79:9092
INFO[0000] Connected to broker at 172.16.0.79:9092 (unregistered)
INFO[0000] client/brokers registered new broker #0 at ubuntu:9092
INFO[0000] client/metadata found some partitions to be leaderless
INFO[0000] client/metadata retrying after 250ms... (3 attempts remaining)
INFO[0000] client/metadata fetching metadata for all topics from broker 172.16.0.79:9092
INFO[0000] client/metadata found some partitions to be leaderless
INFO[0000] client/metadata retrying after 250ms... (2 attempts remaining)
INFO[0000] client/metadata fetching metadata for all topics from broker 172.16.0.79:9092
INFO[0000] client/metadata found some partitions to be leaderless
INFO[0000] client/metadata retrying after 250ms... (1 attempts remaining)
INFO[0000] client/metadata fetching metadata for all topics from broker 172.16.0.79:9092
INFO[0000] client/metadata found some partitions to be leaderless
INFO[0000] Successfully initialized new client
INFO[0000] start prometheus handler
INFO[0000] start amazon...
INFO[0000] start stream for slot_for_kafka
DEBU[0000] send heartbeat
INFO[0000] handle wal data: &{BEGIN map[] 1568083179376 200770952
Let me take a look at your config file; the table matching rule may be wrong.
{ "pg_dump_path": "", "subscribes": [{
"dump": true,
"slotName": "slot_for_kafka",
"pgConnConf": {
"host": "172.16.0.45",
"port": 5432,
"database": "test",
"user": "test_rep",
"password": "123456"
},
"rules": [
{
"table": "user",
"pks": ["id"],
"topic": "user_topic02"
}
],
"kafkaConf": {
"addrs": ["172.16.0.79:9092"]
},
"retry": -1
}],
"prometheus_address": ":8080"
}
Does your kafka allow automatic topic creation?
I have set auto.create.topics.enable=true, and it still doesn't work.
There is no error in the log. What is your kafka version number?
kafka_2.11-2.3.0
It is probably an issue with the kafka go client; I will upgrade the library version.
OK, thanks
Try again with the latest release.
Do I need to delete it and download and install it again?
Yes
Why does the latest version no longer have a glide file?
glide was removed.
If you install from source, just run go install.
Still not working. Could it be this problem: Connected to broker at 127.0.0.1:9092 (unregistered)
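One thing worth ruling out: the log above shows the broker registering itself as ubuntu:9092, and the kafka client produces to whatever address the broker advertises, so if that hostname does not resolve from the machine running amazonriver, sends can fail without an obvious error. Explicitly setting the broker's advertised address may help; a minimal sketch, assuming the broker really is reachable at 172.16.0.79 (the address from the config above):
# server.properties on the kafka broker
advertised.listeners=PLAINTEXT://172.16.0.79:9092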
Could you provide a sample config file?