com.alibaba.druid.pool.GetConnectionTimeoutException: wait millis 10000, active 0, maxActive 30, creating 0. The Druid connection pool keeps failing to create connections; it does not happen often, but when it does the service has to be restarted.
2024-11-18 00:36:19.806 [destination =xxxxxx , address = /xxxxx EventParser] ERROR com.alibaba.otter.canal.common.alarm.LogAlarmHandler - destination:dl_dljt_luxi[com.alibaba.otter.canal.parse.exception.CanalParseException: apply failed caused by : nested exception is org.apache.ibatis.exceptions.PersistenceException:
Error querying database. Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is com.alibaba.druid.pool.GetConnectionTimeoutException: wait millis 10000, active 0, maxActive 30, creating 0
The error may exist in spring/tsdb/sql-map/sqlmap_snapshot.xml
The error may involve com.alibaba.otter.canal.parse.inbound.mysql.tsdb.dao.MetaSnapshotMapper.findByTimestamp
The error occurred while executing a query
Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is com.alibaba.druid.pool.GetConnectionTimeoutException: wait millis 10000, active 0, maxActive 30, creating 0
Caused by: org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
Error querying database. Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is com.alibaba.druid.pool.GetConnectionTimeoutException: wait millis 10000, active 0, maxActive 30, creating 0
The error may exist in spring/tsdb/sql-map/sqlmap_snapshot.xml
The error may involve com.alibaba.otter.canal.parse.inbound.mysql.tsdb.dao.MetaSnapshotMapper.findByTimestamp
The error occurred while executing a query
Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is com.alibaba.druid.pool.GetConnectionTimeoutException: wait millis 10000, active 0, maxActive 30, creating 0
at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:92)
at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:440)
at com.sun.proxy.$Proxy11.selectOne(Unknown Source)
at org.mybatis.spring.SqlSessionTemplate.selectOne(SqlSessionTemplate.java:159)
at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:87)
at org.apache.ibatis.binding.MapperProxy$PlainMethodInvoker.invoke(MapperProxy.java:152)
at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:85)
at com.sun.proxy.$Proxy12.findByTimestamp(Unknown Source)
at com.alibaba.otter.canal.parse.inbound.mysql.tsdb.dao.MetaSnapshotDAO.findByTimestamp(MetaSnapshotDAO.java:28)
at com.alibaba.otter.canal.parse.inbound.mysql.tsdb.DatabaseTableMeta.buildMemFromSnapshot(DatabaseTableMeta.java:405)
at com.alibaba.otter.canal.parse.inbound.mysql.tsdb.DatabaseTableMeta.rollback(DatabaseTableMeta.java:166)
at com.alibaba.otter.canal.parse.inbound.mysql.AbstractMysqlEventParser.processTableMeta(AbstractMysqlEventParser.java:144)
at com.alibaba.otter.canal.parse.inbound.AbstractEventParser$1.run(AbstractEventParser.java:192)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.ibatis.exceptions.PersistenceException:
Every time this happens the canal server has to be restarted, and after the restart things usually return to normal. Newly added instances also hit this problem.
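For readers decoding the Druid message: "wait millis 10000, active 0, maxActive 30, creating 0" means a caller waited the full 10-second maxWait for a pooled connection while zero connections were checked out and zero were being created, which usually points at the pool being unable to open a physical connection to the tsdb database (an embedded H2 file in this configuration, per canal.instance.tsdb.url below) rather than at pool exhaustion. A minimal standalone sketch of how those numbers map onto Druid's pool settings; the URL, credentials and class name are illustrative assumptions, not canal's internal values:

import com.alibaba.druid.pool.DruidDataSource;
import java.sql.Connection;

public class TsdbPoolTimeoutDemo {
    public static void main(String[] args) throws Exception {
        DruidDataSource ds = new DruidDataSource();
        // Illustrative URL/credentials only; canal builds its tsdb DataSource internally.
        ds.setUrl("jdbc:h2:./h2;CACHE_SIZE=1000;MODE=MYSQL");
        ds.setUsername("canal");
        ds.setPassword("canal");
        ds.setMaxActive(30);   // "maxActive 30" in the exception message
        ds.setMaxWait(10000);  // "wait millis 10000": getConnection() waits at most 10 s

        // If no physical connection can be opened (for example the H2 file is locked by
        // another process, or the pool's create thread is stuck), this call blocks for
        // maxWait and then throws com.alibaba.druid.pool.GetConnectionTimeoutException
        // with the "active X, maxActive Y, creating Z" counters seen in the log above.
        try (Connection conn = ds.getConnection()) {
            System.out.println("connected: " + conn.getMetaData().getURL());
        } finally {
            ds.close();
        }
    }
}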
#################################################
######### common argument #############
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
canal.user = canal
canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458
# canal admin config
canal.admin.manager = canal-admin:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441
# admin auto register
#canal.admin.register.auto = true
#canal.admin.register.cluster =
#canal.admin.register.name =
canal.zkServers = zookeeper-0.zookeeper-headless.common.svc.cluster.local:2181,zookeeper-1.zookeeper-headless.common.svc.cluster.local:2181,zookeeper-2.zookeeper-headless.common.svc.cluster.local:2181
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ, pulsarMQ
canal.serverMode = kafka
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
# memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
# memory store RingBuffer used memory unit size, default 1kb
canal.instance.memory.buffer.memunit = 1024
# memory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true
# detecting config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false
# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size = 1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60
# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30
# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
canal.instance.filter.dml.insert = false
canal.instance.filter.dml.update = false
canal.instance.filter.dml.delete = false
# binlog format/image check
canal.instance.binlog.format = ROW
canal.instance.binlog.image = FULL
# binlog ddl isolation
canal.instance.get.ddl.isolation = false
# parallel parser config
canal.instance.parser.parallel = true
# concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
# disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256
# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire, default 360 hour(15 days)
canal.instance.tsdb.snapshot.expire = 360
#################################################
######### destinations #############
#################################################
canal.destinations =
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5
# set this value to 'true' means that when binlog pos not found, skip to latest.
# WARN: pls keep 'false' in production env, or if you know what you want.
canal.auto.reset.latest.pos.mode = false
canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml
canal.instance.global.mode = manager
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
#canal.instance.global.spring.xml = classpath:spring/file-instance.xml
canal.instance.global.spring.xml = classpath:spring/default-instance.xml
##################################################
######### MQ Properties #############
##################################################
# aliyun ak/sk , support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid=
canal.mq.flatMessage = false
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local
canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8
canal.mq.properties.security.protocol = SASL_PLAINTEXT
canal.mq.properties.sasl.mechanism = PLAIN
##################################################
######### Kafka #############
##################################################
kafka.bootstrap.servers = xxxxxxxxxx
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0
kafka.kerberos.enable = false
#kafka.kerberos.krb5.file = ../conf/kerberos/krb5.conf
#kafka.kerberos.jaas.file = ../conf/kerberos/jaas.conf
# sasl demo
kafka.sasl.jaas.config=xxxxx
kafka.sasl.mechanism = PLAIN
kafka.security.protocol = SASL_PLAINTEXT
# sasl demo
# kafka.sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \n username="alice" \npassword="alice-secret";
# kafka.sasl.mechanism = SCRAM-SHA-512
# kafka.security.protocol = SASL_PLAINTEXT
#################################################
# mysql serverId, v1.0.26+ will autoGen
canal.instance.mysql.slaveId=8114
# enable gtid use true/false
canal.instance.gtidon=false
# position info
canal.instance.master.address=xxxxxxxx:3916
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=
# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=
# table meta tsdb info
canal.instance.tsdb.enable=true
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal
#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=
# username/password
canal.instance.dbUsername=ep_edgedata
canal.instance.dbPassword=xxxxxxxx
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==
# table regex
canal.instance.filter.regex=
#canal.instance.filter.regex=huanbao.jxry_station_info
# table black regex
#canal.instance.filter.black.regex=
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch
canal.instance.filter.query.ddl=true
canal.mq.topic=example
# dynamic topic route by schema or table regex
canal.mq.dynamicTopic=ep_edgedata\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.partitionsNum=3
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#################################################
The above are the canal configuration and the instance configuration. The database account has root privileges. Everything was fine at first; the problem only appeared today, on the fourth day of use.
Version 1.1.7, deployed in k8s. I ran into this problem too, and like you I could only fix it by restarting.
I am also on version 1.1.7, deployed in k8s.
Did you find the cause?
No; it never happened again after that.
Try setting canal.instance.tsdb.enable=false.
I ran into the same problem with version 1.1.7 deployed on K8S. I tried things bit by bit, and commenting out those two lines finally solved it!
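For anyone landing here later: the workaround discussed above is to switch off the per-instance table-meta TSDB so the snapshot lookup (MetaSnapshotMapper.findByTimestamp) that times out is never issued. Which two lines were commented out is not stated in the thread; a hedged sketch assuming it is the canal.instance.tsdb.* entries in instance.properties:

# instance.properties: disable the table-meta TSDB (H2-backed by default in this setup)
canal.instance.tsdb.enable=false
# keep the optional external TSDB settings commented out as well
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal

The trade-off is that canal then stops keeping historical table-structure snapshots, so binlog events are parsed against the current schema only.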