thingsboard-ce-k8s
tb-node kafka topics being reset every minute
Description
The tb-node statefulset pod log keeps printing that the topicId changed. It prints this for every topic id roughly every 60 seconds. From my understanding this happens when a topic is recreated with the same id. Is this expected behaviour?
Steps to Reproduce
- Create completely empty cluster
- Create empty demo thingsboard postgresql database
- kubectl apply -f thirdparty.yml
- kubectl apply -f tb-services.yml
- When initialized, tb-node starts posting these messages every 60 seconds (a quick way to inspect the created topics is sketched below).
Expected Behavior
tb-node should not keep printing these messages for the same Kafka topics every 60 seconds.
Actual Behavior
The messages look like this:
....
2023-07-18 11:40:47,193 [kafka-consumer-stats-6-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-stats-loader-client, groupId=consumer-stats-loader-client-group] Resetting the last seen epoch of partition tb_core.1-0 to 0 since the associated topicId changed from null to fwMO8nYKRQGjNtbuUtspfQ
2023-07-18 11:40:47,193 [kafka-consumer-stats-6-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-stats-loader-client, groupId=consumer-stats-loader-client-group] Resetting the last seen epoch of partition tb_core.5-0 to 0 since the associated topicId changed from null to YMBDSCF2R7SrrgxblL5p7A
2023-07-18 11:40:47,193 [kafka-consumer-stats-6-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-stats-loader-client, groupId=consumer-stats-loader-client-group] Resetting the last seen epoch of partition tb_core.7-0 to 0 since the associated topicId changed from null to zR53ZGknRoWhibRlgVM3cQ
....
Full tb-node log: tb-node-0-log.txt
Environment
- Operating System: Ubuntu server 22.04
- Kubernetes: k3s cluster v1.27.3+k3s1
- tb-node version: 3.4.2
- Kafka version: wurstmeister/kafka:2.13-2.8.1 (I had the same problem with wurstmeister/kafka:2.12-2.2.1)
- Cluster nodes: 1 master node, 3 worker nodes
Additional Information
Transport nodes don't seem to show the same behaviour at startup, but sometimes they also start spamming these kinds of messages for some topics after a couple of weeks or months of deployment.
It seems possible to control how often this information is printed via this tb-node statefulset config (a fuller sketch of where it goes is below):
- name: TB_QUEUE_KAFKA_CONSUMER_STATS_MIN_PRINT_INTERVAL_MS
value: "600000"
However, it's still not clear whether the topicId metadata change seen by the consumer is caused by a bug in the logger or by a real problem with the Kafka setup.