Calling consumer.disconnect() will not work if the consumer is attempting to reconnect
Describe the bug
If a user connects a consumer, one of the Kafka brokers then goes down and the automatic reconnection retries kick in, and the user then calls disconnect, the disconnect is effectively ignored: the retry already scheduled via setTimeout still fires and attempts to reconnect.
To Reproduce
1. Start a cluster with 3 Kafka nodes.
2. Open a consumer to the cluster.
3. Kill 1 or 2 nodes from the cluster and wait for the automatic reconnection attempts to begin.
4. In code, call consumer.disconnect().
The consumer will keep trying to reconnect, leaving resources open that should have been released.
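For reference, a minimal sketch of those steps in code (the broker addresses, topic, and group id are placeholders, not from the original report):

```js
const { Kafka } = require('kafkajs')

const kafka = new Kafka({
  clientId: 'disconnect-repro',
  brokers: ['localhost:9092', 'localhost:9093', 'localhost:9094'],
})
const consumer = kafka.consumer({ groupId: 'disconnect-repro-group' })

const run = async () => {
  await consumer.connect()
  await consumer.subscribe({ topic: 'test-topic', fromBeginning: true })
  await consumer.run({
    eachMessage: async ({ message }) => console.log(message.value.toString()),
  })

  // Kill 1 or 2 brokers here and wait until the consumer enters its
  // automatic reconnection loop before calling disconnect.
  // Expected: disconnect() stops all further reconnection attempts.
  // Observed (1.15.0): a retry already scheduled via setTimeout still
  // fires afterwards and reconnects the consumer.
  await consumer.disconnect()
}

run().catch(console.error)
```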
Expected behavior
A library user should be able to call consumer.disconnect() and trust that the library will stop trying to reconnect.
Observed behavior
The library continues trying to reconnect.
Environment:
- OS: Ubuntu
- KafkaJS version: 1.15.0
- Kafka version: confluentinc/cp-kafka:latest (as of July 09 2021) (any version should do)
- NodeJS version: 10.20.1 (any version should do)
Additional context None
We are also hitting this issue; here is a simplified scenario:
- The consumer is trying to reconnect for some reason (might be due to anything, broker changes etc.) (1)
- We start a new deployment and SIGINT is sent to the pod.
- The application calls consumer.disconnect() due to the SIGINT and tries to do a cleanup. All database connections are also cleaned up after calling consumer.disconnect().
- The consumer manages to reconnect to the broker and starts consuming messages, due to (1).
- Because all resources are already cleaned up, the application fails to process messages.
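A minimal sketch of that shutdown flow (the broker address, topic, and the db object are illustrative stand-ins, not from the application in question; only the kafkajs calls are real API):

```js
const { Kafka } = require('kafkajs')

const kafka = new Kafka({ clientId: 'app', brokers: ['kafka:9092'] })
const consumer = kafka.consumer({ groupId: 'app-group' })

// Stand-in for any external resource the message handler depends on,
// e.g. a pool of database connections (hypothetical, not a real API).
const db = {
  open: true,
  close: async () => { db.open = false },
}

const run = async () => {
  await consumer.connect()
  await consumer.subscribe({ topic: 'events' })
  await consumer.run({
    eachMessage: async () => {
      // The observed failure: messages arrive after cleanup has run.
      if (!db.open) throw new Error('processing after cleanup')
    },
  })
}

process.on('SIGINT', async () => {
  await consumer.disconnect() // expected to stop all reconnection attempts
  await db.close()            // other resources released after the disconnect
  // Observed: a retry scheduled before disconnect() can still fire, the
  // consumer reconnects and consumes messages, and eachMessage now fails
  // because the resources it needs are already gone.
})

run().catch(console.error)
```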
More or less the same situation as samueltuckey explained. I suspect this issue has been overlooked, so I wanted to give it a friendly nudge, @tulios. The solution in #1148 seems simple and works for me.
Hey @isamert, are you using the latest version of kafkajs?
@samueltuckey I just realized we are a bit behind on the version. Do you have this problem on the latest version? I will try to upgrade today and see if it fixes this.