Logs in the queue that were not sent to Kafka will be lost.
Hi, I'm using the Kafka appender, and I have a question about an issue I ran into while using it.
Looking at the KafkaAppender currently in use, the following code is executed when the appender stops:
@Override
public void stop() {
    super.stop();
    if (lazyProducer != null && lazyProducer.isInitialized()) {
        try {
            lazyProducer.get().close();
        } catch (KafkaException e) {
            this.addWarn("Failed to shut down kafka producer: " + e.getMessage(), e);
        }
        lazyProducer = null;
    }
}
From the code above, it looks as if close() should block until everything has been transmitted to Kafka, but records still sitting in the producer's in-memory queue, i.e. not yet transmitted, can still be lost, right?
Right now, several log entries are randomly lost when the application shuts down. The loss does not happen if I delay termination with Thread.sleep() for a while before exiting, as sketched below.
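For reference, the workaround currently in place looks roughly like this (the entry point and the 5-second value are just illustrative assumptions):

public class Main {
    public static void main(String[] args) throws InterruptedException {
        // ... application work that emits log events via the Kafka appender ...

        // Workaround: give the producer's background sender thread time to
        // drain its buffer before the JVM exits. The delay is an arbitrary
        // guess, so it hides the race rather than fixing it.
        Thread.sleep(5_000L);
    }
}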
I'm currently using logback-kafka-appender-0.1.0.jar, but the newer versions do not seem to have changed this part much.
How can I solve this problem? Is there a way to make stop() wait until all the logs accumulated in the queue have been delivered?
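For illustration, the behavior I'd like stop() to have is what flush() gives a plain KafkaProducer: block until every record handed to send() so far has been acknowledged, and only then close. A minimal sketch, assuming a hypothetical local broker and a topic named "logs" (this is not the appender's actual code):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FlushBeforeClose {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("logs", "last log line before shutdown"));

        // flush() blocks until everything buffered so far has completed,
        // so nothing is left in the in-memory queue when close() runs.
        producer.flush();
        producer.close();
    }
}

If stop() drained the buffer like this before closing, a normal shutdown should not need the Thread.sleep() delay.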