First few messages getting dropped
My Kafka Appender is as follows:
<appender name="fast-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
    <topic>intake-app-log</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
    <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
    <producerConfig>acks=0</producerConfig>
    <!--<producerConfig>linger.ms=1000</producerConfig>-->
    <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
    <producerConfig>max.block.ms=0</producerConfig>
    <!--<producerConfig>batch.size=0</producerConfig>-->
    <!--<producerConfig>buffer.memory=43554432</producerConfig>-->
    <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
    <!-- fallback appender -->
    <appender-ref ref="FILE" />
</appender>
The first few log messages are being dropped, presumably because the producer is created lazily (see https://github.com/danielwegener/logback-kafka-appender/issues/53#issuecomment-346704533).
I don't want these messages to go to the fallback appender. I also cannot set max.block.ms above 0, because blocking the application is not acceptable for my use case. The workaround I am using right now is to emit a dummy log message when the application starts and then sleep the thread for 500 ms:
logger.info("sacrifice me");
Thread.sleep(500);
This hack solves the problem for now, but is there a way to disable the lazy initialization of the producer, or am I missing an option here?
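For completeness, the sacrificial-log workaround can be wrapped in a small startup helper. This is only a sketch of the hack described above: `java.util.logging` is used here as a stand-in for the SLF4J/Logback logger in the original snippet, and the 500 ms wait is the same arbitrary guess from the question, not a guaranteed bound on how long the producer needs to connect.

```java
import java.util.concurrent.TimeUnit;
import java.util.logging.Logger;

public class KafkaAppenderWarmup {

    // Stand-in for the SLF4J logger used in the original snippet.
    private static final Logger LOG = Logger.getLogger(KafkaAppenderWarmup.class.getName());

    /**
     * Emits a throwaway log message so the appender creates its Kafka
     * producer, then waits so the producer can finish connecting before
     * real log traffic starts. The sacrificial message itself may still
     * be dropped.
     */
    public static void warmUp(long waitMillis) throws InterruptedException {
        LOG.info("sacrifice me");
        TimeUnit.MILLISECONDS.sleep(waitMillis);
    }

    public static void main(String[] args) throws InterruptedException {
        warmUp(500); // call once, as early as possible during application startup
    }
}
```

Calling `warmUp` once at the top of `main` (or from an equivalent startup hook) reproduces the hack in one place instead of scattering the dummy log and sleep through application code.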
Thanks
Increasing max.block.ms fixed the dropped messages for me.
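If some startup blocking is tolerable, raising the timeout is a one-line producer config change in the appender. The 5000 ms value below is an arbitrary example, not a recommendation:

```xml
<!-- give the producer up to 5 s to connect/fetch metadata before dropping;
     note the logging thread may block for up to that long at startup -->
<producerConfig>max.block.ms=5000</producerConfig>
```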