logback-kafka-appender
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Hi, when I configured my logback.xml file and changed the root level to "debug", the application logged a lot of "org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms." messages and blocked my application. If I change the root level back to "info", it works fine. Why is this? Thanks a lot.
Exception log information
Lib information
I am also having the same issue, except that even after switching the log level to info, messages still don't reach the Kafka broker.
I am also having the same issue.
I am also having the same issue...
I am also having the same issue...
Change the hostname to 0.0.0.0.
I am also having the same issue.
When I make the following changes, I have the same issue.
@danielwegener Could you tell me why?
Because Kafka tries to recursively log to itself, which may lead it into a deadlock (which fortunately is eventually resolved by the metadata timeout, but still breaks your client).
The ensureDeferredAppends method queues all recursive log entries and delays the actual sending until a non-Kafka message is attempted to be logged, which "frees" them. However, as soon as you put ALL loggers to debug, Kafka internals also try to log debug information - and those are not all captured by startsWith(KAFKA_LOGGER_PREFIX). These debug logs are internal, so we cannot safely assume to catch all of them while still supporting multiple versions of the kafka-client library.
So the solution for now: do not enable global debug logging (rather, do it selectively per package; see the sketch below). The only really safe solution would be to shade the kafka-client with its transitive dependencies and replace its usages of slf4j with an implementation that either never logs to Kafka itself or tags all of its messages as messages that always get queued. But I am not really happy with that solution either (possible licensing issues, an appender release for each kafka-client release).
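To illustrate the per-package approach, here is a minimal logback.xml sketch (the kafkaAppender name and the com.example package are placeholders; only the idea of pinning org.apache.kafka to INFO comes from this thread):

```xml
<configuration>
  <!-- keep the kafka client internals quiet so they never
       recurse into the KafkaAppender -->
  <logger name="org.apache.kafka" level="INFO"/>

  <!-- enable debug only for your own packages -->
  <logger name="com.example" level="DEBUG"/>

  <!-- root stays at INFO instead of global DEBUG -->
  <root level="INFO">
    <appender-ref ref="kafkaAppender"/>
  </root>
</configuration>
```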
@danielwegener, I see, and I appreciate your reply.
@danielwegener So you need to update your configuration example from "debug" to "INFO".
I have the same problem. When I make the following change, it works: send a message after super.start() in the KafkaAppender.start() function. I don't know why.
```java
@Override
public void start() {
    // only error free appenders should be activated
    if (!checkPrerequisites()) return;

    if (partition != null && partition < 0) {
        partition = null;
    }

    lazyProducer = new LazyProducer();
    super.start();

    // eagerly send one dummy record so the producer fetches its
    // metadata at startup, before any log messages arrive
    final byte[] payload = "sssd".getBytes();
    final byte[] key = "sdsss".getBytes();
    final Long timestamp = System.currentTimeMillis();
    final ProducerRecord<byte[], byte[]> record =
            new ProducerRecord<>(topic, partition, timestamp, key, payload);
    lazyProducer.get().send(record);
    lazyProducer.get().flush();
}
```
@magicdogs How did you solve this problem?
@OneYearOldChen Update the logback.xml file and set the root level to info.
Add this to your logback-spring.xml:
<logger name="org.apache.kafka" level="info"/>
@Birdflying1005 good point :)
Can you guys imagine a doc/FAQ entry or something that would have helped you avoid this issue? I'd be happy to add it to the documentation.
Can you add spring.kafka.producer.retries=5 and request.timeout.ms=600000?
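If the appender in question is logback-kafka-appender itself (rather than Spring's own Kafka setup), those producer settings would be passed as producerConfig entries. A minimal sketch, assuming a kafkaAppender whose bootstrap servers are configured elsewhere:

```xml
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
  <!-- retry failed sends a few times instead of giving up -->
  <producerConfig>retries=5</producerConfig>
  <!-- allow up to 10 minutes for the broker to answer a request -->
  <producerConfig>request.timeout.ms=600000</producerConfig>
</appender>
```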
Hi, change the maxBlockTime parameter to 2000 ms.
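Assuming "maxBlockTime" refers to the Kafka producer's max.block.ms setting (whose 60000 ms default matches the timeout in the error above), a sketch of lowering it so an unreachable broker blocks the application for at most 2 seconds:

```xml
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
  <!-- fail fast: block at most 2000 ms waiting for metadata
       instead of the 60000 ms default -->
  <producerConfig>max.block.ms=2000</producerConfig>
</appender>
```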