logback-kafka-appender

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

magicdogs opened this issue on Mar 17 '17 · 18 comments

Hi, when I configured the logback.xml file and changed the root level to "debug", the application logged a lot of "org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms." messages and blocked my application. If I change the logback.xml root level to "info", it works fine. Why is this? Thanks a lot.

Exception log information: (screenshot)

Lib information: (screenshot)

magicdogs avatar Mar 17 '17 07:03 magicdogs

I am also having the same issue, except that even after switching the log level to info, messages still don't reach the Kafka broker.

shades198 avatar Mar 21 '17 14:03 shades198

I am also having the same issue.

wuming333666 avatar Mar 23 '17 08:03 wuming333666

I am also having the same issue...

zjingchuan avatar Jul 13 '17 03:07 zjingchuan

I am also having the same issue...

iDube avatar Nov 01 '17 07:11 iDube

Change the hostname to 0.0.0.0; see the sketch below.
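
A sketch of what this suggestion likely refers to, assuming the timeout is caused by the client not being able to reach the broker: binding the broker's listener to all interfaces in Kafka's server.properties (the host in advertised.listeners is a placeholder):

```properties
# server.properties: bind the listener to all interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# advertised.listeners must be a hostname/IP the clients can actually
# resolve and reach; it cannot be 0.0.0.0
advertised.listeners=PLAINTEXT://your.kafka.host:9092
```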

xiaods avatar Nov 06 '17 13:11 xiaods

I am also having the same issue.

feilongyang avatar Nov 10 '17 06:11 feilongyang

When I make the following changes, I still have the same issue. (screenshot) @danielwegener Could you tell me why?

shikonglaike avatar Jan 18 '18 09:01 shikonglaike

Because Kafka tries to recursively log to itself, which may lead it into a deadlock (fortunately it is eventually resolved by the metadata timeout, but it still breaks your client). The ensureDeferredAppends mechanism queues all recursive log entries and delays the actual sending until a non-Kafka message is logged, which "frees" them. However, as soon as you set ALL loggers to debug, the Kafka internals also try to log debug information, and those are not all captured by startsWith(KAFKA_LOGGER_PREFIX). These debug logs are internal, so we cannot safely assume to catch all of them while still supporting multiple versions of the kafka-client library.

So the solution for now: do not enable global debug logging (rather, enable it selectively per package), as sketched below. The only really safe solution would be to shade the kafka-client with its transitive dependencies and replace its usages of slf4j with an implementation that either never logs to Kafka itself or tags all of its messages as messages that always get queued. But I am not really happy with that solution either (possible licensing issues, and an appender release for each kafka-client release).
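
A minimal logback.xml sketch of the selective approach (the appender name KAFKA and the application package are placeholders):

```xml
<configuration>
  <!-- keep the root at INFO so the kafka-client's own debug output
       never reaches the Kafka appender recursively -->
  <root level="INFO">
    <appender-ref ref="KAFKA"/>
  </root>
  <!-- enable debug selectively, only for your own packages -->
  <logger name="com.example.myapp" level="DEBUG"/>
</configuration>
```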

danielwegener avatar Jan 18 '18 11:01 danielwegener

@danielwegener I see, and I appreciate your reply.

shikonglaike avatar Jan 19 '18 00:01 shikonglaike

@danielwegener So you need to update your configuration example from "debug" to "INFO".

YouXiang-Wang avatar Mar 11 '18 10:03 YouXiang-Wang

I have the same problem. When I made the following change, it worked: send a message right after super.start() in the KafkaAppender.start() method. I don't know why.

```java
@Override
public void start() {
    // only error-free appenders should be activated
    if (!checkPrerequisites()) return;

    if (partition != null && partition < 0) {
        partition = null;
    }

    lazyProducer = new LazyProducer();

    super.start();

    // workaround: push a dummy record through the producer right after start-up
    final byte[] payload = "sssd".getBytes();
    final byte[] key = "sdsss".getBytes();
    final Long timestamp = System.currentTimeMillis();
    final ProducerRecord<byte[], byte[]> record =
            new ProducerRecord<>(topic, partition, timestamp, key, payload);
    lazyProducer.get().send(record);
    lazyProducer.get().flush();
}
```

zhaojingyang avatar Mar 28 '18 02:03 zhaojingyang

@magicdogs How did you solve this problem?

OneYearOldChen avatar Apr 09 '18 03:04 OneYearOldChen

@OneYearOldChen Update the logback.xml file and set the root level to info...

magicdogs avatar Apr 09 '18 03:04 magicdogs

Add this to your logback-spring.xml: <logger name="org.apache.kafka" level="info"/> (shown in context below)
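
A minimal sketch of that logger in context (the appender name KAFKA is a placeholder); this keeps global debug logging while pinning the kafka-client's own loggers to INFO so their debug output cannot recurse into the appender:

```xml
<configuration>
  <!-- silence kafka-client internals before they reach the Kafka appender -->
  <logger name="org.apache.kafka" level="INFO"/>
  <root level="DEBUG">
    <appender-ref ref="KAFKA"/>
  </root>
</configuration>
```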

lichenglin avatar May 21 '18 07:05 lichenglin

@Birdflying1005 good point :)

danielwegener avatar Jun 11 '18 06:06 danielwegener

Can you guys imagine a doc/FAQ entry or something that would have helped you avoid this issue? I'd be happy to add it to the documentation.

danielwegener avatar Jun 11 '18 06:06 danielwegener

Can you add spring.kafka.producer.retries=5 and request.timeout.ms=600000?
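
Note that spring.kafka.producer.* properties configure Spring's own producer, not this appender. If the appender's producer is the one timing out, the equivalent Kafka producer properties can be passed through producerConfig entries; a minimal sketch, with placeholder bootstrap server and topic:

```xml
<appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
  <encoder>
    <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
  </encoder>
  <topic>logs</topic>
  <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
  <!-- retry transient sends and allow more time for metadata updates -->
  <producerConfig>retries=5</producerConfig>
  <producerConfig>request.timeout.ms=600000</producerConfig>
</appender>
```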

madanctc avatar Jan 23 '20 08:01 madanctc

Hi, change the maxBlockTime parameter to 2000 ms.
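
The producer setting this likely refers to is max.block.ms, which caps how long send() may block waiting for metadata (the 60000 ms in the original error is its default). Following the producerConfig form shown above, a sketch:

```xml
<producerConfig>max.block.ms=2000</producerConfig>
```

This makes the appender fail fast instead of blocking the application for a minute, at the cost of dropping log events while the broker is unreachable.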

omrryldrrm avatar Jan 04 '22 13:01 omrryldrrm