fluency
Connection reset by peer errors
Hi @komamitsu ,
I hope you are doing well.
We use Fluency to send logs to td-agent, but we are getting the following error: java.io.IOException: Connection reset by peer.
Given the parameters connectionTimeoutMilli=5000, readTimeoutMilli=5000, and waitBeforeCloseMilli=1000, could you please advise on the potential reasons for this error? Thank you. (A configuration sketch with these values follows the stack trace below.)
56007051 WARN [pool-1-thread-1] org.komamitsu.fluency.fluentd.ingester.sender.RetryableSender - Sender failed to send data. sender=RetryableSender{baseSender=TCPSender{config=Config{host='xxx', port=24224, connectionTimeoutMilli=5000, readTimeoutMilli=5000, waitBeforeCloseMilli=1000} Config{senderErrorHandler=null}} NetworkSender{config=Config{host='siv-admin-cluster-log.perf.fastretailing.cn', port=24224, connectionTimeoutMilli=5000, readTimeoutMilli=5000, waitBeforeCloseMilli=1000} Config{senderErrorHandler=null}, failureDetector=null} org.komamitsu.fluency.fluentd.ingester.sender.TCPSender@52e919e6, retryStrategy=ExponentialBackOffRetryStrategy{config=Config{baseIntervalMillis=400, maxIntervalMillis=30000} Config{maxRetryCount=7}} RetryStrategy{config=Config{baseIntervalMillis=400, maxIntervalMillis=30000} Config{maxRetryCount=7}}, isClosed=false} org.komamitsu.fluency.fluentd.ingester.sender.RetryableSender@116c37a0, retry=0
java.io.IOException: Connection reset by peer
    at java.base/sun.nio.ch.FileDispatcherImpl.writev0(Native Method)
    at java.base/sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
    at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:182)
    at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:130)
    at java.base/sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:493)
    at java.base/java.nio.channels.SocketChannel.write(SocketChannel.java:507)
    at org.komamitsu.fluency.fluentd.ingester.sender.TCPSender.sendBuffers(TCPSender.java:86)
    at org.komamitsu.fluency.fluentd.ingester.sender.TCPSender.sendBuffers(TCPSender.java:31)
    at org.komamitsu.fluency.fluentd.ingester.sender.NetworkSender.sendInternal(NetworkSender.java:102)
    at org.komamitsu.fluency.fluentd.ingester.sender.FluentdSender.sendInternalWithRestoreBufferPositions(FluentdSender.java:74)
    at org.komamitsu.fluency.fluentd.ingester.sender.FluentdSender.send(FluentdSender.java:56)
    at org.komamitsu.fluency.fluentd.ingester.sender.RetryableSender.sendInternal(RetryableSender.java:77)
    at org.komamitsu.fluency.fluentd.ingester.sender.FluentdSender.sendInternalWithRestoreBufferPositions(FluentdSender.java:74)
    at org.komamitsu.fluency.fluentd.ingester.sender.FluentdSender.send(FluentdSender.java:56)
    at org.komamitsu.fluency.fluentd.ingester.FluentdIngester.ingest(FluentdIngester.java:87)
    at org.komamitsu.fluency.buffer.Buffer.flushInternal(Buffer.java:357)
    at org.komamitsu.fluency.buffer.Buffer.flush(Buffer.java:112)
    at org.komamitsu.fluency.flusher.Flusher.runLoop(Flusher.java:66)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
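For context, a sender configured with these values would look roughly like the sketch below. This is a minimal sketch, assuming FluencyBuilderForFluentd exposes setters mirroring the Config field names visible in the WARN log (setWaitBeforeCloseMilli in particular is an assumption); the host and tag are placeholders.

    import java.util.HashMap;
    import java.util.Map;

    import org.komamitsu.fluency.Fluency;
    import org.komamitsu.fluency.fluentd.FluencyBuilderForFluentd;

    public class FluencyConfigSketch {
        public static void main(String[] args) throws Exception {
            FluencyBuilderForFluentd builder = new FluencyBuilderForFluentd();

            // Timeouts matching the Config values reported in the WARN log.
            builder.setConnectionTimeoutMilli(5000);
            builder.setReadTimeoutMilli(5000);
            // Assumed setter mirroring the waitBeforeCloseMilli Config field.
            builder.setWaitBeforeCloseMilli(1000);

            // "fluentd.example.com" and the tag "app.logs" are placeholders.
            Fluency fluency = builder.build("fluentd.example.com", 24224);

            Map<String, Object> event = new HashMap<>();
            event.put("message", "hello");
            fluency.emit("app.logs", event);

            // Flush any buffered events and close the connection cleanly.
            fluency.close();
        }
    }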
Experiencing the same issue using the library inside Spark stages.
Looks like the server side (Fluentd) sent a TCP RST packet. I recommend monitoring the TCP packets between Fluentd and Fluency to isolate the cause.
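To make a packet capture easier to correlate with the application, a client-side error handler can timestamp each failed send. A minimal sketch, assuming the builder exposes setErrorHandler (the WARN log above shows senderErrorHandler=null, which suggests such a hook exists in the sender config); the host is a placeholder.

    import org.komamitsu.fluency.Fluency;
    import org.komamitsu.fluency.fluentd.FluencyBuilderForFluentd;

    public class FluencyErrorHandlerSketch {
        public static void main(String[] args) throws Exception {
            FluencyBuilderForFluentd builder = new FluencyBuilderForFluentd();

            // Print a timestamp for each send failure so the moment of the RST
            // can be matched against a tcpdump/Wireshark capture taken on the
            // same host. setErrorHandler is an assumed builder method inferred
            // from the senderErrorHandler field in the WARN log.
            builder.setErrorHandler(e -> System.err.println(
                    System.currentTimeMillis() + " Fluency send failed: " + e));

            // Placeholder host and port.
            Fluency fluency = builder.build("fluentd.example.com", 24224);
            // ... emit events as usual ...
            fluency.close();
        }
    }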