
InvalidReceiveException: Invalid receive (size = 369295616 larger than 524288)

Open · shubhamvasaikar opened this issue on Jul 31, 2018 • 6 comments

I have a single-broker Kafka and ZooKeeper setup, configured to use GSSAPI along with PLAIN. When I start Burrow, I get the following warning in the Kafka logs:

org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 369295616 larger than 524288)
	at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:132)
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
	at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:248)
	at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:81)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:460)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:398)
	at kafka.network.Processor.poll(SocketServer.scala:535)
	at kafka.network.Processor.run(SocketServer.scala:452)
	at java.lang.Thread.run(Thread.java:748)
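For what it's worth, the rejected size is not random: 369295616 is 0x16030100 in hex, which is the start of a TLS handshake record (content type 0x16, version 0x0301 for TLS 1.0) being misread by the broker as a Kafka frame length. That pattern usually means a TLS client is talking to a non-TLS listener, which seems plausible here, since the client profile below enables TLS while the broker listener is SASL_PLAINTEXT. A minimal sketch of the decoding in Go (stdlib only):

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	const size uint32 = 369295616 // the value from the broker log

	var b [4]byte
	binary.BigEndian.PutUint32(b[:], size)

	// Prints "16 03 01 00": TLS content type 0x16 (handshake) and
	// protocol version 0x0301, i.e. a ClientHello header rather
	// than a Kafka frame length.
	fmt.Printf("% x\n", b)
}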

I am also getting this error in burrow.log:

{"level":"info","ts":1533065406.9009063,"msg":"starting evaluations","type":"coordinator","name":"notifier"}
{"level":"error","ts":1533065407.5871475,"msg":"failed to start client","type":"module","coordinator":"cluster","class":"kafka","name":"local","error":"kafka: client has run out of available brokers to talk to (Is your cluster reachable?)"}

This is my Burrow config:

[general]
pidfile="burrow.pid"
stdout-logfile="burrow.out"
access-control-allow-origin="*"

[logging]
filename="logs/burrow.log"
level="info"
maxsize=100
maxbackups=30
maxage=10
use-localtime=true
use-compression=false

[zookeeper]
servers=[ "ak.example.com:2181" ]
timeout=6
root-path="/opt/kafka/data"

[client-profile.test]
client-id="burrow-test"
kafka-version="1.0.0"
sasl="mysasl"
tls="mytls"

[tls.mytls]
noverify=true

[sasl.mysasl]
username="burrow"
password="burrow"
handshake-first=true

[cluster.local]
class-name="kafka"
servers=[ "ak.example.com:9092" ]
client-profile="test"

[consumer.local]
class-name="kafka"
cluster="local"
servers=[ "ak.example.com:9092" ]
client-profile="test"
group-blacklist="^(console-consumer-|python-kafka-consumer-|quick-).*$"
group-whitelist=""

[consumer.local_zk]
class-name="kafka_zk"
cluster="local"
servers=[ "ak.example.com:2181" ]
zookeeper-path="/opt/kafka/data"
zookeeper-timeout=30
group-blacklist="^(console-consumer-|python-kafka-consumer-|quick-).*$"
group-whitelist=""

[httpserver.default]
address=":80"

[storage.default]
class-name="inmemory"
workers=20
intervals=15
expire-group=604800
min-distance=1

I have also added this section to my JAAS config:

org.apache.kafka.common.security.plain.PlainLoginModule required
  username="burrow"
  password="burrow";

Finally, this is what my server.properties looks like:

listeners=SASL_PLAINTEXT://ak.example.com:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI,PLAIN
sasl.kerberos.service.name=kafka
advertised.listeners=SASL_PLAINTEXT://ak.example.com:9092
allow.everyone.if.no.acl.found=true
principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal
security.protocol=SASL_PLAINTEXT
super.users=user:kafka,kafkausr
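Since the only listener here is SASL_PLAINTEXT, any client that starts a TLS handshake on port 9092 should be cut off immediately, and will reproduce the exact InvalidReceiveException above in the broker log. A quick probe to confirm, as a sketch using only the Go standard library:

package main

import (
	"crypto/tls"
	"log"
)

func main() {
	// Attempt a TLS handshake against the SASL_PLAINTEXT listener.
	conn, err := tls.Dial("tcp", "ak.example.com:9092",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		// Expected outcome: the broker reads the ClientHello as a
		// Kafka frame, logs InvalidReceiveException, and closes
		// the connection.
		log.Fatalf("TLS handshake failed (expected on SASL_PLAINTEXT): %v", err)
	}
	defer conn.Close()
	log.Println("TLS handshake succeeded; the listener is TLS after all")
}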

shubhamvasaikar · Jul 31 '18

Same issue here. Any solution yet?

Sammy2005 · Feb 07 '20

I am also facing the same issue. Any solutions for this? @shubhamvasaikar @Sammy2005

akku16 · Nov 04 '20

This problem shows up after running for a while.

twz999 · Jun 03 '21

The problem shows up after running for a while, and then one of the Kafka nodes dies.

twz999 · Jun 03 '21

Anyone figure it out?

ralyodio · Sep 09 '22