
Caused by: org.apache.avro.AvroRuntimeException: not open

Open · ErwinLee10 opened this issue · 0 comments

Hi,

I was having an issue running the connector; it failed with the following error:

org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:614)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.avro.AvroRuntimeException: not open
	at org.apache.avro.file.DataFileWriter.assertOpen(DataFileWriter.java:88)
	at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:311)
	at io.confluent.connect.hdfs.avro.AvroRecordWriterProvider$1.write(AvroRecordWriterProvider.java:83)
	at io.confluent.connect.hdfs.TopicPartitionWriter.writeRecord(TopicPartitionWriter.java:721)
	at io.confluent.connect.hdfs.TopicPartitionWriter.write(TopicPartitionWriter.java:384)
	at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:386)
	at io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:124)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)
	... 10 more
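For context, Avro's DataFileWriter guards every append with an assertOpen check, so the "not open" message means append was called on a writer that was never opened or had already been closed. The guard pattern can be sketched with a stand-in class (this is an illustration of the pattern, not the real Avro API):

```java
// Minimal sketch of the open/closed guard that produces the
// "not open" error (stand-in class, not Avro's DataFileWriter).
public class WriterGuardDemo {
    static class GuardedWriter {
        private boolean open = false;

        void create() { open = true; }   // analogous to DataFileWriter.create(...)
        void close()  { open = false; }  // analogous to DataFileWriter.close()

        void append(String record) {
            assertOpen();                // analogous to DataFileWriter.assertOpen()
            // ... write the record ...
        }

        private void assertOpen() {
            if (!open) {
                // Avro throws AvroRuntimeException("not open") here
                throw new RuntimeException("not open");
            }
        }
    }

    public static void main(String[] args) {
        GuardedWriter w = new GuardedWriter();
        w.create();
        w.append("first");      // fine: writer is open
        w.close();
        try {
            w.append("second"); // fails: writer was already closed
        } catch (RuntimeException e) {
            System.out.println("Caught: " + e.getMessage());
        }
    }
}
```

In the connector's case, the TopicPartitionWriter appears to hand a record to a writer whose underlying Avro file has already been closed, which is what surfaces in the stack trace above.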

Here is the configuration that I used:

connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
schema.registry.url=http://localhost:8071
value.converter.schema.registry.url=http://localhost:8071
flush.size=1
topics=test321
tasks.max=1
hdfs.url=hdfs://localhost:8020
value.converter=io.confluent.connect.avro.AvroConverter
key.converter=org.apache.kafka.connect.storage.StringConverter
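For reference, when the connector runs on a distributed Connect cluster, the same settings are submitted as JSON to the Connect REST API. A sketch, assuming a hypothetical connector name of hdfs-sink-test321 (the property values are copied verbatim from the list above):

```json
{
  "name": "hdfs-sink-test321",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "test321",
    "hdfs.url": "hdfs://localhost:8020",
    "flush.size": "1",
    "schema.registry.url": "http://localhost:8071",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://localhost:8071"
  }
}
```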

ErwinLee10 · Feb 17 '21 08:02