kafka-connect-fs
Kafka Connect FileSystem Connector
Your connector is probably one of the better-explained connectors available, and I really appreciate the work you have put into this. Unfortunately, I am having a problem getting it...
A file is uploaded to a folder in the S3 bucket s3://bucket-name/folder1/folder2/. The SQS queue gets a notification, but the connector doesn't do anything.
```
connector.class=com.github.mmolimar.kafka.connect.fs.FsSourceConnector
policy.s3_event_notifications.poll=30000
errors.log.include.messages=true
policy.regexp=.*
```
...
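For comparison, a minimal sketch of an S3 event notifications setup might look like the following. The policy class name and the `policy.s3_event_notifications.queue` key are assumptions based on the connector's naming conventions, not taken from the report above, so verify them against the docs:
```
# Hypothetical sketch; the policy class and queue key are assumptions.
name=s3-events-source
connector.class=com.github.mmolimar.kafka.connect.fs.FsSourceConnector
tasks.max=1
fs.uris=s3a://bucket-name/folder1/folder2/
topic=connect-test
policy.class=com.github.mmolimar.kafka.connect.fs.policy.S3EventNotificationsPolicy
policy.s3_event_notifications.queue=my-sqs-queue
policy.s3_event_notifications.poll=30000
policy.regexp=.*
file_reader.class=com.github.mmolimar.kafka.connect.fs.file.reader.TextFileReader
```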
Hello @mmolimar, I see in the source code a couple of places where the task can fail more or less silently from a Kafka Connect point of view, as the...
Hi. Can the source connector handle formatted JSON records? I have a file containing JSON records like this:
```
{
  "Employee": {
    "header": {
      "username": "abc",
      "reference": "1d0d5113-3d22-44c0-99f5-f6ff642f0233"
    },
    "payload": {
      "name": "hello",
      "sal": ""
    }
  }
}
{
  "Employee": {
```
...
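If it helps, a minimal sketch of pointing a JSON reader at such a file might look like this. The `JsonFileReader` and `SimplePolicy` class names are assumptions based on the connector's naming conventions, and whether the reader accepts pretty-printed, concatenated records like the sample above is exactly what would need testing:
```
# Hypothetical sketch; reader and policy class names are assumptions.
name=json-file-source
connector.class=com.github.mmolimar.kafka.connect.fs.FsSourceConnector
tasks.max=1
fs.uris=file:///data/employees
topic=employees
policy.class=com.github.mmolimar.kafka.connect.fs.policy.SimplePolicy
policy.regexp=.*\.json
file_reader.class=com.github.mmolimar.kafka.connect.fs.file.reader.JsonFileReader
```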
We regularly perform security scans on Confluent Hub connectors, as per Confluent's security policy. Unfortunately, this connector has been flagged as having unacceptable vulnerabilities, and our policy is to escalate...
Hi @mmolimar. While using the FS source connector against a Hive table's HDFS location, it's not able to fetch the partition column. Please provide a solution or a way to fix this...
Below is my connect-file-source.properties:
```
name=local-file-source1
connector.class=com.github.mmolimar.kafka.connect.fs.FsSourceConnector
tasks.max=1
fs.uris=file:///data/test_file
topic=connect-test1
policy.class=com.github.mmolimar.kafka.connect.fs.policy.SleepyPolicy
policy.sleepy.sleep=50000
policy.recursive=true
poll.interval.ms=0
policy.regexp=.*
policy.batch_size=0
#policy.cleanup=none
file_reader.class=com.github.mmolimar.kafka.connect.fs.file.reader.TextFileReader
file_reader.batch_size=0
```
When I run:
```
echo "bbb" >> /data/test_file/1.txt
echo "ddd" >> /data/test_file/1.txt
```
I got...
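As a usage note, a standalone worker is the usual way to exercise a config like this (`connect-standalone.sh` is standard Kafka tooling; the paths here are placeholders):
```
bin/connect-standalone.sh config/connect-standalone.properties connect-file-source.properties
```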
Fixes #101 by using a new URI instance to replace the usage of workingDir.toUri().
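A minimal sketch of the kind of change described, assuming a hypothetical helper (this is illustrative, not the actual patch):
```java
import java.net.URI;
import java.net.URISyntaxException;

// Hypothetical sketch: build the URI from the configured string itself
// instead of deriving it via workingDir.toUri(), which can resolve to a
// different location than the one that was configured.
public class UriFix {
    static URI fromConfig(String configuredUri) throws URISyntaxException {
        return new URI(configuredUri); // replaces workingDir.toUri()
    }

    public static void main(String[] args) throws URISyntaxException {
        System.out.println(fromConfig("hdfs://namenode:8020/data/input"));
    }
}
```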
Fixes #100. Currently, the Parquet file batcher calls `hasNext` while seeking the file, which itself checks whether `seeked == true`. This leads to the file reader repeatedly reading the second batch...
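As a general illustration of why the fix helps, here is a minimal sketch (not the connector's actual code; all names are invented) of a batched reader whose seek computes the batch boundary directly instead of calling a side-effecting `hasNext`:
```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of seeking in a batched reader without side effects.
public class BatchedReader implements Iterator<String> {
    private final List<String> records; // stands in for the file's contents
    private final int batchSize;
    private int offset = 0;             // absolute index of the next record

    public BatchedReader(List<String> records, int batchSize) {
        this.records = records;
        this.batchSize = batchSize;
    }

    // Reposition by computing the batch boundary directly. The reported bug
    // came from calling hasNext() during the seek: it consumed the `seeked`
    // flag and loaded a batch too early, so the reader kept returning the
    // second batch. Keeping seek() free of read side effects avoids that.
    public void seek(int recordOffset) {
        offset = (recordOffset / batchSize) * batchSize; // start of that batch
    }

    @Override
    public boolean hasNext() {
        return offset < records.size(); // pure check, no batch loading here
    }

    @Override
    public String next() {
        return records.get(offset++);
    }

    public static void main(String[] args) {
        BatchedReader reader =
                new BatchedReader(Arrays.asList("r0", "r1", "r2", "r3", "r4", "r5"), 2);
        reader.seek(3);                  // lands at the start of batch 1 -> "r2"
        while (reader.hasNext()) {
            System.out.print(reader.next() + " "); // prints: r2 r3 r4 r5
        }
    }
}
```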
Hello @mmolimar. Do you have an example configuration for reading EBCDIC files and putting the content in a Kafka topic? I understand we can use this connector to do that...
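If the installed version ships a COBOL/EBCDIC file reader, a starting point might look like the sketch below. Note that the reader class name and the copybook property are assumptions to verify against the connector's documentation, and the paths are placeholders:
```
# Hypothetical sketch; the reader class and copybook property are assumptions.
name=ebcdic-file-source
connector.class=com.github.mmolimar.kafka.connect.fs.FsSourceConnector
tasks.max=1
fs.uris=file:///data/ebcdic
topic=ebcdic-records
policy.class=com.github.mmolimar.kafka.connect.fs.policy.SimplePolicy
policy.regexp=.*\.dat
file_reader.class=com.github.mmolimar.kafka.connect.fs.file.reader.CobolFileReader
file_reader.cobol.copybook.path=file:///data/copybooks/employee.cpy
```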