kafka-connect-storage-common

Shared software among connectors that target distributed filesystems and cloud storage.

80 kafka-connect-storage-common issues, sorted by recently updated

Hi, I just cloned 5.2.1-post and tried running DefaultPartitionerTest and HourlyPartitionerTest, and both of them failed with a "PartitionException" as shown in the image below. Although I did try to build the repo...

We are using DailyPartitioner for our HDFS sink connector with Hive integration. The topic's source is Debezium, which captures changes from a source table that contains...

> [_From StackOverflow_](https://stackoverflow.com/q/53329362/2308683) Naturally, one might try to use `RegexRouter` to send multiple topics to a single directory. Say, data is coming from a JDBC Source connector: ```json "topics": "SQLSERVER-TEST-TABLE_TEST", "transforms":"dropPrefix", "transforms.dropPrefix.type":"org.apache.kafka.connect.transforms.RegexRouter",...
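The excerpt above is truncated, but a minimal sketch of how such a `RegexRouter` SMT rewrites the topic name (and therefore the directory the sink writes under) might look like the following; the regex and replacement values are assumptions inferred from the topic name shown above, not taken from the original question:

```java
import java.util.Map;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.transforms.RegexRouter;

public class DropPrefixExample {
  public static void main(String[] args) {
    // Configure the SMT the way a "dropPrefix" transform would:
    // strip the "SQLSERVER-TEST-" prefix so several topics can share one directory.
    RegexRouter<SinkRecord> router = new RegexRouter<>();
    router.configure(Map.of(
        "regex", "SQLSERVER-TEST-(.*)",   // assumed pattern; adjust to your topic naming
        "replacement", "$1"));

    SinkRecord original = new SinkRecord(
        "SQLSERVER-TEST-TABLE_TEST", 0, null, null, null, null, 0L);
    SinkRecord routed = router.apply(original);

    System.out.println(routed.topic());  // prints "TABLE_TEST"
    router.close();
  }
}
```

After routing, a sink that partitions by topic only ever sees the rewritten name, which is why several source topics can collapse into one output directory.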

Hi. I am using the HDFS sink connector to put my data into Hadoop from Kafka, and I am using FieldPartitioner with a field name. Kafka Connect creates the field name in...
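The question is cut off, but for context, a FieldPartitioner setup is usually just a couple of connector properties. A minimal sketch, assuming a hypothetical partition field called `department`:

```java
import java.util.HashMap;
import java.util.Map;

public class FieldPartitionerProps {
  public static void main(String[] args) {
    // Sketch of the connector properties being described; the field name
    // "department" is hypothetical. With this setup the sink writes files under
    // directories named <field>=<value>, e.g. .../department=engineering/.
    Map<String, String> props = new HashMap<>();
    props.put("connector.class", "io.confluent.connect.hdfs.HdfsSinkConnector");
    props.put("partitioner.class",
        "io.confluent.connect.storage.partitioner.FieldPartitioner");
    props.put("partition.field.name", "department");

    props.forEach((k, v) -> System.out.println(k + "=" + v));
  }
}
```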

The issue being fixed is that the current design of this interface makes `Partitioner` implementations effectively un-extensible, unless they need no parameters at all except those defined...

The `Partitioner` interface design is inefficient. `generatePartitionedPath` takes a topic, which is immutable per task, plus an `encodedPartition`, which is per-record. That leads to issues such as confluentinc/kafka-connect-hdfs#224, in...
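For reference, the interface the two reports above are talking about looks roughly like this (a simplified sketch of `io.confluent.connect.storage.partitioner.Partitioner`, written from memory rather than copied from the repo); the comments mark the per-task vs. per-record mismatch being described:

```java
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.sink.SinkRecord;

// Simplified sketch of io.confluent.connect.storage.partitioner.Partitioner.
public interface Partitioner<T> {
  void configure(Map<String, Object> config);

  // Per-record: derive the partition path fragment (e.g. "year=2024/month=01").
  String encodePartition(SinkRecord sinkRecord);

  // Mixes the task-constant topic with the per-record fragment above.
  String generatePartitionedPath(String topic, String encodedPartition);

  List<T> partitionFields();
}
```

Because `generatePartitionedPath` receives only the topic and the encoded partition string, a custom partitioner that needs any other per-record context has nowhere to get it from, which is the extensibility problem raised above.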

`AvroDataConfig`'s `SCHEMAS_CACHE_SIZE_CONFIG` is not the same as `StorageSinkConnectorConfig`'s `SCHEMA_CACHE_SIZE_CONFIG`, which leads to `AvroDataConfig` being created without the correct value for that configuration parameter.

Doing things like:

```xml
<dependency>
  <groupId>io.confluent</groupId>
  <artifactId>kafka-connect-avro-converter</artifactId>
  <version>${project.version}</version>
</dependency>
```

can lead to really weird errors in projects that declare kafka-connect-storage-common-parent as the parent in the POM, because `project.version` gets resolved to...

When creating an `AvroDataConfig` out of the values present in `StorageSinkConnectorConfig`, the method `avroDataConfig()` uses the `StorageSinkConnectorConfig` key names as keys in the map and thus ends up not passing the schema cache...
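Both of the reports above describe the same mismatch: the cache size is configured under `StorageSinkConnectorConfig.SCHEMA_CACHE_SIZE_CONFIG` but read under `AvroDataConfig.SCHEMAS_CACHE_SIZE_CONFIG`. Below is a minimal sketch of one way to bridge the two keys when building the `AvroDataConfig`; it is illustrative only, assumes the class and package names as they appear in the Confluent artifacts, and is not the project's actual `avroDataConfig()` implementation:

```java
import java.util.HashMap;
import java.util.Map;

import io.confluent.connect.avro.AvroDataConfig;
import io.confluent.connect.storage.StorageSinkConnectorConfig;

public class AvroDataConfigBridge {

  // Build an AvroDataConfig from the connector config, re-inserting the cache
  // size under the key name that AvroDataConfig actually looks up.
  public static AvroDataConfig avroDataConfig(StorageSinkConnectorConfig config) {
    Map<String, Object> props = new HashMap<>(config.originals());
    props.put(
        AvroDataConfig.SCHEMAS_CACHE_SIZE_CONFIG,                            // key AvroData reads
        config.getInt(StorageSinkConnectorConfig.SCHEMA_CACHE_SIZE_CONFIG)); // value the user set
    return new AvroDataConfig(props);
  }
}
```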