Ryan CrawCour
If the user configures `max.poll.records` to adjust the size of the batches read from Kafka, and the connector then exposes a `batch.size` to control the size of the batches it writes,...
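To make the two knobs concrete, here is a sketch of how they might sit side by side in a sink connector's properties (the `batch.size` name is the setting under discussion here, not an existing config, and `consumer.override.*` assumes the worker permits per-connector client config overrides):

```properties
# Kafka consumer side: cap on the number of records returned per poll
consumer.override.max.poll.records=500

# Connector side (proposed): cap on the number of documents written per batch
batch.size=100
```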
Thanks for raising this, will need to take a look.
Should look to implement the dead-letter queuing provided natively by Kafka Connect through the `errors.tolerance` config, as has now been done in the sink connector.
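For reference, a minimal sketch of the native Kafka Connect error handling being referred to, as it would be enabled on a sink connector (the topic name `cosmosdb-dlq` is just a placeholder):

```properties
# Tolerate failed records instead of killing the task
errors.tolerance=all

# Route failed records to a dead-letter topic
errors.deadletterqueue.topic.name=cosmosdb-dlq
errors.deadletterqueue.topic.replication.factor=1

# Attach error context (original topic, exception, etc.) as record headers
errors.deadletterqueue.context.headers.enable=true
```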
Thank you for bringing this to our attention. When you set your path for the id field, what did you set it to? It looks to me like you've set...
Thanks, Chris. Glad you have found a suitable workaround using SMTs for now. We will need to take a look at the strategies to ensure they are working as expected.
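For anyone else hitting this, a minimal sketch of the kind of SMT workaround described here, assuming the document id lives in a value field (the source field name `orderId` is hypothetical):

```properties
# Rename a value field to "id" so it is picked up as the Cosmos DB document id
transforms=RenameToId
transforms.RenameToId.type=org.apache.kafka.connect.transforms.ReplaceField$Value
transforms.RenameToId.renames=orderId:id
```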
ProvidedInValueStrategy & ProvidedInKeyStrategy do not appear to be working as expected. We need to be able to configure a path to the id field, in both the key & value...
This should be easy to implement. Provide a config for `topicMap`, and parse the string on `topic#container`. Need to consider what to do if: a) that config...
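A minimal parsing sketch, assuming the config value is a comma-separated list of `topic#container` pairs (the class and method names are illustrative, not the connector's actual API):

```java
import java.util.HashMap;
import java.util.Map;

public final class TopicContainerMap {

    // Parses e.g. "orders#orders-container,customers#customers-container"
    // into a topic -> container map.
    public static Map<String, String> parse(String config) {
        Map<String, String> map = new HashMap<>();
        if (config == null || config.trim().isEmpty()) {
            // The comment above leaves open what to do with a missing/empty
            // config; this sketch returns an empty map, failing fast is the
            // other obvious choice.
            return map;
        }
        for (String pair : config.split(",")) {
            String[] parts = pair.trim().split("#", 2);
            if (parts.length != 2 || parts[0].isEmpty() || parts[1].isEmpty()) {
                throw new IllegalArgumentException(
                        "Invalid topic#container mapping: " + pair);
            }
            map.put(parts[0], parts[1]);
        }
        return map;
    }
}
```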
@NathanNam could you provide a bit more context on what you are trying to achieve with this? Are you trying to capture the delete operation happening in the database? Cosmos...
> Or, are you trying to initiate a delete operation in the database by placing a "tombstone" message into a Kafka topic? Thanks @NathanNam, so it is this scenario...
So doc id (or primary key) **and** partition key should both be set for Cosmos DB to perform a delete efficiently. So you want them to **both** come from the Kafka...
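To illustrate the scenario, a hedged sketch of producing such a tombstone from Java, assuming a JSON key that carries both the document id and the partition key (the key shape, field names, and topic here are hypothetical, not a contract defined by the connector):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TombstoneExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical key shape: carries both the document id and the
            // partition key so the sink could issue an efficient point delete.
            String key = "{\"id\":\"order-123\",\"partitionKey\":\"customer-42\"}";

            // A null value is a Kafka tombstone -- the signal to delete.
            producer.send(new ProducerRecord<>("orders", key, null));
        }
    }
}
```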