Ryan CrawCour
Another option would be to add a `withStartTime` configuration option that would allow a user to set a specific start date and time, or to read from the beginning of the life...
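A minimal sketch of how such an option could be interpreted, assuming a hypothetical convention where an empty value means "read the feed from the beginning" and anything else is an ISO-8601 timestamp (the property name and semantics here are illustrative, not the connector's actual API):

```java
import java.time.Instant;

// Hypothetical interpretation of a withStartTime-style config option.
public class StartTimeConfig {
    // Sentinel meaning "start from the beginning of the change feed".
    public static final Instant BEGINNING = Instant.EPOCH;

    public static Instant resolveStartTime(String configured) {
        if (configured == null || configured.isEmpty()) {
            return BEGINNING; // no start time set: read the full history
        }
        return Instant.parse(configured); // e.g. "2021-05-01T00:00:00Z"
    }
}
```

Resolving the value up front like this keeps the "from the beginning" and "from a point in time" cases on one code path when the feed is opened.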
Good request. Makes sense.
Sounds like a bug in the Cosmos DB ChangeFeed logic, either internally in their Java SDK or in our implementation of the ChangeFeed consumer. Have you tracked down where this is...
Now if you take that document out of Kafka again with the Sink connector and write it back to Cosmos DB, you'll get it in Cosmos DB with `_lsn` added...
We should also ensure that the Source connector handles this Invalid JSON error correctly. Need to think about what "correctly" is. Can we log the error, dead-letter this document...
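For reference, Kafka Connect already ships framework-level error handling; its dead letter queue properties apply to sink connectors (so source-side dead-lettering would still need connector logic), but the familiar shape of that configuration looks like this (the topic name below is an example, not a real default):

```properties
# Tolerate conversion/transform errors instead of failing the task
errors.tolerance=all
errors.log.enable=true
errors.log.include.messages=true
# Sink connectors only: route failed records to a DLQ topic
errors.deadletterqueue.topic.name=cosmos.deadletter
errors.deadletterqueue.context.headers.enable=true
```

Whatever the source connector does, matching these existing semantics (log, skip, dead-letter) would feel consistent to Connect users.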
Will implement additional code in the source connector to filter system properties from the Jackson object property bag before passing records to Kafka. The exact list of properties is still to be defined...
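A sketch of what that filtering step could look like, written against a plain `Map` property bag for self-containment; the property list below is an assumption based on Cosmos DB's documented system fields (`_rid`, `_self`, `_etag`, `_attachments`, `_ts`, `_lsn`) and, per the comment above, the final set is still to be defined:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class SystemPropertyFilter {
    // Candidate list only; the exact set of properties is still to be defined.
    private static final Set<String> SYSTEM_PROPERTIES = new HashSet<>(
        Arrays.asList("_rid", "_self", "_etag", "_attachments", "_ts", "_lsn"));

    // Returns a copy of the document with Cosmos DB system properties removed,
    // so they are never forwarded into the Kafka record value.
    public static Map<String, Object> strip(Map<String, Object> document) {
        Map<String, Object> filtered = new LinkedHashMap<>(document);
        filtered.keySet().removeAll(SYSTEM_PROPERTIES);
        return filtered;
    }
}
```

Copying rather than mutating keeps the original document intact in case the connector needs the system properties (e.g. `_lsn` for ordering) internally after the record is produced.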
Suggested workaround for now, until this is implemented: use a Single Message Transform (as documented) to filter out the properties you wish to drop, OR have Connect output...
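As one concrete shape of that workaround, Kafka Connect's built-in `ReplaceField` SMT can drop named fields from the record value; the transform alias and field list here are illustrative (note the drop-list property is `blacklist` on older Connect versions and `exclude` on newer ones):

```properties
transforms=dropSystemProps
transforms.dropSystemProps.type=org.apache.kafka.connect.transforms.ReplaceField$Value
# Use "blacklist" instead of "exclude" on older Kafka Connect versions
transforms.dropSystemProps.exclude=_rid,_self,_etag,_attachments,_ts,_lsn
```

This runs per record inside the Connect worker, so no connector code changes are needed.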
Follow the pattern used by the Spark 3 connector, which makes the inclusion of system properties and timestamps configurable: https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/cosmos/azure-cosmos-spark_3-1_2-12/docs/configuration-reference.md
Batching appears to be quite a common pattern in sink connectors, offered as a knob for tuning throughput. Connectors such as the S3 sink, MongoDB sink, JDBC sink, HTTP...
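The usual shape of that knob can be sketched as a buffer that accumulates records in `put()` and flushes once a configured batch size is reached; the class and parameter names below are illustrative, not taken from any of the connectors mentioned:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the common sink-connector batching pattern: accumulate records
// and hand them to the backend writer one batch at a time.
public class BatchingBuffer<T> {
    private final int maxBatchSize;          // e.g. a "batch.size" connector config
    private final List<T> buffer = new ArrayList<>();
    private final Consumer<List<T>> writer;  // writes one batch to the target store

    public BatchingBuffer(int maxBatchSize, Consumer<List<T>> writer) {
        this.maxBatchSize = maxBatchSize;
        this.writer = writer;
    }

    public void add(T record) {
        buffer.add(record);
        if (buffer.size() >= maxBatchSize) {
            flush();
        }
    }

    // Also called from the connector's flush()/close() to drain partial batches.
    public void flush() {
        if (!buffer.isEmpty()) {
            writer.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```

Larger batches amortize per-request overhead against the backend; the trade-off is latency and the size of any retry on failure, which is why connectors expose it as a tunable rather than a fixed value.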
The Java SDK for Cosmos DB does not have batching support yet. It is currently in preview; once this feature has been stabilized and released as GA, we can...