kafka-connect-cosmosdb

[BUG] Bulk Execution fails in Sink Connector for partition key that is not "id"

Open TheovanKraay opened this issue 1 year ago • 0 comments

Description

Bulk execution fails when the target container is partitioned by any property other than "id".

Error Message: [2023-04-27 07:47:27,333] ERROR Could not upload record to CosmosDb, but tolerance is set to all. Error message: Unable to write record to CosmosDB: {null}, value schema {null}, exception {{'ClassName':'BulkOperationFailedException','userAgent':'azsdk-java-cosmos/4.42.0 Linux/3.10.0-1160.88.1.el7.x86_64 JRE/11.0.8','statusCode':400,'resourceAddress':null,'innerErrorMessage':'Request failed with effectiveStatusCode: {400}, effectiveSubStatusCode: {0}, kafkaOffset: {10}, kafkaPartition: {1}, topic: {test_topic}','causeInfo':null,'responseHeaders':'{x-ms-substatus=0}'}} (com.azure.cosmos.kafka.connect.sink.CosmosDBSinkTask)

Expected Behavior

Bulk execution should work for any choice of partition key.

Reproduce

Send messages to Cosmos DB via the sink connector to a container that is partitioned by a property other than "id".
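A minimal reproduction sketch: create a container whose partition key path is not /id, then produce a record to the connector's topic. The account, resource group, database, container, and topic names below are placeholders, and "pk" is just an example partition key property; adjust to your environment.

    # Create a container partitioned on /pk (not /id)
    az cosmosdb sql container create \
        --account-name <cosmos-account> \
        --resource-group <resource-group> \
        --database-name <database> \
        --name <container> \
        --partition-key-path "/pk"

    # Produce a record whose partition key field is not "id"
    kafka-console-producer --bootstrap-server localhost:9092 --topic test_topic
    > {"id": "1", "pk": "partition-A", "value": 42}

With the sink connector's bulk mode left at its default (enabled), writes to this container surface the 400 error shown above.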

Additional Context

Workaround: disabling bulk execution when adding the connector in the Kafka Connect environment avoids the issue:

"connect.cosmos.sink.bulk.enabled": false

TheovanKraay · May 03 '23 18:05