Andrea Cosentino
> @oscerd Doesn't look like setting `converter` to `org.apache.kafka.connect.converters.ByteArrayConverter` worked for us either.

Try to build the project with the fix suggested above.
We are investigating what the problem could be. @valdar is looking at that.
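For reference, switching the converter is done through the standard Kafka Connect converter properties in the connector configuration; a minimal sketch (whether to override the key converter as well depends on your setup):

```
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
```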
Both options are valid:

https://camel.apache.org/components/4.0.x/salesforce-component.html#_endpoint_query_option_replayId
https://camel.apache.org/components/4.0.x/salesforce-component.html#_endpoint_query_option_defaultReplayId

The original Kamelet, used to build the connector, doesn't expose the replayId option: https://github.com/apache/camel-kamelets/blob/main/kamelets/salesforce-source.kamelet.yaml

We can add the option, but just for...
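For context, on a plain Camel route the two options would be set directly on the endpoint URI, along these lines (a sketch; the event topic name is a placeholder):

```
salesforce:subscribe:CustomEvent__e?replayId=10
salesforce:subscribe:CustomEvent__e?defaultReplayId=-1
```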
https://github.com/apache/camel-kamelets/blob/main/kamelets/azure-servicebus-source.kamelet.yaml
https://github.com/apache/camel-kamelets/blob/main/kamelets/azure-servicebus-sink.kamelet.yaml
For Avro, you can transform the POJO into a generic Avro record through this SMT: https://github.com/apache/camel-kafka-connector/blob/main/core/src/main/java/org/apache/camel/kafkaconnector/transforms/SinkPojoToSchemaAndStructTransform.java

For JSON, you need to convert the record JSON in an SMT; you need...
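Registering that SMT follows the usual Kafka Connect transform configuration, roughly like this (a sketch; the alias `pojo` is arbitrary, and the transform may require additional properties, so check the class linked above):

```
transforms=pojo
transforms.pojo.type=org.apache.camel.kafkaconnector.transforms.SinkPojoToSchemaAndStructTransform
```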
As answered on the other issue: you need to set a header on the record, CamelAzureStorageDataLakeFileName. The header name should be "CamelHeader.CamelAzureStorageDataLakeFileName"; it will be transformed while the connector runs.
Yes, it should be a header in the Kafka record. I don't know if I'll have time to write an example.
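For what it's worth, here is a minimal sketch of setting such a header with the plain Kafka producer API (the topic, bootstrap servers, and file name are placeholders):

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SendWithFileNameHeader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("my-topic", "file content");
            // The sink connector reads this header to derive the target file name.
            record.headers().add("CamelHeader.CamelAzureStorageDataLakeFileName",
                    "myfile.txt".getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}
```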
Try adding a header of this form: "CamelHeader.file"
I don't have time to create a reproducer and work on reproducing your use case; it's too much effort. It also requires an Azure Storage account...
You don't need to have camelHeader in the config; you just need the header named "CamelHeader.file". Also, you don't need charset, fileName and keyName: they do nothing.
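Reusing the producer sketch above, setting that header would look like this (the file name is a placeholder):

```java
record.headers().add("CamelHeader.file", "myfile.txt".getBytes(StandardCharsets.UTF_8));
```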