kafka-connect-jdbc
Cannot insert a message longer than 32K to CLOB (Oracle)
Hello,
While testing the kafka-connect-jdbc sink with Oracle, I cannot insert a message longer than 32767 bytes into a CLOB column. I think the problem is that GenericDatabaseDialect.java binds the CLOB column as a string. If the field is shorter than 32767 bytes, the insert works fine; beyond that, setClob must be used instead of setString.
Maybe I'm wrong; please tell me how to work around this problem.
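For illustration, here is a minimal sketch of the difference between the two bindings (my own rough example, not the connector's actual code; the helper name is made up): a plain setString bind fails with ORA-01461 against an Oracle CLOB column once the value exceeds 32767 characters, while a character-stream bind does not.

```java
import java.io.StringReader;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ClobBindSketch {
    // Oracle rejects plain string binds above this size for CLOB columns (ORA-01461).
    private static final int MAX_PLAIN_STRING_BIND = 32767;

    // Hypothetical helper: bind a text value destined for a CLOB parameter.
    static void bindText(PreparedStatement ps, int index, String value) throws SQLException {
        if (value.length() <= MAX_PLAIN_STRING_BIND) {
            // Short values can stay on the simple string bind.
            ps.setString(index, value);
        } else {
            // Stream the characters so the driver binds a CLOB rather than a LONG.
            ps.setCharacterStream(index, new StringReader(value), value.length());
        }
    }
}
```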
Error: ORA-01461: can bind a LONG value only for insert into a LONG column
Driver: ojdbc7.jar
Sink:
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
table.name.format=TEST
topics.regex=^CONNECT-TEST$
auto.create=true
auto.evolve=true
name=TEST-SINK
insert.mode=upsert
pk.mode=record_value
pk.fields=RECORD_ID
connection.url=jdbc:oracle:thin:@x.x.x.x:1521:x
connection.user=x
connection.password=x
Table:
CREATE TABLE "TEST" (
  "RECORD_ID" NUMBER(*,0) NOT NULL,
  "SOURCECODE" CLOB NULL,
  PRIMARY KEY ("RECORD_ID"))

Schema:
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081

{
  "subject": "CONNECT-TEST-value",
  "version": 1,
  "id": 963,
  "schema": "{\"type\":\"record\",\"name\":\"TEST\",\"fields\":[{\"name\":\"RECORD_ID\",\"type\":{\"type\":\"bytes\",\"scale\":0,\"precision\":64,\"connect.version\":1,\"connect.parameters\":{\"scale\":\"0\"},\"connect.name\":\"org.apache.kafka.connect.data.Decimal\",\"logicalType\":\"decimal\"}},{\"name\":\"SOURCECODE\",\"type\":[\"null\",\"string\"],\"default\":null}],\"connect.name\":\"TEST\"}"
}

Topic:
kafka-avro-console-consumer --bootstrap-server x.x.x.x:9092 --topic CONNECT-TEST --offset=0 --partition 0 --max-messages=1
{"RECORD_ID":"\u0001","SOURCECODE":{"string":"GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG…(more than 32K)…GGGG"}}
I think the ORA-01461 error is occurring because of how RECORD_ID is formatted. I don't think this has anything to do with the SOURCECODE string.
Hi wicknicks.
I do not think so. Why, then, are there no errors when SOURCECODE is less than 32K?
See a similar issue in the Spring Framework: https://github.com/spring-projects/spring-framework/issues/16854. They started using setClob for big strings.
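The approach referenced there boils down to something like the following sketch (my own rough illustration under that assumption, not Spring's or the connector's actual code): create a temporary java.sql.Clob and bind it with setClob instead of passing the raw string.

```java
import java.sql.Clob;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SetClobSketch {
    // Hypothetical helper: bind a large string through an explicit CLOB.
    static void bindViaClob(Connection conn, PreparedStatement ps, int index, String value)
            throws SQLException {
        Clob clob = conn.createClob();   // JDBC 4.0 temporary CLOB managed by the driver
        clob.setString(1, value);        // CLOB positions are 1-based
        ps.setClob(index, clob);         // binds as a CLOB, avoiding ORA-01461
        // Callers should free() the Clob after executing the statement.
    }
}
```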
Hello, I have the same issue in "upsert" mode, but in "insert" mode it works fine. I hope this detail helps.
Regards
Any update or workaround on this? I would like something like "insert and ignore on constraint", so I have to use merge. Plain insert throws a lot of ORA-00001: unique constraint (AOO_TESTS.SYS_C005543) violated errors and makes the task fail.
No solution yet? I have run into the same issue. Note that since the error is thrown inside the sink connector, the errors.tolerance=all property does not help.
Did we get any solution to this problem? We cannot go with insert.mode=insert, as it causes problems during updates.
Because I did not find any solution to this, I decided to go with a dirty hack: https://docs.confluent.io/platform/current/connect/transforms/filter-confluent.html
In the connector configuration I added:
transforms: filterSpammer
transforms.filterSpammer.type: io.confluent.connect.transforms.Filter$Value
transforms.filterSpammer.filter.condition: $[?(@.mail_body =~ /.{32001,}/i)]
transforms.filterSpammer.filter.type: exclude
transforms.filterSpammer.missing.or.null.behavior: include
This will skip any message from the topic whose field exceeds 32000 bytes. Of course, this only keeps the connector from breaking, which was my goal; you will lose all messages with fields larger than 32000 bytes.
If the messages are JSON, that will make them unparseable.
Have the same issue here - cannot insert any JSON data into a CLOB database column because of this exception in UPSERT mode:
java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column
INSERT mode seems to work okay.
Do we have any news on this issue? It also seems to have been mentioned in #714.
It seems to be fixed and merged in #925, and released with version 10.0.2 - at least in my tests with Oracle it works now, after I upgraded from 10.0.1 to 10.0.2.
Any solution for this issue? I am facing the same problem. Confluent version: 5.5.x, JDBC driver version: OJDBC8.jar, Oracle version: 12.2.0.1.
@aashokggupta, which version of JDBC connector are you using?
Upgrade to the latest version of the connector; it should work there.
Where can I check the version of the JDBC connector? Can you please share? I am using version 5.5.1 of the JDBC connector.
I solved this problem; I hope it can serve as a reference for you. If the target table's column is defined as NCLOB instead of CLOB, the error does not occur. I am not yet sure why.
Hi all, any news about this issue? I get the error on version 10.7.4. The NCLOB type also does not work for me.