
Error ORA-01461 when BLOB writes to Oracle

Open xionglizhi opened this issue 4 years ago

This fix solves the CLOB problem, but BLOB writes still fail with the same error.


[2020-10-19 10:06:09,099] INFO Setting metadata for table "ZPF_TEST_101301_DY" to Table{name='"ZPF_TEST_101301_DY"', type=TABLE columns=[Column{'id', isPrimaryKey=true, allowsNull=false, sqlType=NUMBER}, Column{'depart', isPrimaryKey=false, allowsNull=true, sqlType=VARCHAR2}, Column{'name', isPrimaryKey=false, allowsNull=true, sqlType=VARCHAR2}, Column{'depart_id', isPrimaryKey=false, allowsNull=true, sqlType=VARCHAR2}, Column{'sal_id', isPrimaryKey=false, allowsNull=true, sqlType=VARCHAR2}, Column{'salary', isPrimaryKey=false, allowsNull=true, sqlType=NUMBER}, Column{'sex', isPrimaryKey=false, allowsNull=true, sqlType=VARCHAR2}, Column{'update_time', isPrimaryKey=false, allowsNull=true, sqlType=DATE}, Column{'wjnr', isPrimaryKey=false, allowsNull=true, sqlType=BLOB}]} (io.confluent.connect.jdbc.util.TableDefinitions:64)

[2020-10-19 10:06:09,137] WARN Write of 22 records failed, remainingRetries=10 (io.confluent.connect.jdbc.sink.JdbcSinkTask:92)

java.sql.BatchUpdateException: ORA-01461: can bind a LONG value only for insert into a LONG column

at oracle.jdbc.driver.OraclePreparedStatement.executeLargeBatch(OraclePreparedStatement.java:10032)
at oracle.jdbc.driver.T4CPreparedStatement.executeLargeBatch(T4CPreparedStatement.java:1364)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:9839)
at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:234)
at io.confluent.connect.jdbc.sink.BufferedRecords.executeUpdates(BufferedRecords.java:219)
at io.confluent.connect.jdbc.sink.BufferedRecords.flush(BufferedRecords.java:185)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:109)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:73)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:84)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:545)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:325)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:200)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

xionglizhi avatar Oct 19 '20 03:10 xionglizhi

Did you find a solution to this? I am facing this issue as well in upsert mode.

vijayanandnandam avatar Feb 22 '22 18:02 vijayanandnandam

kafka-connect-jdbc-10.7.1.jar, ojdbc8-19.7.0.0.jar: I am facing this issue as well in merge mode.

18015290123 avatar May 26 '23 03:05 18015290123

I solved this problem; I hope it can serve as a reference for you. If the target table's column is defined as NCLOB instead of CLOB, the error no longer occurs. I don't yet understand why.
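The workaround described above can be sketched as DDL for the target table. This is only an illustration of the commenter's report, not an official fix: the table and column names here (`my_table`, `doc_content`) are hypothetical, and whether NCLOB helps may depend on the Oracle JDBC driver version and character-set settings.

```sql
-- Hypothetical target table for the JDBC sink connector.
-- Per the comment above, declaring the large-text column as NCLOB
-- rather than CLOB reportedly avoids ORA-01461 on insert.
CREATE TABLE my_table (
  id          NUMBER PRIMARY KEY,
  doc_content NCLOB   -- NCLOB instead of CLOB, as the commenter suggests
);
```

Note that Oracle restricts changing the type of an existing LOB column in place; for an existing table, the usual approach is to add a new NCLOB column, copy the data across, and drop the old column.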

18015290123 avatar May 31 '23 02:05 18015290123