xiaofan2012
Yes, I'm looking at the code
[Feature][Flink] DataSource sql generated supports cdc and jdbc task different type conversion rules
Support SQL field type mapping for mainstream connectors such as flink-cdc, flink-jdbc, flink-hive, and flink-hudi.
Purpose: the SQL generation function should automatically generate synchronization SQL (with type mapping) based on the connector. Plan: first support field type mapping for SQL generated from JDBC-type sources.
I can do the backend. The frontend?
The front end passes the connector type (such as CDC, JDBC, Hive, Hudi) when generating SQL statements, and the back end maps the corresponding field types according to each connector's official documentation...
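To make the back-end mapping concrete, here is a minimal, hypothetical sketch of a lookup table that converts MySQL JDBC column types to Flink SQL types. The class and method names are illustrative (not from this project); the individual mappings follow the commonly documented flink-connector-jdbc conventions for MySQL.

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

/** Hypothetical sketch: map MySQL JDBC column types to Flink SQL types. */
class JdbcTypeMapping {
    private static final Map<String, String> MYSQL_TO_FLINK = new HashMap<>();
    static {
        MYSQL_TO_FLINK.put("TINYINT", "TINYINT");
        MYSQL_TO_FLINK.put("INT", "INT");
        MYSQL_TO_FLINK.put("BIGINT", "BIGINT");
        MYSQL_TO_FLINK.put("FLOAT", "FLOAT");
        MYSQL_TO_FLINK.put("DOUBLE", "DOUBLE");
        MYSQL_TO_FLINK.put("DATETIME", "TIMESTAMP(3)");
        MYSQL_TO_FLINK.put("VARCHAR", "STRING");
        MYSQL_TO_FLINK.put("TEXT", "STRING");
    }

    /** Falls back to STRING for unknown types so the generated DDL still parses. */
    static String toFlinkType(String jdbcType) {
        return MYSQL_TO_FLINK.getOrDefault(jdbcType.toUpperCase(Locale.ROOT), "STRING");
    }
}
```

A real implementation would load these rules per connector rather than hard-coding one dialect.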
yes
If we want to add flink-cdc type mapping, should we add a new flink-cdc-meta module and then put the corresponding logic into flink-cdc-meta?
Is it possible to maintain separate conversion logic for each source (e.g. flink-cdc, jdbc, hive, hudi), with each connector owning its corresponding conversion rules?
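One way to keep per-connector rules separate is a small strategy interface, with one implementation per connector. This is only a sketch of the idea discussed above; the interface and class names are made up for illustration, and the rules shown are a tiny subset.

```java
import java.util.Locale;

/** Hypothetical sketch: one conversion strategy per connector type. */
interface TypeConvert {
    /** Returns the Flink SQL type for a source column type. */
    String convert(String columnType);
}

/** flink-cdc (MySQL) rules live here and nowhere else. */
class MysqlCdcTypeConvert implements TypeConvert {
    @Override
    public String convert(String columnType) {
        switch (columnType.toUpperCase(Locale.ROOT)) {
            case "DATETIME": return "TIMESTAMP(3)";
            case "BIGINT":   return "BIGINT";
            case "VARCHAR":  return "STRING";
            default:         return "STRING"; // safe fallback
        }
    }
}

/** Hive rules live in their own class, so changes never leak across connectors. */
class HiveTypeConvert implements TypeConvert {
    @Override
    public String convert(String columnType) {
        switch (columnType.toUpperCase(Locale.ROOT)) {
            case "DECIMAL": return "DECIMAL(38, 18)";
            case "STRING":  return "STRING";
            default:        return "STRING"; // safe fallback
        }
    }
}
```

The SQL generator would then pick the strategy by connector type, which keeps the per-source rules independently testable.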
I found that the JsonDebeziumDeserializationSchema class has a problem when deserializing the Oracle NUMBER type. What is a good way to deal with it?...
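A workaround that is often suggested for this symptom (stated here as an assumption, not verified against your exact flink-cdc and Debezium versions): Debezium's default "precise" decimal handling encodes NUMBER/DECIMAL values as base64 BigDecimal bytes in the JSON, which looks garbled after deserialization. You can ask the JSON converter to emit plain numerics instead, or set `decimal.handling.mode=string` in the Debezium properties.

```java
import java.util.HashMap;
import java.util.Map;

import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;
import org.apache.kafka.connect.json.DecimalFormat;
import org.apache.kafka.connect.json.JsonConverterConfig;

// Emit NUMBER/DECIMAL values as plain JSON numbers instead of
// base64-encoded BigDecimal bytes (Debezium's "precise" default).
Map<String, Object> converterConfig = new HashMap<>();
converterConfig.put(JsonConverterConfig.DECIMAL_FORMAT_CONFIG,
        DecimalFormat.NUMERIC.name());
JsonDebeziumDeserializationSchema schema =
        new JsonDebeziumDeserializationSchema(false, converterConfig);
```

Whether `NUMERIC` or `decimal.handling.mode=string` is preferable depends on whether downstream consumers can tolerate floating-point rounding; `string` preserves exact values at the cost of type fidelity.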
"consistent_bucket_write: test.fin_ipr_inmaininfo_test (1/2)#0" Id=89 TIMED_WAITING on java.util.LinkedList@37d9fd7
    at java.lang.Object.wait(Native Method)
    - waiting on java.util.LinkedList@37d9fd7
    at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:924)
    at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:692)
    at org.apache.hadoop.hdfs.DFSOutputStream.hsync(DFSOutputStream.java:587)
    at org.apache.hadoop.fs.FSDataOutputStream.hsync(FSDataOutputStream.java:145)
    at org.apache.hadoop.fs.FSDataOutputStream.hsync(FSDataOutputStream.java:145)
    at org.apache.hudi.common.table.log.HoodieLogFormatWriter.flush(HoodieLogFormatWriter.java:261)
    at org.apache.hudi.common.table.log.HoodieLogFormatWriter.appendBlocks(HoodieLogFormatWriter.java:194)
    at org.apache.hudi.io.HoodieAppendHandle.appendDataAndDeleteBlocks(HoodieAppendHandle.java:479)...